Artificial intelligence is a hot topic here at AfricaCom 2018 in Cape Town, and we just took a look at whether you would trust artificial intelligence to be your lawyer. That discussion left me with a fundamental question: should we entrust artificial intelligence to make big decisions?
AI is quite amazing: you give it data, it figures out patterns in the data, learns from them, and keeps getting better as time goes on. So then why shouldn't we trust it to make decisions that may even impact our lives?
Because AI can also be biased!
The one thing that makes most people biased is emotion. Think about it: if you like pizza, then you are more likely to order pizza when it's your night to cook supper while on a trip with family or friends. That's not a big deal, since the other people with you will actually get to see what they are missing out on. But if you are deciding whether someone goes to jail or not, then it really matters whether you are biased.
You're probably thinking, "But Rufaro, artificial intelligence can't be biased because it has no emotion. It's just logic, making decisions based on what is fair." And there is the problem: how does it know what is fair and what isn't? Who taught it? Whoever the teacher is, they most likely have biases of their own, and those biases might cause them to select biased data for the artificial intelligence to learn from.
Sometimes it's not deliberate, like a teacher feeding it biased data. It could come from experiences that the AI has in the world that end up causing it to be unfair when making decisions. Take a look at this example of how artificial intelligence can be biased.
How artificial intelligence can be biased
If you take out your phone and type out "The doctor said", chances are that one of the predicted words will be "he" and you'll most likely not see "she". Does that mean that there are no female doctors? Absolutely not. But up until recently, the norm was to associate doctors with men, just as nurses were associated with women.
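To see how a keyboard ends up with that kind of prediction, here is a toy sketch in Python. The tiny corpus below is entirely made up for illustration; real keyboards use far bigger models, but the principle is the same: the most frequent word in the training data wins.

```python
from collections import Counter

# A made-up corpus that over-represents "he" after "doctor said",
# mimicking skewed real-world training text.
corpus = [
    "the doctor said he would call",
    "the doctor said he needs rest",
    "the doctor said he was busy",
    "the doctor said she would call",
]

# Count which word follows the phrase "doctor said".
counts = Counter()
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 2):
        if words[i] == "doctor" and words[i + 1] == "said":
            counts[words[i + 2]] += 1

# Predict the most frequent next word, like a simple keyboard model would.
prediction = counts.most_common(1)[0][0]
print(prediction)  # "he" -- because the data was skewed, not because of logic
```

The model isn't being malicious; it is faithfully reproducing the imbalance it was fed.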
Now, text prediction is just like our pizza scenario earlier: it's a small decision that most likely won't hurt anyone, even though it clearly shows bias. So let's take a look at AI being unfair while making big, life-altering decisions in the legal system.
Earlier this year, Wired reported that crime-predicting algorithms in America were more likely to incorrectly categorise black defendants as being at high risk of committing a further offence. The researchers who uncovered this went as far as to say the algorithms were only about as accurate as an online poll of people with no training in criminal justice.
Both the text-prediction and crime-prediction examples make that computer science saying ring true: garbage in, garbage out.
If AI learns from us that doctors are mostly men, then it will naturally become biased towards the same thinking. Likewise, if it learns that society thinks most black people are criminals, then it will most likely mark those who have committed an offence as being at high risk of committing another one, regardless of other factors.
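The "garbage in, garbage out" point can be sketched the same way. The records below are invented for illustration: the labels are deliberately skewed, as if one group had simply been watched more closely, and a naive risk model inherits that skew wholesale.

```python
# Hypothetical historical records: (group, reoffended) pairs.
# The labels are deliberately skewed to mimic biased record-keeping:
# group "A" was policed more heavily, so more offences were recorded.
records = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def risk_score(group, data):
    """Naive model: predicted risk is just the group's historical rate."""
    outcomes = [reoffended for g, reoffended in data if g == group]
    return sum(outcomes) / len(outcomes)

print(risk_score("A", records))  # 0.75 -- the model reproduces the skew
print(risk_score("B", records))  # 0.25
```

Nothing in the code is unfair on its own; the unfairness lives entirely in the data it was handed, which is exactly the trap the crime-prediction algorithms fell into.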
So should you really not trust AI to make big decisions?
As with most things, there are at least two extremes. Here we looked at the worst case of bias, which would naturally lead us to say that we shouldn't trust AI with big decisions. However, mankind has somehow found a way to take one of the beings in this universe most likely to be biased, hand it people's lives in the justice system, and have it be as fair as a person can be.
Granted, the justice system is not perfect, but in most cases it is fair. To increase the fairness of the system, we could remove the human element and replace it with AI. And to make sure we can trust the AI as much as, or even more than, we trust people, maybe we should train it on a carefully selected set of data, the same way we raise our children.
And hopefully, that good data will cause the artificial intelligence to be biased towards fairness.
I'm really interested in hearing your thoughts on this topic. Do you think we should trust AI with big decisions? What would it take for us to trust AI fully with decisions that impact our lives? Feel free to leave a comment below and let's continue the discussion.
Oh, and don't hold it against me if you do have "she" as one of the predicted words in our doctor example. Things are slowly evolving and you just might be using one of the keyboards that aren't that biased anymore, lol.