You Shouldn’t Trust Artificial Intelligence With Making Big Decisions, Thoughts From AfricaCom 2018

Rufaro Madamombe

Artificial intelligence is a hot topic here at AfricaCom 2018 in Cape Town, and we just took a look at whether you would trust artificial intelligence to be your lawyer. That discussion left me with a fundamental question: should we entrust artificial intelligence to make big decisions?

AI is quite amazing: you give it data, it figures out the patterns in that data, learns from them, and keeps getting better as it sees more. So why shouldn’t we trust it to make decisions that may even impact our lives?

Because AI can also be biased!

The one thing that makes most people biased is emotion. Think about it: if you like pizza, then you are more likely to order pizza when it’s your night to cook supper on a trip with family or friends. That’s not a big deal, since the other people with you just get to see what they’ve been missing out on. But if you are deciding whether someone goes to jail or not, then it really matters whether you are biased.

You’re probably thinking: but Rufaro, artificial intelligence can’t be biased because it has no emotion. It’s just logic, making decisions based on what is fair. And there is the problem: how does it know what is fair and what isn’t? Who taught it? Whoever the teacher is, they most likely have biases of their own, and those biases might lead them to select biased data for the artificial intelligence to learn from.

Sometimes it isn’t deliberate, like a teacher knowingly feeding it biased data. The bias can also come from the experiences the AI has in the world, which end up causing it to be unfair when making decisions. Take a look at this example of how artificial intelligence can be biased.

How artificial intelligence can be biased

If you take out your phone and type out “The doctor said ”, chances are that one of the predicted words will be “he”, and you’ll most likely not see “she”. Does that mean there are no female doctors? Absolutely not, but until recently the norm was to associate doctors with men and, likewise, nurses with women.
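To make the idea concrete, here is a minimal sketch, in Python, of how a word predictor can end up with that bias: it simply counts what usually follows a phrase in the text it learned from. The tiny corpus below is completely made up for illustration, and real keyboards use far more sophisticated models, but the principle is the same: the prediction just reflects the data.

```python
from collections import Counter

# A tiny, invented "training corpus". Real keyboards learn from vast amounts
# of text, but the model can only reflect what that text contains.
corpus = (
    "the doctor said he will be late. "
    "the doctor said he needs more tests. "
    "the doctor said he is on call. "
    "the doctor said she will call back. "
    "the nurse said she is ready."
)

words = corpus.lower().replace(".", "").split()

# Count which word follows the phrase "doctor said" in the training text.
next_word_counts = Counter(
    words[i + 2]
    for i in range(len(words) - 2)
    if words[i] == "doctor" and words[i + 1] == "said"
)

# The "prediction" is simply the most common follower in the data.
print(next_word_counts.most_common())
# [('he', 3), ('she', 1)] -> "he" gets suggested first, not because the model
# "believes" anything about doctors, but because the data it saw was skewed.
```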

Now, text prediction is just like our pizza scenario earlier: it’s a small decision that most likely won’t hurt anyone, even though it clearly shows bias. So let’s take a look at AI being unfair while making big, life-altering decisions in the legal system.

Earlier this year, Wired reported that crime-predicting algorithms in America were more likely to incorrectly categorise black defendants as having a high risk of committing a further offence. The researchers who uncovered this went as far as to say that the algorithms were only about as good as an online poll of people with no training in criminal justice.

Both the text-prediction and crime-prediction examples really do make that old computer science saying ring true: garbage in, garbage out.

If AI learns from us that doctors are mostly men, then it will naturally pick up the same thinking. Likewise, if it learns from data shaped by a society that treats black people as likely criminals, then it will most likely mark black defendants who have committed an offence as being at high risk of committing another one, regardless of other factors.
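Here is another deliberately simplified sketch of that garbage-in, garbage-out effect, again in Python with invented numbers (it is not how any real risk-scoring tool works). If the historical labels are skewed against one group, say because that group was policed and arrested more heavily, then a model that learns from those labels will score that group as riskier even when everything else about two defendants is identical.

```python
from collections import defaultdict

# Invented historical records: (group, reoffended) pairs. The labels themselves
# are biased, e.g. because group "A" was arrested more often, not because its
# members actually reoffend more.
history = (
    [("A", True)] * 60 + [("A", False)] * 40 +
    [("B", True)] * 30 + [("B", False)] * 70
)

# "Training": estimate a reoffending rate per group from the biased labels.
counts = defaultdict(lambda: [0, 0])  # group -> [reoffended, total]
for group, reoffended in history:
    counts[group][0] += int(reoffended)
    counts[group][1] += 1

def predicted_risk(group: str) -> str:
    """Label a defendant high risk if their group's historical rate exceeds 50%."""
    reoffended, total = counts[group]
    return "high risk" if reoffended / total > 0.5 else "low risk"

# Two defendants in otherwise identical circumstances get different labels,
# purely because of the group they belong to.
print(predicted_risk("A"))  # high risk
print(predicted_risk("B"))  # low risk
```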

So should you really not trust AI to make big decisions?

As with most things, there are at least two extremes. Here we looked at the worst cases of bias, which would naturally lead us to say that we shouldn’t trust AI with big decisions. However, humankind has somehow found a way to take one of the most bias-prone beings in this universe, a human, put people’s lives in its hands in the justice system, and have it be about as fair as a person can be.

Granted, the justice system is not perfect, but in most cases it is fair. To increase the fairness of the system, we could remove the human element and replace it with AI. And to make sure we can trust the AI just as much as, or even more than, we trust the people, maybe we should train it on a carefully selected set of data, the same way we raise our children.

And hopefully, that good data will cause the artificial intelligence to be biased towards fairness.

I’m really interested in hearing your thoughts on this topic. Do you think we should trust AI with big decisions? What would it take for you to fully trust AI with decisions that impact our lives? Feel free to leave a comment below and let’s continue the discussion.

Oh, and don’t hold it against me if “she” does show up as one of the predicted words in our doctor example; things are slowly evolving, and you just might be using one of the keyboards that aren’t that biased anymore lol.

6 comments

  1. Anonymous

    Unsubscribe me please

    1. Rufaro Madamombe

      Hi,

      Sorry, I’m not sure what exactly you want to be unsubscribed from.

      If you don’t want to receive the notifications anymore, you can follow these instructions to unsubscribe from getting the notifications: https://documentation.onesignal.com/docs/unsubscribe-from-notifications

      If not, please send me an email on news@techzim.co.zw so that I can better understand how to help.

  2. Anonymous

    You can design a car badly (inefficient, mechanical failures) or well. However, at this point in time a car in generic terms is a good solution to a particular set of transport problems.

    Is AI with bias not the same? Poorly implemented AI, yes, but not an argument that AI is generically not a good solution to a set of problems.

    Should we trust AI? Progressively based on performance. At one point there was a human walking in front of a car to ensure no accidents in the brave new world of the automobile. We’re about there.

    1. Rufaro Madamombe

      Hey, thanks for sharing your thoughts.

      True, AI is a good solution to some set of problems.

      It’s just that, like you said, if we implement it poorly then it won’t be good at solving those problems.

      And when it comes to dealing with outcomes of people’s lives then this whole discussion of ethics comes into play.

      We are going to get to a point where AI can be trusted just like a human (when it comes to making life altering decisions) but for now we’re just not there yet.

  3. E.Mhetu

    In what sense do we say this system is artificially intelligent? Artificial intelligence is the theory and development of computer systems that are able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. So in other words, I think we should trust artificial intelligence in making big decisions because machines don’t have feelings.

  4. E.Mhetu

    I love your blog, keep it up, this is nice.
