Algorithms That Detect Hate Speech On Social Media Could Be Biased Against Black People

Farai Mudzingwa

What happens when Artificial Intelligence (AI) isn't so ethical? It's a question that has been asked before, but until now we haven't really had to answer it.

Two recent studies have shown that AI trained to identify hate speech contains racial biases. One study showed that leading AI models used to process hate speech are 1.5 times likelier to flag tweets from African Americans as hateful or offensive, and 2.2 times likelier to flag tweets "written in African American English."
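To make those numbers concrete, here is a minimal sketch of how such a flagging-rate ratio could be measured. The tweets and the keyword "classifier" below are made up for illustration; they are not the models or datasets from either study.

```python
# Hypothetical sketch: compare how often a hate-speech classifier flags
# tweets written in African American English (AAE) versus other tweets.

def flag_rate(tweets, classify):
    """Fraction of tweets the model labels as hateful or offensive."""
    flagged = sum(1 for t in tweets if classify(t) == "offensive")
    return flagged / len(tweets)

def classify(tweet):
    # Toy stand-in for a trained model: naive keyword matching.
    offensive_terms = {"hate", "stupid"}
    return "offensive" if offensive_terms & set(tweet.lower().split()) else "benign"

# Made-up example tweets, purely for illustration.
aae_tweets = ["we good fam", "this movie stupid good", "yall hate to see it"]
other_tweets = ["we are doing well", "i hate waiting in queues", "a lovely day"]

ratio = flag_rate(aae_tweets, classify) / flag_rate(other_tweets, classify)
print(f"AAE tweets are flagged {ratio:.1f}x as often")
```

On this toy data the ratio comes out at 2.0x, purely because crude keyword matching penalises benign AAE phrasing like "hate to see it". The studies report ratios of a similar shape (1.5x and 2.2x) from real models and real datasets, which is exactly the kind of dialect-sensitive false positive that points to bias rather than genuine hate speech.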

Another study reportedly found bias against Black speech in five academic datasets totalling around 155,800 Twitter posts.

The debate around AI and the limits of what it should be allowed to police has always been tied to the risk that AI amplifies human biases, especially the biases of those at the forefront of developing these technologies.

It's not entirely clear whether these are the exact same content moderation systems used by Facebook, Twitter and Google, but some have argued the flaws may be similar, since tech giants turn to academics when trying to enforce better standards around speech. The fact that these flaws were found in widely used academic datasets suggests the same flaws could be present on popular social media platforms.

The bigger problem with many implementations of AI will be keeping flawed human decisions from being reflected in the algorithms themselves. For complex, context-specific issues such as hate speech, it might prove better to keep a significant portion of the work out of the hands of AI and machine learning. Maybe AI will improve and one day be able to tackle these issues with less controversy. What are your thoughts?
