Image credit: Pixabay
In this internet age, thanks to social media (well, and other platforms), people have somehow gained the confidence to exercise their right to ‘freedom of speech’. Then again, I could be wrong; people might have always exercised that right, and the internet has simply made it easier for the rest of us to see it.
But anyway, that’s not the point. The point is that while we’re out here celebrating our ability to say whatever we like, whenever we like, we have in the process infringed on a number of other rights. In fact, we have grown mean – especially towards strangers, or people we are not likely to meet in this lifetime at least.
Due to this keyboard warrior syndrome that has gripped most of us, social media platforms are forever trying to find ways to curb the use of offensive or foul language on their sites.
Instagram recently adopted an AI system to ensure that their platform is, in their own words, kept “a safe place for self-expression”. Actually, it only makes sense for Instagram to follow in Facebook’s footsteps, considering that it was bought by Facebook – and who knows, maybe WhatsApp will be next (though I’m pretty sure that would be one heck of a challenge).
Because we have written quite a number of articles on AI which you can refer to, I’m not going to dive into that level of detail. But the article on this link covers the basics of AI.
Basically, what Insta did was hire a team of contractors, each of whom would determine whether a specific comment from their sample of about 2 million comments was appropriate or not. If not, they would then file it into one of the different ‘offence categories’ according to Instagram’s Community Guidelines. To make the process thorough, each of these ‘raters’ had to be at least bilingual, and each comment had to be rated at least twice.
An algorithm was then built on those results, then tested and tweaked to improve it. Once they were satisfied, they launched the new feature.
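To get a feel for the process, here’s a toy sketch of one small step in such a pipeline: turning the double (or triple) ratings on each comment into a single training label. Instagram hasn’t published its actual code, so the function, category names and aggregation rule below are all my own illustration.

```python
# Hypothetical sketch: combining multiple rater labels for one comment
# into a single label, since each comment was rated at least twice.
# Category names ("ok", "harassment", "hate") are made up for illustration.

def aggregate_ratings(ratings):
    """Return 'ok' only if every rater agreed the comment was appropriate;
    otherwise return the offence category most raters chose."""
    if all(label == "ok" for label in ratings):
        return "ok"
    # Keep only the offence labels, then pick the most common one.
    offences = [label for label in ratings if label != "ok"]
    return max(set(offences), key=offences.count)

# One comment as seen by two or three bilingual raters:
print(aggregate_ratings(["ok", "ok"]))            # -> ok
print(aggregate_ratings(["harassment", "ok"]))    # -> harassment
print(aggregate_ratings(["hate", "hate", "ok"]))  # -> hate
```

Labels aggregated this way would then be fed to whatever classifier Instagram trained – the article doesn’t say which, so I won’t guess.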
However, so far the feature is only available in English (next will be Spanish, Portuguese, Arabic, French, German, Russian, Japanese, as well as Chinese) – and we all know how long it’s going to take for it to be available in Shona, Ndebele or any other local language, which means we still have to hang in there when it comes to trolling in the vernacular.
Also, one has to note that the fact that a troll can still see their nasty, racist or otherwise abusive comment doesn’t necessarily mean they have outsmarted the system. Insta has set it up so that the person who typed the comment still sees it on their own phone while everyone else can’t. Maybe one can only figure it out on noticing that no one is engaging with them – which would be a difficult signal to rely on in Zim, considering how you can post something and still be ignored for days!
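That “only the author still sees it” trick boils down to a simple visibility rule. A minimal sketch of the idea, with data shapes and names entirely my own (not Instagram’s code):

```python
# Sketch of shadow-hiding: a hidden comment is shown only to its author,
# so the troll never notices it was filtered. Field names are illustrative.

def visible_comments(viewer, comments):
    """Return the comments this viewer should see: every comment that
    isn't hidden, plus the viewer's own hidden comments."""
    return [
        c for c in comments
        if not c["hidden"] or c["author"] == viewer
    ]

comments = [
    {"author": "alice",   "text": "Nice photo!",       "hidden": False},
    {"author": "troll42", "text": "<something nasty>", "hidden": True},
]

# The troll sees both comments; everyone else sees only alice's.
print([c["author"] for c in visible_comments("troll42", comments)])
print([c["author"] for c in visible_comments("bob", comments)])
```

The point of the design is exactly what the paragraph above describes: to the author, nothing looks blocked, so there’s no obvious feedback loop to game.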
Still on this, I might as well mention that Insta also introduced an anti-spam tool, which uses AI as well, back in October last year. For reasons best known to themselves, they only officially mentioned it now, along with this feature that blocks offensive language.
However, in all this, the dilemma would be: which is better, an aggressive system that will likely block stuff that it shouldn’t, or a passive one that will probably allow stuff that it should block?
Which one would you prefer?