Though WhatsApp gets a lot of flak for the way it deals with fake news, a new report suggests the company is not just scratching its head and mulling over a solution. It turns out WhatsApp removes a significant number of accounts that spread false news and abusive content on the world's most popular messaging service.
WhatsApp held a press conference in India, where it told the press that it removes two million accounts from the service each month. 20% of those two million accounts are actually removed during registration. WhatsApp did not disclose how it flags accounts as potential abusers during registration, which is a bit strange. Maybe it checks new numbers against a database of accounts that have been reported and removed before, but we will never know.
How do they do it?
WhatsApp is using a mix of human intervention along with machine learning to delete these accounts.
- 25% of the banned accounts are removed by humans
- 75% removed by algorithms that seek out malicious activity
WhatsApp went on to specify some of the types of content that will get users banned:
"Some may want to distribute click-bait links designed to capture personal information, while others want to promote an idea. Regardless of the intent, automated and bulk messaging violates our terms of service, and one of our priorities is to prevent and stop this kind of abuse."
Considering that WhatsApp had 1.5 billion users the last time the numbers were announced, this means the service removes only about 0.133% of accounts every month. Doesn’t sound as significant as it did when you heard two million, right? In fact, that number is so small that, multiplied by 12, it comes to around 1.6% of accounts per year. Maybe this is why they felt reporting the raw number of accounts was better than stating any percentages.
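The arithmetic behind those percentages is simple enough to sanity-check yourself; here is a short sketch (the 2 million and 1.5 billion figures come from the report above):

```python
# Figures from WhatsApp's announcement and its last reported user count.
banned_per_month = 2_000_000
total_users = 1_500_000_000

# Share of the user base banned each month, then extrapolated to a year.
monthly_pct = banned_per_month / total_users * 100
yearly_pct = monthly_pct * 12

print(f"{monthly_pct:.3f}% of accounts banned per month")  # ~0.133%
print(f"{yearly_pct:.1f}% of accounts banned per year")    # ~1.6%
```

Note the yearly figure is a naive extrapolation: it assumes the monthly rate stays constant and ignores that the user base itself grows over the year.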