
Millions of user messages are reportedly reviewed by WhatsApp moderators


As much as WhatsApp (and its parent company Facebook) has received endless criticism of late over its more obviously skeevy privacy practices, we've always given it credit for its more laudable efforts, namely its end-to-end encryption on sent and received messages. We recently praised it for extending that protection to cloud-stored chat backups.

But now even that speck of positivity has been sullied. It turns out the messaging app's privacy protections aren't quite as watertight as advertised, with millions of user messages being reviewed by WhatsApp moderators.

WhatsApp giveth and WhatsApp taketh away

On Wednesday, ProPublica (a nonprofit newsroom self-described as “investigat[ing] abuses of power”) released an exhaustive report detailing WhatsApp’s privacy promises compared to its actual practices, paying particular attention to the messenger service’s end-to-end encryption feature.

With that encryption in place, users have long (and understandably) believed their messages to be entirely safe from prying eyes, even those of WhatsApp and Facebook themselves. ProPublica's findings bluntly contradict this: around 1,000 people employed by Facebook moderate messages flagged by users as potentially harmful or abusive.

Users on the receiving end of messages can report them for a number of reasons: scams, fraud, child sexual abuse material (CSAM), and other potentially illegal content. From there, the message is lumped together with the sender’s previous four messages in that chat thread and sent off to WhatsApp’s review system. 
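To make that bundling step concrete, here's a minimal sketch of it in Python. Every name below is ours, invented for illustration from ProPublica's description; none of it is WhatsApp's actual code.

```python
def build_report_bundle(thread: list[str], flagged_index: int) -> list[str]:
    """Collect the flagged message plus up to four of the sender's
    preceding messages in the same thread, per ProPublica's description."""
    start = max(0, flagged_index - 4)
    return thread[start:flagged_index + 1]

# e.g. reporting the sixth message in a chat forwards messages 2-6 for review
bundle = build_report_bundle(["hi"] * 10, flagged_index=5)
```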

User reports land automatically in a 'reactive' queue, while a 'proactive' queue is populated by AI that sorts through a user's unencrypted metadata, such as their name, phone number, profile picture and more. Human reviewers then work through both queues, apparently scrutinising millions of messages a week.
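Purely as illustration, that two-queue split might look something like the following. Again, the names and structure are our assumptions, inferred from the report:

```python
from collections import deque

reactive_queue: deque = deque()   # filed automatically when a user reports a message
proactive_queue: deque = deque()  # fed by AI scanning unencrypted metadata

def enqueue_user_report(bundle: list[str]) -> None:
    # A recipient hit "report": the five-message bundle goes straight
    # into the reactive queue.
    reactive_queue.append(bundle)

def enqueue_ai_flag(metadata: dict) -> None:
    # Automated scanning of names, phone numbers, profile pictures and
    # other unencrypted metadata feeds the proactive queue.
    proactive_queue.append(metadata)

def next_item_for_reviewer():
    # Human moderators work through both queues.
    for queue in (reactive_queue, proactive_queue):
        if queue:
            return queue.popleft()
    return None
```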

But, it’s still encrypted

It is important to note that the end-to-end encryption between sender and receiver is never broken during this process. Reporting happens on the recipient's device, which has already decrypted the messages in order to display them; it simply forwards plaintext copies it already holds. WhatsApp still cannot read anything beyond the handful of messages that are sent to Facebook after a report.

Members of the moderation team (WhatsApp doesn't actually call it that, but then, it wouldn't, would it?) examine information in both queues, looking for policy violations. They can then do nothing if the flagged message turns out to be fine, ban the offending user if it breaks policy, or put them on a watch list if the message doesn't technically break any rules but still looks suspicious.
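Those three outcomes map onto a simple decision function. Once more, this is just our sketch of the logic ProPublica describes, not anything from WhatsApp itself:

```python
from enum import Enum, auto

class Verdict(Enum):
    NO_ACTION = auto()  # the flagged message turns out to be fine
    BAN = auto()        # a clear policy violation
    WATCHLIST = auto()  # technically allowed, but suspicious

def triage(breaks_policy: bool, looks_suspicious: bool) -> Verdict:
    """Mirror the three reviewer outcomes described in the report."""
    if breaks_policy:
        return Verdict.BAN
    if looks_suspicious:
        return Verdict.WATCHLIST
    return Verdict.NO_ACTION
```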

Nothing stops the messaging app from pointing that same AI at messages nobody has reported. And while there's nothing firm (yet) to indicate that WhatsApp or Facebook is reviewing unflagged messages, the point is that it could be. Having been less than honest about the review system in the first place, we're not going to blame anyone for assuming it is.

A snag in the plan

According to ProPublica, the review process is also rife with blunders. The service spans 180 countries, meaning messages arrive in a wide variety of languages, and one moderator says there are often no native speakers available to examine messages in certain languages. Instead, reviewers have to rely on Facebook's translation tool, which can apparently be so inaccurate that it sometimes mislabels messages in Arabic as being in Spanish, and it can't account for slang, innuendo, or political context.

The automated process has its own share of errors, too, flagging companies selling razors as offering weapons and companies selling bras as 'sexually oriented businesses'. There are also instances of harmless photos being reported, such as a picture of someone's child in the bath, sent to a family member, being flagged as CSAM. It then falls to humans to decide whether AI-flagged material actually breaks policy, such as weighing up whether a severed head in a picture is real or a Halloween prop.

The system is constantly abused too, says another moderator. People prank their friends by changing a group's name to something offensive and then reporting it. "At the worst of it, we were probably getting tens of thousands of those," they said. "They figured out some words the algorithm did not like."

Now, while the intentions behind this system aren't particularly offensive (if Facebook were using the data it dug up through flagged messages for ad-tracking, that would be heinous), it points to a lack of transparency from WhatsApp towards its users.

As Ars Technica pointedly notes, WhatsApp's security and privacy page reads, "WhatsApp has no ability to see the content of messages," which sneakily leaves itself open to interpretation. Not exactly the reassurance you want from a privacy page. The messaging app is already in hot water with the EU over transparency with its users, and this certainly doesn't help its case.
