
Meta assembles a taskforce to eradicate Instagram’s child-abuse content issues


Mark Zuckerberg’s Meta is up to its neck in hot water. The social media parent of WhatsApp and Instagram is facing public scrutiny after a report from The Wall Street Journal (TWSJ) and the Stanford Internet Observatory (SIO) condemned Instagram’s Explore algorithm for ‘promoting’ the sale and distribution of self-generated child sexual abuse material (SG-CSAM).

Meta, of course, has responded swiftly; the severity of the accusations leaves no room for error on its, and by extension Instagram’s, part. Soon after the report was published on Wednesday, Meta established a ‘taskforce’ to investigate (and hopefully root out) the growing networks that advertise and sell SG-CSAM.

Taskforce worthy

The SIO, which was working on a tip from TWSJ, said Instagram is “currently the most important platform for these networks,” with features like its far-too-accurate recommendation algorithm and direct-message functionality enabling their continued growth. Some of these networks are reportedly even run by minors.

Meta’s response to TWSJ is, predictably, fairly nondescript. “We’re continuously exploring ways to actively defend against this behaviour, and we set up an internal task force to investigate these claims and immediately address them,” said a spokesperson for the company.

So Meta is doing something about it, right? Well, sure.

Though if it weren’t for Instagram’s sub-optimal defence systems, it might never have encountered this problem in the first place. Before the report was released, users searching terms related to SG-CSAM were greeted with a warning that “these results may contain images of child sexual abuse,” followed by a button allowing users to “see results anyway.” Instagram has since removed the ability to “see results anyway” after being contacted by TWSJ. How… noble.


Read More: Is 13 too young to have a TikTok or Instagram account?


Essentially, this shows that Instagram’s algorithm is powerful enough to detect searches for imagery that may be harmful to its subjects and others around them, though it’s apparently unable to do anything about it.

In a bid to take as little blame as possible, Meta’s statement referred to a “technical issue” that prevented user reports of SG-CSAM from reaching content reviewers, which has now been fixed. Additionally, it updated its guidance policies on content reviews, which should help the company better identify and remove predatory accounts.

It went on to say that it had already removed a number of search terms that the SIO claimed were being used to find SG-CSAM.

