Stuff South Africa

Can the law stop internet bots from undressing you?

Imagine that you upload a photograph of yourself on holiday to your favourite social media platform. You are dressed in a swimsuit and you are smiling at the camera. Now imagine later coming across this image while scrolling through your newsfeed. You recognise your face and the background and it looks like your photo, but in this image, you are completely naked. There are some inconsistencies – you do not recognise the body in the image – but it is convincing nonetheless.

This might sound like a scene from a Black Mirror episode, but it is in fact a real possibility thanks to tools available on the social media app Telegram, which allow users to upload innocent images of a (clothed) person and request that the person in the image be “digitally undressed” for a fee. Telegram has more than 400 million monthly active users.

While Telegram operates predominantly as a messaging app, it facilitates autonomous programmes (referred to as “bots”), one of which is able to digitally synthesise these deepfake naked images.

Deepfake detection company Sensity recently published research into Telegram. It found that 70% of the bot’s users use it to target women and that, as of the end of July 2020, at least 104,852 fake nude images had been shared in an “image collections” channel available on the app. The total number of user-requested images is likely to be much higher than the number shared publicly. The ease with which such “image manipulation” may be carried out without the knowledge of its victims is alarming.

So: is the use of deepfake bots to produce pseudo naked images legal?

Underage pictures

The Telegram bot has been linked to reports of images which appear to be of underage girls. In this case – if the person in the image is underage – the legal position is clear. Images of real children which are altered to appear nude or sexually explicit are internationally unlawful. The Convention on the Rights of the Child, ratified by 196 countries, requires parties to the convention to take steps to protect children from being sexually exploited and being used in the production of pornographic material.

As long as Telegram removes reported indecent images of children, it is not culpable under current international legal frameworks when a user employs the deepfake bot to produce an indecent image of a child. And it is doubtful that this law makes the bot itself unlawful.

In the UK, international obligations to protect children from sexual exploitation are bolstered by laws prohibiting the production of sexual pseudo-imagery, such as a photoshopped image of a young person appearing naked. The Protection of Children Act (1978) prohibits the creation and distribution of such an image, and Section 160 of the Criminal Justice Act (1988) also makes it an offence for a person to have in their possession a pseudo-image portraying an indecent image of a child.

What about adults?

For women and men over the age of 18, the production of a sexual pseudo-image of a person is not in itself illegal under international law or in the UK, even if it is produced and distributed without the consent of the person portrayed in the image.

This is, as usual, a case of the law playing catch-up. International laws created to protect privacy do not necessarily protect people from this type of abuse. Article 8 of the European Convention on Human Rights, which provides a right to respect for a person’s “private and family life, home and correspondence”, has been used as the basis for domestic laws throughout the UK and Europe to protect photographs, but only if the original image remains unaltered.

You could get a nasty shock. Wichayada Suwanachun/Shutterstock.com

The Telegram bot therefore exploits a legal gap when it comes to deepfake imagery of adults. (Telegram did not respond to our questions about the bot and the images it produces, nor to Sensity’s enquiries as of the publication of its report.) While there are laws that can protect adults from sexual exploitation and abuse via social media, these laws are not as robust as those which protect children, and they do not apply to images produced by Telegram’s AI bots.

For example, in the UK, the phenomenon of revenge porn – the non-consensual sharing of naked and sexual images – is prohibited under the Criminal Justice and Courts Act (2015). But this does not cover situations where an original, innocuous image is altered to appear sexual or naked. The distribution of a Telegram-type image would not be captured under the revenge pornography provisions, even if the person creating the image meant to cause harm and distress to the victim, because an essential component of these provisions is that the perpetrator has used an unaltered image.

For an altered or deepfake image of an adult to fall foul of the law, other elements must be involved. The created image must be regarded as “grossly offensive” (contravening section 127(1) of the Communications Act 2003), and it must be proven that the pseudo-image was sent for the purpose of causing “needless anxiety”. To prove this offence, prosecutors must establish a hostile motive towards the victim: if this type of image was sent as a joke, for example, it is not likely to contravene the act. The elements of this offence are notoriously subjective and difficult to prove.

Given this context, it is perhaps unsurprising that such acts are rarely reported, let alone investigated. Prosecutions for this type of offence are rare, despite government guidelines stipulating that this type of offence can be serious.

Regulating new cyber-crime

The regulation of technology requires the law to keep abreast of rapidly changing and highly complex trends. Telegram is only one example of the ever-growing interest in “deepfake” images and video, and such fakes are likely to become increasingly realistic.

The UK is considering legislation whereby social media platforms could face fines for facilitating such images. The government has proposed to make companies such as Telegram take more responsibility for the safety of their users and tackle harm caused by content or activity on their service. But progress has faltered, and the legislation may not be passed until 2023.

This is unfortunate. Apps which facilitate or produce fake images for general consumption are a dangerous trend which will not dissipate without considerable change to the current legal framework.
