How can laws and regulations stop the digital march of deepfakes?

The production and consumption of deepfakes are skyrocketing, and the chances of a manipulated political statement or pornographic image circulating online are increasing. Experts warn that deepfake technology will only gain ground in the coming years. Calls for change come from many different actors: research agencies, (international) investigative services, victims, the media and politicians all have their own vision of how deepfakes should be tackled in the future. Evert Stamhuis, Professor of Law and Innovation at Erasmus School of Law, explains what he thinks is needed to give this technological issue a proper legal footing.

Several reports define deepfakes as a form of synthetic media: manipulated communications that convey a false message or spurious image. The term covers a wide range of material, from photoshopped images to manipulated video or audio. Initially, deepfake technology was developed for entertainment and had a funny or ‘innocent’ undertone. However, the other side of the coin is now becoming visible. “Artificial intelligence-manipulated imagery (photo and video) offers many opportunities for undesirable or downright harmful or dangerous behaviour. As artificial intelligence continues to improve, deepfakes are becoming increasingly difficult to distinguish from reality”, Stamhuis explains.

Spread has nasty consequences  

“Once spread on social media or photo and video platforms, such fake messages can have nasty consequences. There are plenty of examples of deepfakes that have damaged individuals or weakened political alliances”, Stamhuis adds. Consider, for example, the fake video distributed in March 2022 in which Ukrainian President Zelensky ordered his troops to lay down their arms. Besides twisting political messages, deepfaked pornographic material is becoming increasingly common. In 2020, 93% of the deepfakes detected were pornographic, according to Sensity, a company researching the spread of deepfakes.

For a long time, the deepfake software that uses artificial intelligence to manipulate material could only be found in high-end film studios. Today, such software is readily accessible via the internet.

Current legislation provides guidance  

In early 2022, researchers from Tilburg University published a report on the nature, extent and damage of deepfakes at the request of the Scientific Research and Documentation Centre (WODC). The demand for this study arose from major concerns of the House of Representatives about the increase in deepfake production and its consequences. The report states, among other things, that current Dutch criminal law already provides sufficient tools for regulating deepfakes.  

For instance, Article 139h of the Penal Code, in force since 2020, prohibits the making, possession and disclosure of revenge pornography. The question is whether the Public Prosecutor's Office and the courts also consider the manipulation of visual material to fall under this article, as the letter of the law refers to ‘images of a sexual nature’ and leaves room for interpretation. Although the inclusion of deepfakes under revenge porn is still a grey area, Stamhuis says this does not apply to child pornography. He points to Article 240b of the Penal Code: “Our legislator included virtual child pornography in the criminalisation of distribution in Article 240b. Thus, child pornography deepfakes are also punishable.”

In addition, according to the researchers, deepfakes can be tackled using existing rules on fraud, deception and invasion of privacy. The General Data Protection Regulation (GDPR) could even potentially ban deepfake production at the European level altogether, given that under this regulation, personal data may only be processed for the purpose for which it was collected. Nevertheless, Stamhuis sees objections: “This is a rather impractical idea, as the enforcement of the GDPR has been put in the hands of the Dutch Data Protection Authority, which has too little capacity and only administrative law tools at its disposal.”

To monitor technology, we depend on technology  

According to Stamhuis, tackling deepfakes using legal tools is problematic if the source of the fake material is outside the European Union. “In many cases we depend on the cooperation of digital platforms. Therefore, they increasingly face the expectation or obligation to remove illegal content. Monitoring for illegal content is done using algorithms. Still, that software does not guarantee that all deepfakes are recognised, apart from the question of which deepfakes are illegal and which are mainly unpleasant or undesirable.” 
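The algorithmic monitoring Stamhuis mentions often works by matching uploads against fingerprints of already-known illegal material rather than by "understanding" the content. A minimal, purely illustrative sketch of one such fingerprinting technique, perceptual average hashing, is shown below; all names and the tiny 3×3 "images" are hypothetical, and real systems are far more robust.

```python
# Illustrative sketch of perceptual "average hashing": each image is reduced
# to a bit string (1 where a pixel is brighter than the image's mean), and an
# upload is flagged when its hash is within a few bits of a known hash.
# Images are modelled here as small 2D grayscale grids (hypothetical data).

def average_hash(pixels):
    """Return a bit string: '1' where a pixel is above the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming_distance(h1, h2):
    """Count the positions where two equal-length hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def looks_like_known_content(upload, known_hashes, threshold=3):
    """Flag an upload whose hash is within `threshold` bits of a known hash."""
    h = average_hash(upload)
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)

# A known image, a slightly altered re-upload of it, and an unrelated image.
known = [[10, 200, 30], [220, 40, 250], [15, 230, 20]]
altered = [[12, 198, 33], [219, 42, 248], [14, 233, 22]]
unrelated = [[200, 10, 220], [30, 240, 25], [210, 20, 235]]

db = {average_hash(known)}
print(looks_like_known_content(altered, db))    # True: survives small edits
print(looks_like_known_content(unrelated, db))  # False: no near match
```

The sketch also shows the limitation Stamhuis raises: such matching only recognises material close to something already in the database, so novel deepfakes slip through, and the hash says nothing about whether the content is illegal or merely unpleasant.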

Additionally, Stamhuis points to the proposed European artificial intelligence regulation (AI Act). The proposal does not expand the fight against illegal deepfakes, but it does require individual users to be informed the moment they communicate with a 'fake' character, for example on a commercial website. “A chatbot with a realistic video presentation must therefore be provided with a consumer warning. This regulation assigns supervision and enforcement to an AI authority at the national level”, Stamhuis explains.

Researchers emphasise the consumer market

So, there are several legal avenues for tackling deepfakes. It is the enforcement of these rules, however, that falls short of the scale of the problem. The Tilburg University researchers therefore suggest reducing the enforcement pressure by banning the production, offering, use and possession of deepfake technology in the consumer market. After all, current law does not restrict the making and offering of deepfakes as such, but only their use for specific purposes. Focusing on the consumer side would reduce the pressure on the enforcement chain.

“You made it. You fix it.”  

The issue of banning deepfake technology initially arose because of the rising number of pornographic deepfakes. The problem extends further, however, and Stamhuis sees a pain point especially with deepfakes that are not sexual but undermine our concept of truth. This undermining, he says, has implications for the way we establish 'truth', for example in court cases, political debates and science. “Unfortunately, banning a specific market segment is ineffective. I expect more from a technological toolbox to expose deepfakes. That would also put the problem back at the source: the technology sector. You made it, you fix it”, Stamhuis says. He also argues that raising social awareness that deepfakes are not just funny is highly desirable: “It is playing with fire if you like or link them.”

Professor Evert Stamhuis, Professor of Law and Innovation
