Deepfakes and other ‘synthetic media’ will be the next wave of online content causing concern, but there’s no need to rush to create new laws to cope.
A new report funded by the Law Foundation cautions against rushing to develop new laws to respond to synthetic media. Instead, the authors say a long list of existing laws already covers the issue, including the Privacy Act, the Copyright Act and the Harmful Digital Communications Act.
The SMC asked experts to comment on the report and deepfakes more broadly.
Dr Amy Fletcher, Associate Professor of Political Science and International Relations, University of Canterbury, comments:
“Barnes and Barraclough make a substantial contribution to the emerging debate about ‘deep fakes’ and synthetic media. They are correct to caution against knee-jerk regulatory responses.
“As a political scientist, I think we also need to consider synthetic media within a broad geopolitical context. Around the world, we see that democracy is under intense pressure — confronting challenges arguably not seen since the 1930s — with political polarisation and extremism on the rise, while trust in institutions, politicians, and the mass media continues to decline. These issues are not necessarily new, but they are increasingly urgent and pervasive, particularly as the speed and reach of advanced media technologies ramp up emotions and decrease response times.
“In this sense, the prospect of rapid proliferation of deep fakes in the political/electoral sphere does concern me. I think the question posed in Scientific American in 2017 remains spot-on: will democracy survive big data and artificial intelligence? Our track record to date suggests that we still do not know how to overcome the downsides of social media tools such as Twitter or Facebook, let alone deep fakes.
“As the United States gears up for what promises to be a very hard-fought presidential election in 2020, the potential use of synthetic media by both domestic and international actors who have a vested interest in undermining the integrity of the process and further destabilising American institutions is a legitimate and abiding concern, both for Americans and, I expect, the wider world.”
No conflict of interest.
Dr Donald Matheson, Associate Professor in Media and Communication, University of Canterbury, comments:
“Apart from the title, Perception Inception, this is a good report. It warns us to get ready for more and more sophisticated fakery in the production of video and audio, some of which may be used to try to affect politics and public life. But it also warns us not to get carried away. Fake images have been around for as long as journalism (if not before). As the report notes, we may need some changes to privacy guidelines to clarify that individuals can reasonably expect that no one will create fake images of them without their consent, and to address what happens when those images are offensive.
“Factual media producers and distributors – and this needs to include social media platforms – need to develop protocols to mark when a video or audio file has been altered to the point that claims about the real are affected. But I think the report’s authors are right that we should not reach for new laws to tackle each new twist of digital technology and should be cautious about giving censors or police greater powers than they already have. We certainly do need mechanisms to tackle what they call ‘determined and sophisticated bad actors’ (p.19), which means exposing and sometimes prosecuting those who shout fire in a crowded theatre.
“To help us do that, I think the greatest problem is one that the lawyers are less useful in helping with: we need a plurality of public interest platforms of information. The financial plight of journalism – the group of people with the best-honed skills in exercising scepticism at high speed, checking the provenance of material on the fly and doing all this with the community good in mind – is a key problem we need to collectively address. Much-needed local civic structures and professionals of that kind have been weakened as a few global platforms of information sharing have risen to dominate our daily lives. If we’re to regulate, I would regulate at the structural level to foster a media environment in which misinformation finds it harder to gain purchase.”
No conflict of interest.
Dr Minh Nguyen, Senior Lecturer & Academic Advisor, School of Engineering, Computer & Mathematical Sciences, AUT, comments:
“From what I understand, a ‘deep fake’ is a combination of two of the most powerful models in Deep Learning: the Convolutional Neural Network (CNN) and the Recurrent Neural Network (RNN).
“Deep Learning models (which loosely simulate the neural networks of the human brain) have proved extraordinarily efficient at learning the inherent features of data (images, sounds, and texts). A CNN implements the face-swapping part: it detects faces and many other features of the two people, learns them, and stores what it has learnt in a very deep artificial neural network. After that, the machine can acquire the face of one person and map all the expressions of the other onto it.
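To make that architecture concrete, here is a minimal sketch of the shared-encoder, dual-decoder convolutional autoencoder used by early face-swap deepfakes. The layer sizes, 64×64 input and training loop are illustrative assumptions rather than anything from the report; real pipelines add face detection and alignment, masking, and often adversarial losses.

```python
# Minimal sketch (PyTorch) of a face-swap autoencoder: one shared
# convolutional encoder, one decoder per identity. Illustrative only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()                          # shared across both people
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training: each decoder learns to reconstruct its own person's faces
# from the shared latent representation.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in batch of person A's faces
recon_a = decoder_a(encoder(faces_a))
loss_a = nn.MSELoss()(recon_a, faces_a)

# The swap: encode person A's expression, decode with person B's
# decoder, yielding B's face wearing A's expression.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

Because the encoder is shared, it learns identity-agnostic features (pose, expression, lighting), while each decoder learns one person's appearance; routing one person's encoding through the other's decoder is what produces the swap.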
“For the sound part, they use an RNN, which is widely used for natural voice generation, originally in text-to-speech systems and AI-based personal assistants such as Google Duplex, Apple Siri and Amazon Alexa. The idea is that, given enough data (e.g. voice recordings), the machine can learn speech synthesis very well and mimic a specific human voice with the same tone, sound, and accent. It is very hard to determine which voice is real and which is generated.
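A minimal, hypothetical sketch of the RNN side: an LSTM trained on one speaker's audio to predict the next mel-spectrogram frame from the frames before it, which is how the network absorbs that voice's tone and accent. All names and dimensions below are assumptions; production systems such as Tacotron 2 add a text encoder, attention, and a neural vocoder to produce actual waveforms.

```python
# Minimal sketch (PyTorch) of RNN-based voice modelling: an LSTM
# autoregressively predicts the next mel-spectrogram frame.
import torch
import torch.nn as nn

N_MELS = 80  # assumed mel-spectrogram bins per frame

class FrameRNN(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(N_MELS, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, N_MELS)

    def forward(self, frames):            # frames: (batch, time, N_MELS)
        out, _ = self.lstm(frames)
        return self.proj(out)             # predicted next frame at each step

model = FrameRNN()
clip = torch.rand(4, 100, N_MELS)         # stand-in speaker spectrograms
pred = model(clip[:, :-1])                # predict frame t+1 from frames <= t
loss = nn.MSELoss()(pred, clip[:, 1:])    # teacher-forced next-frame loss
```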
“Deep fakes are potentially dangerous, but creating realistic video is not easy for most people. Voice generation, however, is easier to achieve and could be used for scamming over the phone, replacing today’s email and SMS text scams.”
No conflict of interest.