How deepfake porn is killing our trust in tech

March 22, 2018

Months after fake "celebrity porn" reared its head online, there's a bad-tech aftertaste. Clearly our sense of truth is morphing, with clever dickies using AI to fool us. So how can we trust AI to tell us what's real?

https://p.dw.com/p/2unuy
Image: Robin Wright in a scene from the film "The Congress" (picture-alliance/dpa)

We've been manipulating the way we want to see the world — and how we want others to see us in it — for almost as long as humans have been able to paint or take photos.

"Look back through Hitler, Stalin, Mao or Castro, all of these people manipulated photos in an effort to change history," says Hany Farid, a professor of computer science at Dartmouth College in the United States. "States do it, bad actors do it, criminals do it and hoaxers do it."

And it's not only dictators and demons. Computer technology has been used to manipulate images, video and audio for legitimate purposes too — although some examples may be ethically ambiguous. The movie industry uses computer-generated imagery (CGI) to bring dead actors back to life, like Peter Cushing (Grand Moff Tarkin) in Star Wars "Rogue One"; models are routinely photoshopped in advertising; and we've made an art of auto-tuning the vocals of professional singers who can't sing.

Moving images are … moving

But video manipulation is a special case. A fake video can be incredibly compelling because it's a moving image. Take the phenomenon of "deepfake celebrity porn" videos, which popped up on the social media platform Reddit at the end of 2017.

Image: Star Wars' Princess Leia (1977). Carrie Fisher will reportedly appear in "Star Wars: Episode IX" posthumously, but without CGI effects (picture alliance/WENN.com)

#ICYMI: Some dastardly individuals used freely available software that deploys artificial intelligence (AI) to paste the faces of famous female actors like Emma Watson onto those of porn performers. The fakes were so good, technically speaking, that if you weren't paying proper attention (and who would be?), you may well have believed Watson had gone from feminist to freak.

"The first we saw of this was people screwing around putting Nicholas Cage's face in all sorts of movies, and I think most people would say 'That's pretty harmless,' but you could see the trend," says Farid. "So the next thing people did was this awful, awful thing of taking famous people and not famous people and creating involuntary pornography. And here's how you know how bad this content is: Pornhub and Reddit said they didn't want it on their platforms. How bad do you have to be to be banned from Pornhub and Reddit?!"

Image: Screenshot of the banned Reddit group "deepfakes"

There's no doubt deepfakes are bad. But the technology is really good — good enough to trick our brains anyway.

Read more: Reddit takes down subforum on deepfake porn videos

From generics to fakes

"There's a generic model of a face that people have developed over time, and AI can be used in the form of deep learning networks to look at an image of a face and map it to a 3D model of that generic face," says Philipp Slusallek, a professor of computer graphics at Saarland University in south-western Germany and scientific director at the German Research Center for Artificial Intelligence.

Basically, the AI superimposes one face onto another, including gestures, speech and eye motion.

"That is essentially what is being used to create these deepfakes," says Slusallek. "The original face is just being animated in different ways. That's why it looks so convincing, because in some sense it is the original face of that original actor or person."

And the technology keeps getting better and faster. It has made "tremendous progress," says Slusallek. "These people can move around, look to the side, and still the image is mapped onto the generic face model and back really well." 

In some cases the image is so convincing it may override what you think you know. Even if you think you know a thing or two about Emma Watson — you may indeed follow her as a feminist activist and as a result consider her the last person to move into porn — that prior knowledge may still not be enough to help your brain spot the fake.

Image: Emma Watson (right), in more innocent days, as Hermione in the Harry Potter series (picture-alliance/United Archives/IFTN)

"We're living in a time when there's a widespread and systematic abuse [of technology] by some people. They can generate synthetic faces, and people cannot tell the difference between the fake, which is generated by a deep neural network algorithm, and a real face," says Alexander Todorov, professor of psychology at Princeton University.

Uncanny context

When the software superimposes one person's dynamic facial expressions onto another's, it can be enough to make you believe that what you see is true.

"Facial movements are critical here, because what we see in these dynamic images are cues that suggest agency and a mind behind it," says Todorov, who has researched the power of faces and first impressions. "When it's done well, it's easy to fool people."

You might think we should be better at telling the difference — and a theory called the "uncanny valley" suggests we are. Proposed in 1970 by the Japanese roboticist Masahiro Mori, it holds that the more human-like an object appears, the more appealing we find it, up to a tipping point at which the image turns creepy and our brains reject it as somehow "wrong." That's when you fall into the valley.

Empirical evidence for the uncanny valley is sketchy. But there is anecdotal evidence, some of it based on examples of CGI in the mainstream entertainment industry, to suggest the effect is real.

"Beowulf was a CGI film that was panned for all of the characters looking really lifeless and dead. And video game creators have a hard time making compelling characters unless they are cartoonized," says Christine E. Looser, a behavioral scientist and assistant professor at Minerva Schools at KGI (Keck Graduate Institute) in the US.

But that doesn't mean CGI is always creepy. Context can play a significant role.

"The problem with these fakes is that it's not CGI. It's a minuscule amount of CGI in an otherwise compelling scene," says Looser. "So I would imagine you're paying a little bit of attention to the face, but it's just adding to the other things you might be looking at, and maybe that's why it doesn't feel viscerally creepy."

The dark side of democratization

It almost sounds as though we want to be fooled.

And all this has been made possible by a process known — ironically — as the democratization of technology.

Image: Actor Peter Cushing returned from the dead as Grand Moff Tarkin, a CGI recreation, in Star Wars "Rogue One" (picture-alliance/dpa/Mary Evans Picture Library)

You no longer need to understand how artificial intelligence or machine learning works. All you need to do is download the software, feed it some video, and click a button, presumably labelled "be creepy." Hey presto. You've created a compelling piece of video that 20-odd years ago would have been the preserve of expert film editors.

"We have taken the process of creating sophisticated fakes out of the hands of a relatively small number of highly skilled people and put it in the hands of an average redditor," says Farid. "The amount of power in these machine learning algorithms is made more or less freely available, and believing what we see, hear and read online is going to get pretty complicated."

Farid says we've held onto video as a more trustworthy medium because of the sophistication that used to be required to manipulate it. But that's all changed. It's no longer a "big stretch of the imagination" to think we might start seeing fake videos of President Donald Trump talking about "launching nuclear weapons against North Korea."

"Suddenly you can see very real threats," says Farid.

The misuse of this technology worries Slusallek too — although he says the technology itself is "neutral" and that there's a lot of good that can be done with it. For instance, there's some talk of similar technology being used to create CGI recreations of dead relatives to help people in the grieving process.

Todorov says the idea sounds "pretty crazy." But then he says people do visit graves to talk to their dead relatives "so this [idea] could help under certain circumstances, sure."

Another idea is to use the technology to help people with Asperger's syndrome, as they may benefit from communing with familiar faces.

"The thing that's changed though," says Slusallek, "is anyone can use this software now and because it happens in real-time, it is much easier to abuse." 

Technology vs. technology

Technologists currently believe the best way to track and verify fakes, at least those made with an AI, is to use another AI.

"Technology has always fought technology, weapons have always fought weapons, and biological agents have always fought other types of biological agents," says Farid. "That's part of the game."

But now, he says, we're deploying "one black-box technology that is not well understood to fight another black-box technology that is not well understood."

"And I'm a little uncomfortable with a blackbox where you shove data into it and out comes an answer "yes" / "no" — whether that's for detecting fakes, predicting whether someone is going to commit a crime in the future, determining whether your car should stop or not stop," he says. "From an engineering, scientific and even a philosophical perspective, it is good to understand how these things work."

Read more: Conference debates how AI can shed its 'black box' image

So what else can we do? First, Farid says we need to think seriously about the way we consume digital content: of all the articles shared on Facebook, 80 percent are shared by people who have only read the headline. So we need to take more responsibility for our own actions online.

And second, the social media companies need to think hard about their own responsibilities too.

"Half of Americans think there should be some regulation on big tech. That is a dramatic shift from just a year ago," says Farid. "It's gone from Silicon Valley can do no wrong to 'Oh my God, there are some real problems here,' and these companies are basically like the tobacco industry."

 

Zulfikar Abbany, senior editor fascinated by space, AI and the mind, and how science touches people