The Murky Future of Deepfaked Everything

How long before deepfakes move beyond celebrities and phony porn, and a stranger takes fake evidence of a crime you didn’t commit to the police?

Hao Li didn’t anticipate that his curiosity about facial animation and AI algorithms would portend a Black Mirror future. 

A decade ago, he was a computer science PhD fascinated by the prospect of accurately recreating humans in digital space. Since then, he’s worked as a researcher for legendary Hollywood FX firm Industrial Light & Magic, started his own firm specializing in AI-animated avatars and snagged the directorship at a cutting-edge tech lab run by the University of Southern California. 

These days, he’s also a reluctant expert on the dark future of deepfakes. Made with help from AI software, deepfakes use swapped faces, and sometimes substituted audio, to create videos that look convincingly real. What started as experimentation for “some cartoon characters, basically” morphed, over the course of 10 years, into a technology that could permanently warp the idea that seeing really is believing. “I was taken a bit by surprise by the whole concept,” he explains. “The motivation of the work was, how do we make the process of animating a person faster, cheaper and more scalable?”

At that point, creating a realistic avatar required conventional CG and VFX work, which was too expensive and time-consuming for average people. But as Li and his peers in the field made breakthroughs, he realized a different purpose was taking hold of the public’s attention. “Suddenly, over the last few years, we got approached by media and started discussing the fact that, ‘Well, you can now manipulate video of politicians to make them say whatever you want,’” Li says.

There was a time when deepfakes were mostly discussed within some pornography circles online, with amateurish works that gave away their artifice thanks to distortion and artifacts within the videos. But these days, both the technology and the users who create deepfakes have exploded into the mainstream. 

Mainstream media outlets have reported on the digital phenomenon with more than a touch of moral panic — maybe rightfully so, given the implications. An NYU report that dropped last month pointed to deepfakes as a major threat to the democratic process in the 2020 elections. The rise of AI-assisted fake audio, which can now convincingly imitate voices, adds even more pressure. It’s a trend tech companies are investing in, too; juggernaut Adobe, for example, is developing a “Photoshop for audio.”

It feels like a perfect storm for a future in which simulacra will have power over our physical lives, making us vulnerable to a host of ailments ranging from fake(r) news to chaos in the criminal justice system to intense cyberbullying. Security experts are already warning that deepfakes could have destabilizing effects on nation-states and democracy around the world. 

Yet, as Li points out to me, there’s an absurd silver lining to the madness: “Deepfakes are forcing the whole question of, you know, what can you even trust?” he says, suggesting a shrug with his tone of voice. 

Similar alarms about the impending fall of society rang out at the dawn of the internet, suffrage and even the printed word. The bigger threat might just be a bumped-up exhaustion: of information, of communication, of the battle over truth. “I think the overstimulus, tension and unreality that people experience through social media really is a factor for burnout and depression,” Reef Karim, a mental health expert in L.A., told me earlier this summer.

We’re going to have to cope with a technology that will be used to humiliate everyone from global leaders to everyday kids. It’ll widen the generational gap between the tech-literate and the gullible, too. Consider deepfakes the final exam when it comes to the limits of our credulity. For better or worse, it’s becoming clearer that we’ll need a human touch to manage the spread.

As in the early days, the most prominent use of deepfakes today is in pornography. Videos featuring deepfaked versions of Taylor Swift, Selena Gomez, Gal Gadot and even a range of YouTubers are easily found through an internet search. These videos range in quality, but the process to make them remains effectively the same. Deep-learning AI systems, aka “neural networks,” function kind of like the human brain — algorithms work together to identify patterns in input data, over time learning to classify that data with greater precision.
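
For the technically curious, that loop looks something like this in practice. It’s a toy Python sketch built on the PyTorch library, nothing close to a real deepfake pipeline, but it shows the basic rhythm: guess, measure the error, adjust, repeat.

```python
# A toy illustration, not any real deepfake tool: a tiny neural network
# learns to classify points, getting more precise with each pass over
# the data. The same guess/measure-error/adjust loop underpins the
# face-swapping models described below.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Synthetic "input data": points above the line y = x belong to class 1.
x = torch.randn(512, 2)
y = (x[:, 1] > x[:, 0]).long()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)  # how wrong are the current guesses?
    loss.backward()              # trace the error back through the layers
    optimizer.step()             # nudge the weights to do better next time
```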

There have been a number of different algorithmic apps in recent years, ranging from the hard-to-find FakeApp to the now-ubiquitous DeepFaceLab, complete with widely shared tutorials. “The way it works is that first you need a target video. So, let’s say the porn video. And if you want to change the face with a celebrity, well, celebrities are good because there are so many pictures of them online,” Li explains. “So what you would do is Google and download as many as possible and input them into the software. In general, we’re talking in the range of about a thousand images. The more the better.”
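
To make that concrete, here’s a rough sketch of what happens to those thousand downloaded images before training. This isn’t DeepFaceLab’s actual code; it’s a minimal Python example, using OpenCV’s stock face detector and made-up folder names, of the cropping pass such tools perform.

```python
# A hedged sketch of the collection step Li describes: crop faces out of
# a folder of downloaded photos so a model can train on them. Uses
# OpenCV's bundled Haar-cascade face detector; the folder names
# ("downloaded_photos", "training_faces") are hypothetical.
import glob
import os
import cv2

os.makedirs("training_faces", exist_ok=True)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

for i, path in enumerate(sorted(glob.glob("downloaded_photos/*.jpg"))):
    image = cv2.imread(path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        crop = cv2.resize(image[y:y + h, x:x + w], (256, 256))
        cv2.imwrite(f"training_faces/face_{i:04d}.jpg", crop)
```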

The AI identifies facial expressions, angles and lighting, which all factor into how it learns to sort. Ultimately, every frame of the porn performer’s face is replaced with a matching one from the celeb, with the software doing the heavy lifting to smooth rough edges. Moreover, it’s easier than ever for an average person to try their hand at deepfakes, Li says, thanks to the minds who coded programs like DeepFaceLab, which feature “state-of-the-art components” culled from the kind of research Li has conducted in the recent past. It’s not just porn either; there’s a cottage industry of YouTube creators who apply deepfake techniques to clips from film and TV, just for the laughs.
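
The trick behind the swap, in the approach popularized by the early deepfake tools, is an autoencoder with one shared encoder and a separate decoder per identity. Here’s a compressed, illustrative PyTorch sketch; real architectures are considerably deeper and more elaborate.

```python
# An illustrative sketch of the shared-encoder, two-decoder autoencoder
# idea popularized by early face-swap tools. Not DeepFaceLab's actual
# architecture; real models are far deeper and trained on thousands of
# aligned face crops per identity.
import torch.nn as nn

def make_decoder():
    # Per-identity decoder: learns what one specific face looks like.
    return nn.Sequential(
        nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid())

class FaceSwapper(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder: learns pose, expression and lighting from
        # BOTH people's faces.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU())
        self.decode_performer = make_decoder()
        self.decode_celebrity = make_decoder()

    def swap(self, performer_frame):
        # Keep the performer's expression and angle; repaint the face
        # as the celebrity's. Run once per video frame.
        return self.decode_celebrity(self.encoder(performer_frame))
```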

Li is convinced that everyday consumers will have access to “perfectly real” face-swapping tech within a year — a timeline that Li admits seemed unfathomable not long ago. The bad news is that, despite how quickly deepfake tech has developed, the kind of AI needed to combat it still lags behind. The ideal situation would be to create software that can detect whether a video is fake or not, Li says; no wonder DARPA, the Defense Department’s advanced research unit, recently posted a request for collaborators on just that. Still, Li has an inkling that the fakes may become technically indistinguishable from reality at some point (“In the near future,” he clarifies), rendering the arms race for deepfake-killing AI a bit moot.
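
What might such detection software even look like? One common starting point in the research literature is to score individual frames. The Python sketch below is purely hypothetical; it repurposes a stock image classifier as a real-versus-fake scorer and averages over a video’s frames, with the training step omitted.

```python
# A hypothetical sketch of the detection side: take a stock image
# classifier, swap its final layer to output real-vs-fake, and average
# per-frame scores over a video. The fine-tuning step on labeled
# real/fake frames is omitted, and real detectors are far more
# sophisticated than this.
import torch
import torch.nn as nn
from torchvision import models

detector = models.resnet18(weights="DEFAULT")
detector.fc = nn.Linear(detector.fc.in_features, 2)  # [real, fake] logits

def score_video(frames: torch.Tensor) -> float:
    # frames: a batch of video frames, shape (N, 3, 224, 224)
    detector.eval()
    with torch.no_grad():
        fake_probs = torch.softmax(detector(frames), dim=1)[:, 1]
    return fake_probs.mean().item()  # average "fake" probability
```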

“So, then, the question is: Can we actually detect and investigate the intention of a video rather than whether or not it’s fake?” he wonders. 

That is the daunting task at hand, and some experts think that human intervention will be essential. “The more you get toward automated use, the more likely you are to have inaccuracies or censorship,” Kate Klonick, a professor at St. John’s University and an expert on platform governance, told the MIT Technology Review. “Defining satire, defining fake news, defining fiction — these are all huge philosophical questions.”

Machines aren’t quite prepared for philosophy, and that’s the dichotomy we face as we near a fascinating war that mashes human wit and AI sophistication into a murky moral mess. Our politics will probably be fine, or at the very least no shittier; why bother with sophisticated video when some shoddily constructed “news” sites can muster the same rabid support from the various corners of the internet? 

What is more concerning is how deepfakes could serve as a weapon in interpersonal battles. Current laws around the U.S. are barely prepared for revenge porn, let alone something as complicated as deepfakes, which could fall through a hole-ridden net of loose privacy and libel laws. Court cases could be poisoned by clients submitting fabricated evidence, or by fake videos ending up in trustworthy institutional archives. Riana Pfefferkorn, an associate director of cybersecurity at Stanford’s Center for Internet and Society, warns that an unprepared justice system could hit big bumps while struggling to authenticate evidence. “As deepfake technology improves and it becomes harder to tell what’s real, jurors may start questioning the authenticity even of duly admitted evidence, to potentially corrosive effect on the justice system,” she writes.

The obvious advice might be to minimize your presence online, to prevent strangers from being able to steal your identity, but that seems like too little, too late. Maybe it’s just time to accept that one day, you might have to reckon with a clip of “you” engaged in a disgusting act. Sure, there are a number of laws being proposed that would stymie or even ban the spread of deepfakes. But if the history of the internet shows anything, it’s that legalities tend to bend and break under the weight of technological cleverness. 

As for what scares Li, he pauses and pivots when I ask him what happens when, say, a stranger takes fake evidence of your crime to the police. “I’m quite optimistic about this, but… people are going to get immune about watching videos. They will say, ‘Well, this maybe has been manipulated.’ So I try not to worry as much about deepfakes. I think it’s a little bit like the Photoshop effect. It hasn’t ruined our lives,” Li responds. “This obviously will have other consequences, but…”

He trails off, leaving the specifics for me to figure out. 

Could this ultimate test of our credulity be a good thing? After all, maybe we’ll end up consuming less blindly. But it kinda feels like the opposite — we’ll all just end up seeing what we want in the pixels and the math.