Deepfake is a term that has risen to infamy over the past year or so, but it’s not a new concept. Deepfakes (a portmanteau of “deep learning” and “fake”) are fake videos or audio recordings that look and sound just like the real thing.¹ The technique has been used in movies for decades, usually with expensive software and million-dollar budgets. When an actor dies mid-filming, or a current film is set in the past, deepfakes are useful for creating a believable scene. For example, in Star Wars: Rogue One, the filmmakers brought back the character Grand Moff Tarkin from the original Star Wars. It’s not uncommon for actors to have a cameo in sequels, except that Tarkin’s actor, Peter Cushing, died in 1994. With some movie magic and a stand-in actor, they were able to digitally recreate Grand Moff Tarkin for the 2016 film. This may be a cool concept for movie-goers and cinema nerds, but what happens if someone uses those same technologies to recreate you?
“Deepfake video is created by using two competing AI systems — one is called the generator and the other is called the discriminator. Basically, the generator creates a fake video clip and then asks the discriminator to determine whether the clip is real or fake. Each time the discriminator accurately identifies a video clip as being fake, it gives the generator a clue about what not to do when creating the next clip.”⁶
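To make that generator-versus-discriminator loop concrete, here is a toy sketch in Python with NumPy. Everything in it is illustrative, not how a real deepfake system is built: the “video” is just numbers drawn from a bell curve, the generator is a simple affine map from noise, and the discriminator is logistic regression. The clue-passing described above shows up as gradients flowing from the discriminator back into the generator’s parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1). The generator must learn to mimic this.
def sample_real(n):
    return rng.normal(4.0, 1.0, n)

# Generator: affine map from noise z ~ N(0, 1) to a sample (params w_g, b_g).
# Discriminator: logistic regression D(x) = sigmoid(w_d * x + b_d).
w_g, b_g = 1.0, 0.0
w_d, b_d = 0.0, 0.0
lr = 0.01

for step in range(5000):
    z = rng.normal(0, 1, 64)
    x_fake = w_g * z + b_g
    x_real = sample_real(64)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w_d * x_real + b_d)
    d_fake = sigmoid(w_d * x_fake + b_d)
    # Gradients of the binary cross-entropy loss w.r.t. w_d, b_d.
    grad_w_d = np.mean((d_real - 1) * x_real) + np.mean(d_fake * x_fake)
    grad_b_d = np.mean(d_real - 1) + np.mean(d_fake)
    w_d -= lr * grad_w_d
    b_d -= lr * grad_b_d

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    z = rng.normal(0, 1, 64)
    x_fake = w_g * z + b_g
    d_fake = sigmoid(w_d * x_fake + b_d)
    # Chain rule: gradient flows through the discriminator into w_g, b_g --
    # this is the "clue about what not to do" for the next batch of fakes.
    grad_x = (d_fake - 1) * w_d
    w_g -= lr * np.mean(grad_x * z)
    b_g -= lr * np.mean(grad_x)

fakes = w_g * rng.normal(0, 1, 10000) + b_g
print(np.mean(fakes))  # drifts toward the real mean of 4.0 as training fools D
```

Real deepfake generators and discriminators are deep convolutional networks trained on faces rather than one-parameter lines, but the adversarial loop is the same.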
The generator/discriminator pair described above is known as a generative adversarial network (GAN), and it is what makes these believable alternate videos possible.² As the technique becomes more popular, it gets easier and cheaper to use, and if it gets into the wrong hands, the results could be devastating. Actresses have had their faces pasted onto adult film stars, or been made to appear to say things they never said, in an attempt to ruin their careers. Recently, a Reddit user called deepfakes pasted the faces of celebrities like Scarlett Johansson, Taylor Swift, and Maisie Williams onto X-rated GIFs. The danger if these tools become widespread is self-evident.³ A machine designed to create realistic fakes is a perfect weapon for purveyors of fake news who want to influence everything from stock prices to elections.⁴
So back to you: what happens if an ex or an angry coworker gets their hands on this software and blackmails you? How do you prove it’s not real? It looks like you and sounds like you, so it must be you. Humans rely heavily on video and audio as proof.
Currently, you may get lucky, because low-budget deepfakes have many flaws and can be debunked with a little patience and a keen eye. One way researchers like Siwei Lyu have tried to debunk fake videos is to monitor the eyes in the video for blinking.⁵ Deepfakes use deep learning models trained on hundreds or thousands of images of a person, and how many pictures do you or anyone have with your eyes closed? Researchers have shown that deepfakes often fail to produce a believable blink, or don’t blink at all, because there are so few closed-eye photos to learn from, which is a dead giveaway.
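A simple way to picture this check is the eye aspect ratio (EAR) heuristic from Soukupová and Čech: measure the eye’s height relative to its width from facial landmarks, and count how often it collapses. (Lyu’s published detector is actually a neural network; this sketch is a simplified stand-in, and the landmark coordinates below are made up for illustration.)

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio from six (x, y) eye landmarks: it drops
    toward zero when the eyelids close."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])   # first vertical distance
    v2 = np.linalg.norm(eye[2] - eye[4])   # second vertical distance
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal (corner-to-corner)
    return (v1 + v2) / (2.0 * h)

def blink_count(ear_series, threshold=0.2):
    """Count runs of frames where the EAR dips below the threshold.
    A clip that never blinks over many seconds is a red flag."""
    blinks, below = 0, False
    for ear in ear_series:
        if ear < threshold and not below:
            blinks, below = blinks + 1, True
        elif ear >= threshold:
            below = False
    return blinks

# Synthetic landmarks: an open eye is tall, a closed eye is nearly flat.
open_eye = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]

# A short "clip": open for 10 frames, closed for 3, open again -- one blink.
ears = ([eye_aspect_ratio(open_eye)] * 10
        + [eye_aspect_ratio(closed_eye)] * 3
        + [eye_aspect_ratio(open_eye)] * 10)
print(blink_count(ears))  # → 1
```

In practice the landmarks would come from a face-landmark detector run on each video frame; a real person blinks every few seconds, so a long clip scoring zero blinks deserves scrutiny.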
Jennifer Lawrence’s speech as Steve Buscemi
In the near future, blinking may not be enough to debunk a fake. With each advance in detecting fake videos, the techniques for creating undetectable ones advance as well. If a case is serious enough, future investigations may require law enforcement or the FBI. I wouldn’t go total dystopia mode, though, thinking that nothing is real and everyone’s out to get you. There’s always a workaround or solution, especially in technology.
Deepfakes are just one way for a hacker to get in; check out these other five, and make sure your company doesn’t fall for these traps.