Deepfakes: A Threat to Individuals and National Security

Article by Limarc Ambalina | June 21, 2019

The U.S. House of Representatives Intelligence Committee held an open hearing on Thursday, June 13, 2019, to discuss the national security threats posed by artificial intelligence.

Specifically, the hearing focused on the recent rise of manipulated media created using deepfake AI technology. The witnesses for the hearing all held high positions in law, security, or AI development and research. While the experts offered differing views, there was at least one point of consensus across the board: deepfakes pose a real threat at multiple levels of society.


What Are Deepfakes?

A combination of the terms “deep learning” and “fake”, deepfakes are synthetic media created using deep learning technology. Synthesizing images, video, or audio is not inherently nefarious. However, manipulating media using the images, videos, or voices of real people poses moral and legal concerns.

Deepfakes can portray people performing actions they never did or saying things they never said. By feeding the model hundreds to thousands of images of a target, deepfake algorithms learn what that person’s face looks like from multiple angles and in a variety of expressions. Once trained, the algorithm can predict what the target’s face would look like mimicking the facial expressions of someone else. A similar process is used to train a deepfake algorithm to mimic the accent, intonation, and tone of someone’s voice.
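To make that training process concrete, here is a minimal sketch of the shared-encoder, two-decoder autoencoder design popularized by open source face-swap tools. The layer sizes, the 64x64 input, and the training loop are illustrative assumptions, not the actual architecture of any particular program:

import torch
import torch.nn as nn

# One encoder is shared by both people, so their faces map into a common
# latent space; each person gets their own decoder.
class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person
params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

def train_step(faces_a, faces_b):
    # Each decoder learns to reconstruct its own person from the shared code.
    optimizer.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    optimizer.step()
    return loss.item()

def swap_a_to_b(faces_a):
    # The "swap": encode person A's expression, then decode it with person
    # B's decoder, yielding B's face mimicking A's expression.
    with torch.no_grad():
        return decoder_b(encoder(faces_a))

The key design choice is the shared encoder: because both people’s faces pass through the same bottleneck, the latent code captures pose and expression, while each decoder supplies one specific person’s appearance.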

Furthermore, the technology requirements to run deepfake algorithms are low. Any motivated individual with a mid- to high-grade consumer PC and enough storage space can create deepfakes. To learn more about deepfake algorithms, check out the open source programs DeepFaceLab and FaceSwap, which are both available on GitHub.


Potential Dangers of Deepfakes

From threats to everyday citizens to organized crime and national security issues, deepfakes could affect every level of society.


Individuals

Firstly, there are threats of abuse at the individual level. Deepfakes can be used for cyberbullying, defamation, and blackmail. In 2017, deepfake celebrity pornography, with the faces of celebrities superimposed onto adult film actresses, became a huge trend and was shared on Reddit and various pornography sites. “The more salacious something is, the more people want to click or share,” said Danielle Citron (professor of law). If someone shared a pornographic image or video of you with your friends and family, it wouldn’t matter if it was verified to be fake after the fact; the damage would already be done.

This exact nightmare happened to the prominent Indian journalist Rana Ayyub, as Citron explained at the hearing. In 2018, a pornographic video using Ayyub’s face was shared over 40,000 times on social media platforms. Ayyub was forced to delete some of her social media accounts and stay offline due to an influx of abusive messages. After the video was shared, an anonymous user found Ayyub’s home address and posted it online, which resulted in her receiving multiple rape threats. The stress from the incident sent her to the hospital with panic attacks and rendered her unable to work.

Organized Crime

Deepfakes can be a goldmine for criminal organizations and virtual scammers. A possibly even larger concern than image and video manipulation is the ability of deepfake technology to mimic someone’s accent, intonation, and speech patterns with incredible accuracy. The AI company Dessa created a deepfake audio clip of comedian and UFC commentator Joe Rogan making outlandish statements.

While many people are familiar with Photoshop and the ability to alter images, deepfake technology, especially voice mimicry, is unknown to almost everyone outside of the data science and machine learning fields. Manipulated audio or synthesized speech can be used in virtual kidnapping scams, where victims are often targeted through social media and receive phone calls demanding ransom for a loved one. These scams often include an actor in the background who screams for help. Victims who believe they hear their loved one on the other end of the line often transfer ransom money before contacting the police. In a digital age where many parents share videos of their children online, this scam could easily be accelerated using deepfake audio.

In the corporate sector, deepfakes can be used as a more heinous form of black-hat marketing. Companies could create deepfake videos of a rival company’s CEO making defamatory or offensive statements and leak the videos via social media. Corporate sabotage with deepfakes could easily be used to manipulate the stock market.

National Security

The biggest concern posed by guest speaker Clint Watts (researcher at the Foreign Policy Research Institute) was the usage of deepfakes to cause widespread civil unrest. Watts specifically voiced worries about Russia and about China, whose AI capabilities rival those of the United States. In a modern version of wartime propaganda posters, foreign adversaries could use deepfakes to incite fear inside Western democracies and distort the reality of American audiences.

As an example of the potential dangers of deepfakes, comedian Jordan Peele teamed up with BuzzFeed to make a deepfake video of former President Barack Obama.

The problem, Watts explained, is that as media technology advances, audiences become more deeply engaged, and once a falsified video has been watched, it is incredibly difficult to refute. The two clearest dangers Watts emphasized are the targeting of U.S. officials and democratic processes, and the incitement of physical mobilization under false pretenses.

David Doermann backed up Watts’ claims with his experience as the former head of the Media Forensics (MediFor) program at the Defense Advanced Research Projects Agency (DARPA). MediFor aimed to analyze manipulated media created by U.S. adversaries. However, due to the unprecedented advancement of deepfake technology, DARPA’s efforts could not keep up with the issue of manipulated content at scale. Doermann described the issue at hand as something that will “likely get much worse before it gets better.”


Solutions and Countermeasures

The dangers of deepfakes are serious, but OpenAI policy director Jack Clark emphasized that neither misinformation nor fake media is a new problem. AI itself is not the problem, but just an “accelerant to an issue that’s been with us for some time.” Deepfake technology is merely a tool, and one with more positive applications than negative. Certain actions and precautions should be taken to minimize the damage done by those who use deepfakes with nefarious intent.


Awareness and Responsibility

“The people that share this stuff are part of the problem,” said Doermann. Individuals, social media platforms, and the press should all have tools readily available to quickly and easily test media they suspect to be fake. Policing content should be put in the hands of individuals rather than the government; individuals should be able to identify immediately whether or not something they are viewing or sharing is authentic.

Doermann also called for social media sites to be pressured to moderate their content more seriously. Websites and platforms where harmful manipulated media is shared should bear more responsibility and accountability. Not all deepfakes are nefarious, but at the very least, social media sites should label synthetic media more diligently, increase public awareness of such material, and allow the public to make better-informed decisions.

Law

One crucial step that should be taken is the implementation of statutes prohibiting certain deepfake content. Watts pushed for legislation prohibiting United States officials and agencies from creating and distributing such content. Citron explained that for individuals, suing is not only expensive but also difficult and sometimes impossible: the accused would not only have to be identified but also located within the prosecuting jurisdiction of the United States.

Citron is currently writing a model statute that would address wrongful impersonation and cover some deepfake content.

Detection and Verification Technology

Before we can combat malicious deepfakes, we first have to learn how to find them. While human analysis of content is necessary, Doermann called for the creation of automated tools to detect deepfake content at scale. Automated detection could stop deepfakes from being published, instead of reacting to such content after the fact.
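To give a rough idea of what such automated tooling could look like, here is a minimal sketch of a per-frame detector. The architecture, the 64x64 face crops, and the flagging threshold are illustrative assumptions; real detection systems are considerably more sophisticated and also examine temporal and audio cues:

import torch
import torch.nn as nn

class FrameDetector(nn.Module):
    # Scores a 64x64 RGB face crop; higher logit = more likely manipulated.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 16 -> 8
        )
        self.classifier = nn.Linear(128 * 8 * 8, 1)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def flag_clip(detector, frames, threshold=0.5):
    # Average per-frame fake probabilities so a clip can be held for review
    # before publication rather than debunked after the fact.
    with torch.no_grad():
        probs = torch.sigmoid(detector(frames)).squeeze(1)
    return probs.mean().item() > threshold

detector = FrameDetector()           # would be trained on labeled real/fake frames
frames = torch.rand(16, 3, 64, 64)   # stand-in for extracted face crops
print("hold for review:", flag_clip(detector, frames))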

On top of detection, Watts called for the development of verification methods which could determine the date, time, and physical origin of deepfake content.
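One plausible way such verification could work, sketched here under an assumed metadata format and key-handling scheme rather than any existing standard, is for the capture device to cryptographically sign each piece of media together with its capture metadata, so the date, time, and origin can be checked later and any tampering breaks the signature:

import json
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_capture(private_key, media_bytes, metadata):
    # Bind the media hash and the capture metadata under one signature.
    record = json.dumps({
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "metadata": metadata,  # assumed format: date, time, GPS origin
    }, sort_keys=True).encode()
    return record, private_key.sign(record)

def verify_capture(public_key, media_bytes, record, signature):
    # Check the signature first, then confirm the media matches its hash.
    try:
        public_key.verify(signature, record)
    except InvalidSignature:
        return False
    claimed = json.loads(record)
    return claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest()

device_key = Ed25519PrivateKey.generate()  # held by the camera or phone
media = b"...raw video bytes..."
record, sig = sign_capture(device_key, media,
                           {"date": "2019-06-13", "time": "09:00",
                            "gps": "38.89,-77.00"})
public_key = device_key.public_key()       # held by verifiers
print(verify_capture(public_key, media, record, sig))         # True
print(verify_capture(public_key, media + b"x", record, sig))  # False

Any edit to the media or to its claimed metadata invalidates the check, which is what would let a platform or journalist confirm when and where a clip originated.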

Social Media

Strengthening public awareness, developing detection technology, and implementing new laws are all essential. However, governments must also be vigilant in protecting freedom of speech. Congressman Himes expressed worry about some of the suggestions posed at the hearing and their potential threat to First Amendment rights. The suggestions given by the guest speakers, including the labeling of synthetic media, heightened content moderation, social media posting delays, and pressure on platforms to censor content, have great potential to suppress harmful deepfakes.

However, as Congressman Stewart explained, those suggestions were great in theory, “but some are nearly impossible to implement.” Stewart went on to say that the greatest threat facing the United States is that “no one knows what is true anymore.” It is impossible to control everyone, especially while maintaining a free and democratic state.

The very nature of social media platforms is freedom and instantaneity. The ability to share an event with millions of people as it unfolds is what draws people to platforms like Facebook and Twitter. While adding posting delays to allow for content analysis sounds like a good idea, losing that instantaneity, even by just a few minutes, would mean losing the essence of social media as a whole. It is therefore doubtful that social media giants would agree to such a drastic change in their platforms.


The Future of Deepfakes

Entering an online world filled with deepfakes is like entering a dark room. The darkness is scary because you don’t know what’s there. Likewise, fear of technology and what it could do has been prominent since Y2K. But do we shut the door of the dark room, lock it, and chain it, or do we turn on the lights and see if there’s something inside that could benefit us?

Congressman Stewart asked whether deepfake algorithms should somehow be removed from public access. Clark and Doermann responded with a resounding no. Aside from the fact that the technology is already on countless computers worldwide, the more important point is that people require the awareness and tools to make educated choices. “Science runs on openness and we need to preserve that,” said Clark. To end, I’d like to reiterate a point made in Clark’s testimony. AI itself is not the problem, but just an “accelerant to an issue that’s been with us for some time.” Instead of locking the dark room, we should give everyone a flashlight to navigate it safely.


Sources:

FBI Notice on Virtual Kidnapping Scam

The House Intelligence Committee Hearing on Deepfakes and AI

The Author
Limarc Ambalina

Limarc writes content for Lionbridge’s website as part of the marketing team. Born and raised in Canada, Limarc moved to Japan in 2016 out of a love of Japanese pop culture, and living there has been his dream come true. Apart from Lionbridge content, you can catch Limarc online writing about anime, video games, and other nerd culture.
