The Best (and Worst) Deepfake Developments in 2020

Article by Hengtee Lim | October 05, 2020

Deepfakes are synthetic media, usually videos, created with deep learning technology. By manipulating images, videos, and voices of real people, a deepfake can portray someone doing things they never did, or saying things they never said.

By feeding a machine learning model thousands of target images, a deepfake algorithm can learn the details of a person’s face. With enough training data, the algorithm can then predict what that person’s face would look like when mimicking the expressions of someone else. A similar process is used for training deepfake algorithms to mimic the accent, intonation, and tone of a person’s voice.
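The face-swapping process described above is often built around one shared encoder and a separate decoder per identity. The sketch below is a conceptual illustration of that architecture, not any specific product's code: the random linear layers stand in for trained networks, and all names and dimensions are assumptions for the example.

```python
import numpy as np

# Conceptual sketch of a classic face-swap pipeline (an illustration, not
# production code): one shared encoder learns a compact representation of
# a face's expression and pose, while a separate decoder per identity
# learns to reconstruct that specific person. Swapping a face means
# encoding person A's expression and decoding it with person B's decoder.

rng = np.random.default_rng(0)
FACE_DIM, LATENT_DIM = 64 * 64, 128  # flattened 64x64 grayscale faces

# Randomly initialised linear "networks" stand in for trained models here.
encoder = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01
decoder_a = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01
decoder_b = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01

def encode(face: np.ndarray) -> np.ndarray:
    """Map a face image to the shared latent expression/pose code."""
    return np.tanh(encoder @ face)

def swap_face(face_of_a: np.ndarray) -> np.ndarray:
    """Render A's expression with B's identity: encode A, decode as B."""
    return decoder_b @ encode(face_of_a)

face_a = rng.standard_normal(FACE_DIM)
fake_b = swap_face(face_a)
print(fake_b.shape)  # same shape as a flattened 64x64 face
```

Training would optimize each decoder to reconstruct its own identity from the shared code; at swap time, only the decoder is exchanged. A similar encoder/decoder split underlies voice-cloning systems.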


Deepfake Developments in 2020

Vox reported in late June that deepfake technology was used in the making of a documentary. “Welcome to Chechnya” covers the persecution, unlawful detention, and torture of LGBTQ people in Chechnya. Deepfake technology allowed the documentary’s creators to include video of at-risk people while offering them a veil of anonymity.

Wired recently reported that due to coronavirus lockdowns, shooting video is becoming more difficult, and in some locations, dangerous. This has made it difficult for companies to shoot instructional videos, promotional materials, and other similar footage. As a workaround, some companies are looking into deepfake technology to create training videos for their staff. Similarly, Rosebud AI uses deepfake technology to generate modeling photos, and recently launched a service that puts photographed clothes on realistic-looking models.


It was also revealed that Disney is working on deepfake technology to improve quality on the big screen at megapixel resolution. This technology is for when “a character needs to be portrayed at a younger age or when an actor is not available or is perhaps even long deceased.” Although the results aren’t perfect, the quality of the Disney model stands out: where most current deepfake videos are generated at 256 x 256 pixels, Disney’s model can produce convincing results at up to 1024 x 1024.


The Public Response to Deepfakes

The start of 2020 came with a clear shift in response to deepfake technology, when Facebook announced a ban on manipulated videos and images on their platforms. Facebook said it would remove AI-edited content likely to mislead people, but added the ban doesn’t include parody or satire. Lawmakers, however, are skeptical as to whether the ban goes far enough to address the root problem: the ongoing spread of disinformation.

The speed and ease with which a deepfake can be made and deployed, as shown in this article by Ars Technica, have many worried about misuse in the near future, especially with an election on the horizon in the U.S. Many in America, including military leaders, have voiced similar concerns, heightened by the knowledge that deepfake technology is improving and becoming more accessible.

These concerns have prompted a series of initiatives to detect and regulate deepfakes, though exactly how to handle them is an ongoing discussion. Twitter announced in November of 2019 that it intended to draft a deepfakes policy. Facebook released the largest known dataset of deepfakes, which includes more than 100,000 clips produced with 3,426 actors and a range of existing face-swapping techniques. Google, too, has contributed a dataset of deepfakes for the purposes of developing better technology to detect them.

In August of 2020, UCL released a report identifying 20 ways AI technology could facilitate crime in the future. At the top of the list was fake audio and video content, which they consider difficult to detect and stop. Examples of deepfakes for criminal use include discrediting public figures and creating societal distrust of audio and visual evidence. 


Improved Deepfake Detection

Alongside these worries about misuse, however, deepfake detection is also improving. Teams at Microsoft Research and Peking University recently proposed Face X-Ray, a tool that recognizes whether a headshot is fake by detecting the points where one image has been blended with another to create a forgery. Though a step in the right direction, the researchers note that the technique cannot detect a wholly synthetic image, which leaves it weak against adversarial samples.
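The intuition behind Face X-Ray can be sketched in a few lines. A forged headshot is typically produced by blending a source face into a target image with a soft mask; the "X-ray" highlights exactly where that mask transitions between 0 and 1. The boundary formula below follows the Face X-Ray paper, but the toy mask and all variable names here are assumptions for illustration.

```python
import numpy as np

# Minimal sketch of the "face X-ray" idea: a forged headshot is usually
# made by blending a source face into a target image with a soft mask M.
# The map B = 4 * M * (1 - M) lights up where M is strictly between 0 and
# 1 (the blending seam) and is zero everywhere a pixel is purely from one
# image -- which is also why the approach misses wholly synthetic images,
# since they contain no blending boundary at all.

def face_xray(mask: np.ndarray) -> np.ndarray:
    """Boundary map: 0 inside/outside the blend, peaking at 1 on the seam."""
    return 4.0 * mask * (1.0 - mask)

# Toy soft mask (our own assumption): 1 in a central "pasted face" region,
# 0 outside, with a linear ramp standing in for the blending feathering.
size = 32
yy, xx = np.mgrid[0:size, 0:size]
dist = np.hypot(yy - size / 2, xx - size / 2)
mask = np.clip((12.0 - dist) / 4.0, 0.0, 1.0)  # 1 inside r=8, 0 outside r=12

xray = face_xray(mask)
print(xray.max())  # peaks at 1.0 on the blending seam
print(face_xray(np.zeros(1))[0])  # 0.0 where there is no blend at all
```

In the real system, a neural network is trained to predict this boundary map directly from a photo, without access to the mask; the sketch only shows what the target signal looks like.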

In December of 2019, Facebook announced The Deepfake Detection Challenge in conjunction with tech giants like Microsoft and Amazon. The challenge offered financial rewards for the building of technology to detect manipulated media. Facebook announced the results of the challenge in June, after 2,114 participants submitted more than 35,000 detection algorithms. The winning algorithm was able to spot deepfakes with an average accuracy of 65.18%, showing that although the detection technology is improving, it is still a challenging problem.

As research scientist Károly Zsolnai-Fehér points out in the video above, deepfake detection models can also be used to improve deepfake creation algorithms. This is why, although the winning algorithm of the Deepfake Detection Challenge will be released open-source, Facebook is keeping its own in-development detection technology secret. For a deeper dive into detection systems and how they work, be sure to check out our dedicated deepfake detection article here.


Combating Deepfake Disinformation with Education

In September, Microsoft announced the release of technologies to combat online disinformation on their official blog. One of these technologies was the Microsoft Video Authenticator, which analyzes a photo or video and provides a confidence score indicating whether the media is fake. It has performed well on deepfake examples from the above-mentioned Deepfake Detection Challenge dataset.

However, while detection technology develops, many researchers still point to improved education as the best mode of defense. They recommend keeping in mind the following questions when viewing video content:

  • Is the video bizarre or exceptional?
  • Is the video quality low or grainy?
  • How short is the video? (30 to 60 seconds long is common for current deepfakes.)
  • Is the content visually or aurally strange? (blurry faces, strange lighting, voices/lips out of sync, etc.)


The Year of Consumerized Deepfakes?

Growing concern has not stopped deepfake technology’s move into social media. In fact, recent technological developments on social media platforms have people talking about the idea of consumerized deepfakes. Code found in the mobile apps Douyin and TikTok has revealed technology allowing users to insert their face into videos starring other people. The feature is not available to the public, but it’s a prime example of social media platforms utilizing deepfake technology.

Snap, the company behind Snapchat, has also reportedly acquired AI Factory, an image and video recognition start-up. Reports state that Snap used AI Factory’s technology for a new face swapping feature, raising some concerns around the possibility of deepfake usage. These applications of deepfake technology will likely continue as social media apps look to entice new users.

In September of 2020, the app Gradient came under fire for a feature called AI Face. The feature claims to alter face images to show how they would look in “Europe”, “Asia”, “India”, or “Africa”. AI experts call these deepfake examples dangerous for the way they stereotype cultures. However, they are also another example of the technology in use for social media sharing.


What’s Next?

The above news articles point to a clear trend: deepfake technology isn’t going away. Its use in the future will range from benign and novel to potentially destructive and damaging. Creation and detection are quickly becoming a race, as detection methods struggle to keep up with improving technology. The result? Regular consumers of news media will find it more difficult to differentiate real news from fake. This makes a strong case for improved education, which means focusing on how to educate, and where to start.

With this in mind, how tech companies like Twitter and Facebook regulate deepfake technology will have a huge impact on how people consume and understand it. 2020 will likely bring more discussion around regulation, and more development for entertainment and social media. This will go hand in hand with improved technology for both implementation and detection. But for a deeper look at the potential threats and countermeasures to deepfake technology, see our comprehensive article here.

Some of the Best Deepfakes on YouTube

  • Best Vladimir Putin Deepfake
  • Best Obama Deepfake
  • Bill Hader and Arnold Schwarzenegger Deepfake

Deepfakes are one of many machine learning trends we expect to see in the news in 2020. For the most up-to-date information on AI technology and where it’s being used, subscribe to our newsletter for articles delivered straight to your inbox.

The Author
Hengtee Lim

Hengtee is a writer with the Lionbridge marketing team. An Australian who now calls Tokyo home, you will often find him crafting short stories in cafes and coffee shops around the city.

