Three Types of Deepfake Detection

Article by Bharath Raj | November 18, 2020

Due to advancements in machine learning, it’s now possible to create high-quality deepfakes with minimal effort. So it’s no surprise that deepfakes used with malicious intent can cause great harm. For instance, the technology can be used to spread misinformation at scale, damage reputations, and harm individuals. This makes deepfake detection increasingly important.

Deepfakes are images or videos that are synthetically manipulated or generated using deep learning. The term is growing in popularity, but an exact definition is difficult to pin down. It is sometimes used for “face swaps,” “expression/attribute manipulation,” and even for images entirely synthesized by a deep learning algorithm. In this article, I’ll use the term deepfake to refer to both media manipulation and media generation through deep learning.

Though it’s possible for humans to detect deepfakes, it’s getting harder as the general quality increases. Moreover, as it gets easier and faster to create high-quality deepfakes, the volume of deepfake content may be too great for human detection alone. That’s why we may need to rely on algorithms to determine whether a piece of content is a deepfake or not.

Fortunately, deepfake detection research is ongoing and offers numerous potential benefits. Not only are algorithms automatic, but they can also potentially detect cues that are hard for humans to find on their own.

In this article, I’ll briefly explore a few techniques for detecting deepfakes. I’ll also look at the broad challenges that face deepfake detection techniques both at present and in the future.

 

Deepfake Detection Techniques

In this section I’ll explore a few methods for detecting deepfakes. To get an idea of the various detection techniques available, I referred to DeepFakes and Beyond: A Survey of Face Manipulation and Fake Detection by Ruben Tolosana et al. Please take a look at the survey if you want to explore the techniques further.

In this article, I’ve organized deepfake detection methods into the following three broad categories:

  • Hand-crafted Features
  • Learning-Based Features
  • Artifacts

I have placed methods in their respective categories based on what I felt best described them, so please keep in mind that some methods could fall into multiple categories. For the sake of brevity, I’ve limited each category to a few key methods, and have kept discussion of performance to a minimum. I’ll provide links for readers who want to learn more about the contents in greater detail.

Without further ado, let’s look at our deepfake detection techniques.

 

Hand-crafted Features

While deepfake techniques create realistic-looking content, there are often flaws that a human or algorithm can spot upon closer inspection. One example is unnatural facial features. In this category, we’ll explore a couple of techniques that extract and engineer specific features to perform deepfake detection.

Shruti Agarwal et al. propose a method involving a novelty detection model (here, a one-class SVM) that can distinguish a person of interest (POI) from other individuals as well as deepfake impersonators. This methodology is quite interesting as we only need authentic videos of the POI to train the model.

The authors’ hypothesis is that as an individual speaks, they have distinct (but probably not unique) facial expressions and movements. The researchers use the OpenFace2 toolkit to extract facial and head movements in a given video. By collecting facial action units, head rotations about certain axes, and 3D distances between certain mouth landmarks, they obtain 20 facial/head features for a given 10-second video clip.

Here we see a visualization for detecting deepfakes by clustering features
Source: https://openaccess.thecvf.com/content_CVPRW_2019/papers/Media%20Forensics/Agarwal_Protecting_World_Leaders_Against_Deep_Fakes_CVPRW_2019_paper.pdf

 

Finally, by computing the Pearson correlation between each pair of the above 20 facial/head features (20 × 19 / 2 = 190 pairs), they get a 190-dimensional feature vector representing the 10-second clip. Once the 190-D feature vector is extracted, a one-class support vector machine (SVM), trained only on authentic footage of the POI, is used to determine whether the 10-second video clip is a real video of the POI.
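To make this pipeline more concrete, here is a minimal sketch of the correlation-plus-one-class-SVM step, assuming the 20 per-frame features have already been extracted (for example with OpenFace2) into NumPy arrays. The variable names (real_clips, test_clip) and the SVM hyperparameters are illustrative placeholders, not the authors’ exact setup.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def clip_to_feature_vector(features):
    """Turn a (num_frames, 20) array of facial/head features for one
    10-second clip into a 190-D vector of pairwise Pearson correlations."""
    corr = np.corrcoef(features, rowvar=False)   # (20, 20) correlation matrix
    i, j = np.triu_indices(corr.shape[0], k=1)   # 20 choose 2 = 190 pairs
    return corr[i, j]                            # shape (190,)

# Train the novelty detector on authentic clips of the POI only.
# real_clips / test_clip are assumed to be (num_frames, 20) arrays.
X_train = np.stack([clip_to_feature_vector(c) for c in real_clips])
detector = OneClassSVM(kernel="rbf", gamma="auto", nu=0.05).fit(X_train)

# A prediction of -1 means the clip does not match the POI's learned
# mannerisms and is flagged as a possible deepfake.
is_suspect = detector.predict(clip_to_feature_vector(test_clip).reshape(1, -1))[0] == -1
```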

One other example of using hand-crafted features for deepfake detection comes from Tackhyun Jung et al. The authors propose a framework that analyzes a person’s blinking patterns to detect deepfakes in video. They note that blinking patterns change with physical condition, cognitive activity, biological factors, and so on. Using a collection of algorithms, the authors extract the facial region and calculate the Eye Aspect Ratio (EAR) for each frame in the video. The EAR value for a closed eye is typically smaller than that for an open eye (for the exact framework formulation, please refer to the paper). By setting an appropriate threshold, one can detect blinking events from the EAR values and analyze a person’s blinking patterns in a video.

One method of deepfake detection is through tracking blinking patterns in video.
Source: https://ieeexplore.ieee.org/document/9072088
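For readers who want a feel for the EAR part of the pipeline, here is a minimal sketch using the standard six-landmark eye aspect ratio formulation, assuming the eye landmarks for each frame have already been extracted by a face/landmark detector. The threshold and minimum blink duration below are illustrative values, not the ones used in the paper.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of eye landmarks ordered p1..p6
    (the two corners, then the two upper and two lower points)."""
    a = np.linalg.norm(eye[1] - eye[5])   # vertical distance p2-p6
    b = np.linalg.norm(eye[2] - eye[4])   # vertical distance p3-p5
    c = np.linalg.norm(eye[0] - eye[3])   # horizontal distance p1-p4
    return (a + b) / (2.0 * c)

def count_blinks(ear_per_frame, threshold=0.2, min_frames=2):
    """Count blink events: runs of at least `min_frames` consecutive
    frames whose EAR drops below the threshold."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```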

As part of this framework, four attributes (gender, age, activity, and time) are taken as input to describe the person in the video. Based on these attributes, the authors query a pre-configured database of typical blinking pattern data and compare it with the blinking pattern measured in the video. Based on this comparison, they determine whether the video is likely a deepfake.

You can check out the survey paper I mentioned earlier and the related work section of the GANprintR paper to learn about more hand-crafted feature-based detection techniques.

 

Learning-Based Features

In this category we’ll explore learning-based methods for detecting deepfakes. Many of these methods use convolutional neural networks (CNNs) to learn the features needed for deepfake detection.

Andreas Rössler et al., in the FaceForensics++ paper, evaluate several learning-based methods and one hand-crafted feature extraction method on their ability to detect forgery at different video quality levels. They pre-extract the facial region from the input image (and take a center crop or resize it for certain methods) before running each detection method. They found that all approaches achieved high accuracy on raw input data, but performance dropped on compressed videos. Among the tested methods, the XceptionNet model achieved the highest performance across all video quality levels.

This shows a number of architectures used for deepfake detection, and their accuracy on a variety of input data.
Source: https://arxiv.org/pdf/1901.08971.pdf
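As a rough illustration of this kind of learning-based pipeline, here is a hedged sketch of fine-tuning an Xception backbone (via the timm library) as a binary real/fake classifier on pre-cropped face images. The hyperparameters and training step are placeholders and do not reproduce the FaceForensics++ authors’ setup.

```python
import timm
import torch
from torch import nn

# Xception backbone with a two-class head (0 = real, 1 = fake).
model = timm.create_model("xception", pretrained=True, num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(face_batch, labels):
    """face_batch: (B, 3, 299, 299) tensor of pre-cropped faces,
    labels: (B,) tensor of 0/1 class indices."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(face_batch), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```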

 

Some deepfake creation methods can’t generate videos that are temporally consistent. Artifacts that arise due to this inconsistency may provide good cues for video-based deepfake detection.

One way to incorporate temporal information along with spatial information for training a model is by using 3D CNNs. On that note, Yaohui Wang et al. analyze the ability of a few 3D CNNs (specifically I3D, 3D ResNet and 3D ResNeXt) to detect manipulated video. You can refer to their paper for more details on their various experiments.
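Here is a small sketch of that idea using torchvision’s pretrained 3D ResNet-18 (r3d_18) as a stand-in for the I3D / 3D ResNet / 3D ResNeXt models examined in the paper; the clip dimensions and two-class head are illustrative.

```python
import torch
from torch import nn
from torchvision.models.video import r3d_18

# 3D ResNet-18 pretrained on Kinetics, with its classifier head replaced
# by a two-class (real vs. fake) output layer.
model = r3d_18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)

# A video clip is a 5D tensor: (batch, channels, frames, height, width).
clip = torch.randn(1, 3, 16, 112, 112)
logits = model(clip)                            # shape (1, 2)
prob_fake = torch.softmax(logits, dim=1)[0, 1]  # estimated probability of "fake"
```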

Irene Amerini et al. suggest using optical flow to detect deepfakes. The general idea is that deepfake videos may have discrepancies in motion across frames (such as unusual movements of facial parts) which optical flow can capture. For a given frame f(t), they use the PWC-Net model to estimate the forward optical flow OF(f(t), f(t+1)), which depicts the apparent motion of various elements in the scene. The extracted optical flow values are transformed into an RGB image format and passed to their Flow-CNN to detect deepfakes.
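The sketch below shows the general idea using OpenCV’s Farnebäck optical flow as a simple stand-in for PWC-Net, together with the common HSV encoding that turns a flow field into an RGB image; it is not the authors’ implementation.

```python
import cv2
import numpy as np

def flow_to_rgb(prev_frame, next_frame):
    """Estimate dense optical flow between two consecutive BGR frames and
    encode it as an RGB image (hue = direction, brightness = magnitude),
    which can then be fed to a CNN-based detector."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros_like(prev_frame)
    hsv[..., 0] = ang * 180 / np.pi / 2                       # direction -> hue
    hsv[..., 1] = 255                                         # full saturation
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
```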

 

Artifacts

At present, deepfake creation methods are not perfect. This means there is often evidence which we can analyze to deduce whether a piece of media was manipulated and/or is a deepfake. In this category, we will explore some detection methods that look for this evidence, often called artifacts.

Some forgery detection methods work by observing the Photo Response Non-Uniformity (PRNU) pattern in images. The authors of the Noiseprint paper note that, as observed in the seminal paper by Lukas et al., each individual device leaves a specific mark (the PRNU pattern) on all the images it acquires, due to imperfections in the device manufacturing process. The absence of the expected PRNU may indicate manipulation, so PRNU-based methods can be used to detect deepfakes. However, these methods do have some drawbacks, one of which is that a large number of images from the same device is needed to estimate the pattern reliably.

In another paper, Francesco Marra et al. show that each GAN leaves a specific “fingerprint” in its generated images, similar to real-world cameras marking images with traces of their PRNU. The authors present an experiment showing evidence of GAN fingerprints.

For an image Xi generated by a given GAN, they mention that the fingerprint represents a disturbance unrelated to the image semantics. To obtain the fingerprint, they first calculate the noise residual Ri using Ri = Xi − f(Xi), where the function f is a denoising filter. They assume that this residual is the sum of a non-zero deterministic component (the fingerprint, F) and a random noise component (Wi). Hence, Ri = F + Wi.

The random noise component largely cancels out when the residuals of many different images generated by the same GAN are averaged. This average therefore gives an estimate of the GAN fingerprint.

A look at GAN fingerprints as they relate to deepfake detection
Source: https://arxiv.org/pdf/1812.11842.pdf
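Here is a minimal sketch of this residual-averaging procedure, using a simple Gaussian blur as a stand-in for the denoising filter f used in the paper; the filter choice and its parameters are illustrative.

```python
import cv2
import numpy as np

def noise_residual(image):
    """R_i = X_i - f(X_i), with a basic Gaussian blur standing in for
    the paper's denoising filter f."""
    denoised = cv2.GaussianBlur(image, (5, 5), sigmaX=1.0)
    return image.astype(np.float32) - denoised.astype(np.float32)

def estimate_fingerprint(images):
    """Average the residuals of many images from the same GAN; the random
    noise terms W_i largely cancel, leaving an estimate of the fingerprint F."""
    return np.mean([noise_residual(img) for img in images], axis=0)
```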

 

In one of their experiments, the authors explored the ability of the fingerprints to tell apart images of different origin. They considered two different GAN architectures:

  • GAN A: A Cycle-GAN trained to convert orange images to apple images.
  • GAN B: A Progressive-GAN trained to generate kitchen images.

They suggest that, based on image-to-fingerprint correlation or similar indicators, meaningful fingerprints should allow one to decide which of the two GANs generated a given image. It was observed that:

  • Cross-GAN correlations (correlation between the residual of an image created by one GAN and the fingerprint of a different GAN) are evenly distributed around zero, indicating no correlation between generated images and the unrelated fingerprint.
  • Same-GAN correlations (correlation between the residual of an image created by one GAN and the fingerprint of the same GAN) are markedly larger than zero, indicating a significant correlation with the correct fingerprint.

This result is interesting because we can potentially use it for detecting deepfakes, and even identifying the source of a deepfake. However, the authors mention that more research is necessary to assess the characteristics, usability and robustness of these fingerprints.
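Continuing the sketch above (and reusing its noise_residual helper), a simple way to exploit this is to attribute an image to whichever candidate GAN’s fingerprint correlates most strongly with the image’s residual. The normalized-correlation decision rule here is an illustrative stand-in for the indicators used in the paper.

```python
import numpy as np

def normalized_correlation(a, b):
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))

def attribute_source(image, fingerprints):
    """fingerprints: dict mapping a GAN name to its estimated fingerprint
    (see estimate_fingerprint above). Returns the best-matching GAN."""
    residual = noise_residual(image)   # helper from the previous sketch
    scores = {name: normalized_correlation(residual, fp)
              for name, fp in fingerprints.items()}
    return max(scores, key=scores.get)
```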

The work by Ning Yu et al. presents another elaborate investigation into learning and analyzing GAN fingerprints. One of their interesting claims is that even minor differences in GAN training can result in different fingerprints, which enables fine-grained model authentication. Overall, their study provides good insight into the viability and capabilities of GAN fingerprints.

 

Challenges Facing Deepfake Detection

There are several challenges limiting the accuracy levels of deepfake detection methods. Let’s look at some below:

 

Degradation of Quality

Media that is shared over the internet may experience a loss in quality due to data compression, resizing, noise, and so on. This might pose problems to some deepfake detection algorithms.

As mentioned earlier, the paper FaceForensics++ compares the performance of a few forgery detection methods at different video quality levels. They noticed that while the methods had high accuracy on raw input data, performance dropped on compressed video.

It’s very important to develop deepfake detection methods that are robust to this kind of quality degradation. Deepfakes and forged media that circulate widely online will typically appear at a variety of different quality levels.
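One common mitigation, separate from any specific paper discussed above, is to expose the detector to the same degradations during training. Here is a small, hedged sketch of JPEG-compression augmentation using OpenCV; the quality range is an arbitrary illustrative choice.

```python
import random
import cv2

def random_jpeg_compression(image, quality_range=(30, 90)):
    """Re-encode a training image at a random JPEG quality so the detector
    also learns from compressed inputs, not just pristine ones."""
    quality = random.randint(*quality_range)
    ok, buffer = cv2.imencode(".jpg", image, [cv2.IMWRITE_JPEG_QUALITY, quality])
    return cv2.imdecode(buffer, cv2.IMREAD_COLOR)
```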

 

Artifact and Fingerprint Removal

It is possible that the artifacts or “fingerprint” information used for detection can be removed to trick deepfake detectors. For instance, the GANprintR paper introduces a simple auto-encoder-based method for removing the GAN fingerprint from synthetic fake images. This fools facial manipulation detection systems while preserving the visual quality of the image. Strategies like this are a threat to the accuracy of certain deepfake detection methods.
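To give a feel for the kind of model involved, here is a toy convolutional auto-encoder in the spirit of GANprintR: trained only to reconstruct real face images, so that when applied to a synthetic face the high-frequency fingerprint tends to be smoothed away in the reconstruction. The architecture below is a simplified illustration, not the one used in the paper.

```python
import torch
from torch import nn

class FingerprintRemover(nn.Module):
    """Toy auto-encoder: trained with a reconstruction loss (e.g. MSE) on
    real faces only, then applied to fakes to suppress GAN fingerprints."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```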

Moreover, if deepfake detection methods focus on very specific traits (such as the abnormal blinking rate in a video), it’s likely people will attempt to develop improved deepfake creation methods that do not have those traits.

 

Poor Generalization and Evolution of Deepfake Technology

Put simply, some deepfake detection strategies may not generalize well to content and/or manipulation techniques that differ from their training domain.

In one of their experiments (on cross-manipulation generalization), Yaohui Wang et al. trained their 3D CNN detectors on real videos and on fake videos created using three manipulation techniques. When they tested the trained models on deepfakes created by a technique the models were not trained on, they observed a drop in performance compared to their other experiments.

The ability to generalize to unknown domains is quite important in deepfake detection. For one, when applying deepfake detection in the real world (for example, to detect disinformation) we most likely won’t know the true source of the content or the type of manipulation applied. Also, as newer deepfake creation methods are developed, poorly generalizing strategies will need to be constantly updated to cover these new techniques.

Some strategies to detect deepfakes may be able to generalize to unknown domains more easily than others. It is important to identify such strategies and create detection methods that can adapt to unconstrained settings.

 

Conclusion

In this article I looked at three categories of deepfake detection techniques, and explored some general challenges we may face in the future. I hope this article gives you a general idea of currently developing deepfake detection techniques. The ideas I present here are far from exhaustive, and I encourage you to dive in yourself if you want to learn more about current techniques and challenges involved in deepfake detection.


 

If you’d like to learn more about deepfake developments, be sure to check out our dedicated article here. For more of Bharath’s technical articles, see the related resources below and sign up to the Lionbridge AI newsletter for interviews and articles delivered directly to your inbox.

The Author
Bharath Raj

Bharath is a curious learner who explores the depths of Computer Vision and Machine Learning. He often writes code to build hobby projects and jots down his findings and musings as blogs. You can check out more of his work on Medium and GitHub @thatbrguy.
