Is Modern Facial Recognition Biased?

Article by Hengtee Lim | January 20, 2020

In late December 2019, a bombshell study revealed cracks in facial recognition technology. According to the National Institute of Standards and Technology (NIST), current facial recognition algorithms are far less accurate at identifying African-American and Asian faces than Caucasian faces.

The study tested 189 algorithms from 99 developers, including Intel, Microsoft, and Tencent. It reported that many algorithms falsely identified Asian and African-American faces 10 to 100 times more often than Caucasian faces. These results call into question the overall efficacy of facial recognition technology, which is seeing increased use in surveillance, security, and law enforcement worldwide.

Increased Scrutiny

The NIST study arrives at a time of increased scrutiny of facial recognition technology, its role, and its regulation. For example, in December Democratic lawmakers in the US asked the Department of Housing and Urban Development to review the use of facial recognition in federally assisted housing, citing concerns about biased technology. More recently, Fight for the Future has called for the technology to be banned on college campuses.

In related news, the AI Now Institute has called for a ban on emotion recognition technology in its 2019 report. Citing a lack of scientific grounding, the institute argues that the technology – which analyzes facial expressions to identify emotions – should not be used to make decisions that affect human lives, such as hiring, student performance, and pain assessment.

While tech giants including Microsoft and Amazon have called for increased oversight and regulation, the growing use of facial recognition in law enforcement and surveillance has many critics worried about privacy, bias, and accountability going into 2020.

Positive Developments in Facial Recognition

On the flip side, facial recognition is also being put to positive use in a variety of areas. A recent Time article introduced Civil War Photo Sleuth, a website that uses facial recognition to identify soldiers in American Civil War photographs. This human-assisted technology is a fantastic example of machine learning algorithms supporting the understanding, verification, and discovery of historical materials.

Elsewhere, Bosch is developing in-vehicle facial recognition systems that scan drivers’ faces and alert them when they are not paying attention to the road. The system can issue warnings when a driver appears sleepy, or when they take their eyes off the road for a set period of time.


Directions in 2020?

As facial recognition plays a bigger role in everyday life, and as we come to better understand it, it’s natural for our level of scrutiny to increase. So expect further calls for regulation in 2020, along with more research and deeper discussions on the topic of bias and fairness. However, because the technology is often deployed faster than it can be regulated, we can also expect more cases of bias.

Ultimately, however, this increased scrutiny is both positive and necessary. As we design and develop new facial recognition systems, it’s important to examine them closely; their effects on our society, and on how we function within it, will be far-reaching. As we build the technologies that shape our future, we must consider their long-term consequences.

For a look at exactly how facial recognition technology works and what it’s used for, see our in-depth primer here.

Want more AI news? Sign up to the Lionbridge Newsletter
The Author
Hengtee Lim

Hengtee is a writer with the Lionbridge marketing team. An Australian who now calls Tokyo home, you will often find him crafting short stories in cafes and coffee shops around the city.
