The Facial Recognition AI Timeline of 2020

Article by Hengtee Lim | October 12, 2020

Facial recognition companies have been around since the early 2000s. But it wasn’t until around 2010 that the technology really began to take off; development speed exploded, more companies entered the market, and implementation spread.

As we enter the second half of 2020, the development of facial recognition technology shows no signs of slowing. This is particularly true in the areas of surveillance, security, and law enforcement. However, as we learn more about the facial recognition companies developing and applying this tech to a growing number of industries, many are calling for further research into the accuracy, privacy, and potential bias of these systems.

Think of the following article as an AI timeline for facial recognition news in 2020. We will update it regularly with the aim of creating a comprehensive look at where the technology is used, its evolving regulations, and what impact it is having across the world.


December 2019

In late December of 2019, a study revealed cracks in facial recognition technology. According to the National Institute of Standards and Technology (NIST), current facial recognition algorithms are far less accurate at identifying African-American and Asian faces than Caucasian faces.

The study tested 189 algorithms from 99 developers, including Intel, Microsoft, and Tencent. It reported that many algorithms falsely identified Asian and African-American faces 10 to 100 times more often than Caucasian faces. These results called into question the overall efficacy of facial recognition technology.

The NIST study arrived at a time of increased scrutiny of facial recognition technology and its regulation. Democratic lawmakers in the US asked the Department of Housing and Urban Development to review the use of facial recognition in federally assisted housing, a request stemming from concerns about biased technology.

The AI Now research institute also called for a ban on emotion recognition technology in its 2019 report. Citing a lack of scientific grounding, it argued that the technology should not be allowed to make decisions that impact human lives, pointing to job hiring, student performance, and pain assessment as examples.


January 2020

In January, the New York Times reported on a top facial recognition company called Clearview AI. The report detailed how the start-up scraped the open web to collect billions of photos of people, using them to build a database from which new photos could be identified in seconds. The report looked at the company’s long client list of law enforcement agencies and the hazy legality of compiling and selling a database of highly personal information.

At the same time, the digital rights advocacy group Fight for the Future launched a campaign calling for a ban on facial recognition technology on college campuses. In a written statement, its deputy director said, “This type of invasive technology poses a profound threat to our basic liberties, civil rights, and academic freedom.”


February 2020

In early February, the Metropolitan Police (Met) began operational use of facial recognition surveillance, starting with the Stratford Centre retail complex in east London. The Met’s commander cited a false positive rate of one in a thousand, but research by the campaign group Big Brother Watch found that during public trials 93% of the people flagged by the system were misidentified.
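These two figures are not necessarily contradictory: a false positive rate is measured against everyone scanned, while Big Brother Watch’s figure describes the share of flagged matches that turned out to be wrong. Because the vast majority of people walking past a camera are not on any watchlist, even a tiny per-scan error rate can mean that most alerts are false. The short sketch below, using purely illustrative numbers (not the Met’s or Big Brother Watch’s actual data), shows how both statistics can hold at once.

```python
# Illustrative only: all figures below are assumptions for demonstration,
# not the Met's or Big Brother Watch's reported numbers.
scanned = 100_000                # faces scanned by a live system
watchlisted = 30                 # people in the crowd who are on a watchlist
false_positive_rate = 1 / 1000   # the "one in a thousand" claim
true_positive_rate = 0.70        # assumed sensitivity of the system

false_alerts = (scanned - watchlisted) * false_positive_rate   # ~100 alerts
true_alerts = watchlisted * true_positive_rate                 # ~21 alerts

share_wrong = false_alerts / (false_alerts + true_alerts)
print(f"Share of alerts that are misidentifications: {share_wrong:.0%}")
# -> roughly 83% of alerts are wrong, even though only 0.1% of scans
#    produce a false alert
```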

In the latter half of February, reporting by OneZero revealed NEC as another key player among facial recognition companies. The company worked with clients in the US, UK, Japan, and India, and had been steadily increasing its foothold in the industry over the previous ten years.

In other facial recognition news, BuzzFeed reported that Clearview AI’s clients went well beyond law enforcement agencies and also included ICE, Walmart, and the NBA, among others. It reported that Clearview’s technology was used by more than 2,200 law enforcement departments across 27 countries. This report not only revealed how widespread the technology was, but also called into question claims by the company’s CEO that Clearview was “strictly for law enforcement.”

Amidst all of this, lawmakers in both the US and Scotland began looking into regulating law enforcement’s use of the technology. The proposed regulations aimed to ban the technology until formal government guidelines were established, and to prevent it from infringing on privacy and civil rights.


March 2020

As COVID-19 began to spread across the world in the early months of 2020, facial recognition companies were sent scrambling to adapt to the growing number of people in masks, particularly in Asia. However, by early March a Chinese company by the name of Hanvon claimed it had developed a way to identify people even when they were wearing masks. The report stated that the Ministry of Public Security was already using the technology to identify and track people.

Other news outlets later reported that the need to identify people wearing masks had resulted in facial image datasets of masked individuals being sold illegally online. Sellers claimed their data was scraped from public websites and social media, and sometimes included scans of people entering office buildings and neighborhoods.


April 2020

In late April it was reported that Russia was also using facial recognition technology to control the spread of the coronavirus. Like China, Russia had been developing a facial recognition system for several years and was now using it to fight the pandemic. The report detailed how the surveillance systems were used to monitor individuals who had tested positive for the virus or who were deemed high-risk.


May 2020

At the beginning of May, NBC News reported that Clearview AI was in discussions with federal and state agencies to help with contact tracing of COVID-19. The company’s CEO said, “We mainly serve government and law enforcement, and we’ve seen more demand for solutions around coronavirus and how to help things like tracing people who may have had the virus.” This news led American senators to call for information on exactly which state and federal agencies were talking with the company. Senator Ed Markey commented, “I’m concerned that if this company becomes involved in our nation’s response to the coronavirus pandemic, its invasive technology will become normalized, and that could spell the end of our ability to move anonymously and freely in public.”

Meanwhile, legislators in California prepared to debate legislation that would regulate the use of facial recognition. The bill, introduced as Assembly Bill 2261, would allow companies and government agencies to use the technology provided they give prior notice.


June 2020

In California, Assembly Bill 2261 failed to pass, effectively stopping companies from using the technology for the immediate future.

The biggest surprise in recent facial recognition news came when IBM’s CEO wrote a letter to Congress, stating, “IBM no longer offers general purpose IBM facial recognition or analysis software. IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.”

Although some argued that IBM was no longer a front-running developer of the technology, the announcement was followed by similar actions across the tech industry. Microsoft announced it would halt all sales of facial recognition technology to American police departments, while Amazon put a one-year hold on police use of its technology to allow governments to establish rules for regulation.

It was also revealed in June that the American Civil Liberties Union (ACLU) had filed a formal complaint against the Detroit police over a wrongful arrest directly related to facial recognition technology. According to the report, police arrested an African American man after their facial recognition system falsely matched his photo with security footage of a shoplifter. He was arrested on his front lawn and spent 30 hours in custody. Later, the Detroit Police Chief admitted to a roughly 96% error rate in the department’s facial recognition technology.

The day after the above report, lawmakers in the United States introduced legislation to ban the use of facial recognition technologies by all federal agencies without explicit authorization from Congress. The legislation has yet to pass, but it would strongly limit how the technology is used for surveillance.

In mid-June, Boston’s city council voted unanimously to ban the use of facial recognition technology. The ban also prohibits city officials from obtaining facial surveillance data from third parties, and it makes Boston the second-largest city to ban the technology, behind San Francisco.


July 2020

As of July, the facial recognition company Clearview AI will no longer offer its technology in Canada. This is the result of an investigation by Canada’s Privacy Commissioner into both Clearview AI and the Royal Canadian Mounted Police, which was Clearview’s last client in the country. Other investigations into the company are still ongoing.

Portland also unveiled new legislation that would ban facial recognition in privately owned businesses and publicly accessible spaces. The legislation is not a total ban, however; the technology could still be legally used at airports, places of worship, and public schools. The vote on the legislation is set to take place on August 13.

A report by the National Institute of Standards and Technology (NIST) showed that 89 commercial facial recognition algorithms struggled on faces partially covered by protective masks. This highlighted that as the COVID-19 pandemic changes how people live, facial recognition companies will need to evolve with those changes.

It was also reported that big-tech companies Amazon, Google parent company Alphabet, and Microsoft were all being sued for using people’s pictures without their consent, which violates a privacy statute in Illinois. These pictures were used to train their facial recognition technology. Near the end of the month it was also revealed that Facebook would pay a $650 million settlement for the same violation.


August 2020

Portland, Maine, became the 13th city in the US to ban facial recognition technology. The blanket ban prohibits city officials and local police from using facial recognition technology. This came amidst a report by the US Government Accountability Office that the number of businesses using facial recognition technology to monitor the spread of COVID-19 was increasing, but that accuracy and privacy were still concerns.


September 2020

IBM wrote to the US Commerce Department saying it should control the export of facial recognition systems to repressive regimes that might use the tech to commit human rights violations. The Commerce Department is still considering whether to do so.

The Department of Homeland Security also acknowledged in September that data from a facial recognition pilot program was hacked in 2019 and later leaked on the dark web. The breach involved 184,000 images, of which at least 19 made it onto the dark web.


Directions in the second half of 2020

Though there is a trend towards increased regulation of facial recognition technology, research and implementation continue worldwide. To put the growth into perspective, Global Market Insights, Inc. recently released a report forecasting that the facial recognition market will reach a valuation of $12 billion by 2026.

It’s also important to recognize that the use and perception of facial recognition companies and technology differ across the world. How the technology develops will depend on factors including which enterprises research and develop it, the extent of government interest and use, and the capacity in which it is deployed.

We’ll endeavor to keep this post updated with facial recognition news as it’s reported. You can keep up to date by subscribing to our newsletter. And if you’d like more insight into current AI technologies and their impact on the world, be sure to check out the related articles below.

The Author
Hengtee Lim

Hengtee is a writer with the Lionbridge marketing team. An Australian who now calls Tokyo home, you will often find him crafting short stories in cafes and coffee shops around the city.
