SecurityWeek

Massive Errors Found in Facial Recognition Tech: US Study

Facial recognition systems can produce wildly inaccurate results, especially for non-whites, according to a US government study released Thursday that is likely to raise fresh doubts on deployment of the artificial intelligence technology.

The study of dozens of facial recognition algorithms found “false positive” rates for Asians and African Americans as much as 100 times higher than for whites.

The researchers from the National Institute of Standards and Technology (NIST), a government research center, also found two algorithms assigned the wrong gender to black females almost 35 percent of the time.

The study comes amid widespread deployment of facial recognition for law enforcement, airports, border security, banking, retailing, schools and for personal technology such as unlocking smartphones.

Some activists and researchers have argued that the potential for errors is too great, that mistakes could result in the jailing of innocent people, and that the technology could be used to build databases vulnerable to hacking or misuse.

The NIST study found both “false positives,” in which an individual is mistakenly identified, and “false negatives,” where the algorithm fails to accurately match a face to a specific person in a database.

“A false negative might be merely an inconvenience — you can’t get into your phone, but the issue can usually be remediated by a second attempt,” said lead researcher Patrick Grother.

“But a false positive in a one-to-many search puts an incorrect match on a list of candidates that warrant further scrutiny.”
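Grother's one-to-many distinction can be made concrete with a toy similarity search. The sketch below is purely illustrative: it assumes cosine similarity over made-up two-dimensional "embeddings" and an arbitrary match threshold, whereas the systems NIST tested use high-dimensional learned features and carefully calibrated thresholds.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def one_to_many_search(probe, gallery, threshold):
    """Return indices of gallery embeddings whose similarity to the probe
    meets the threshold -- the 'list of candidates that warrant further
    scrutiny' Grother describes."""
    return [i for i, face in enumerate(gallery)
            if cosine_similarity(probe, face) >= threshold]

# Toy 2-D "embeddings": index 0 is the enrolled person, 1 and 2 are strangers.
probe = (1.0, 0.0)
gallery = [(0.9, 0.1),   # the true match
           (0.0, 1.0),   # a very different face
           (0.8, 0.6)]   # a moderately similar stranger

# A loose threshold admits a stranger: a false positive lands on the list.
print(one_to_many_search(probe, gallery, 0.70))   # [0, 2]

# An overly strict threshold drops even the true match: a false negative.
print(one_to_many_search(probe, gallery, 0.999))  # []
```

The trade-off the researchers describe falls out of the threshold: lowering it inflates the candidate list with false positives, while raising it risks false negatives, and an algorithm whose similarity scores skew by demographic shifts that balance unevenly across groups.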

The study found US-developed face recognition systems had higher error rates for Asians, African Americans and Native American groups, with the American Indian demographic showing the highest rates of false positives.

However, some algorithms developed in Asian countries produced similar accuracy rates for matching between Asian and Caucasian faces — which the researchers said suggests these disparities can be corrected.

“These results are an encouraging sign that more diverse training data may produce more equitable outcomes,” Grother said.

Nonetheless, Jay Stanley of the American Civil Liberties Union, which has criticized the deployment of face recognition, said the new study shows the technology is not ready for wide deployment.

“Even government scientists are now confirming that this surveillance technology is flawed and biased,” Stanley said in a statement.

“One false match can lead to missed flights, lengthy interrogations, watchlist placements, tense police encounters, false arrests or worse. But the technology’s flaws are only one concern. Face recognition technology — accurate or not — can enable undetectable, persistent, and suspicionless surveillance on an unprecedented scale.”

Related: San Francisco Bans Facial Recognition Use by Police

Related: Dismantling the Myths Surrounding Facial Recognition

Written by AFP, 2023
