
The Art Exhibition That Fools Facial Recognition Systems

The most boring art exhibition in the world has been launched online. It comprises just 100 images of the same painting: 100 copies of the Mona Lisa. But all is not what it seems – and that’s the whole point. Humans see 100 identical Mona Lisa images; but facial recognition systems see 100 different celebrities.

The exhibition is the brainchild of Adversa, a startup founded to find and help mitigate the inevitable, exploitable insecurities in artificial intelligence. In this instance, the weaknesses of facial recognition systems are highlighted.

The exhibition is predicated on the concept of an NFT sale. Security professionals who might dismiss NFTs as popular contemporary gimmickry should not be put off – the concept is used merely to attract a wider public audience to the insecurity of facial recognition. The purpose of the exhibition is altogether more serious than NFTs.

The exhibition has 100 Mona Lisa images. “All look almost the same as the original one by da Vinci for people, though AI recognizes them as 100 different celebrities,” explains Adversa in a blog report. “Such perception differences are caused by the biases and security vulnerabilities of AI called adversarial examples, that can potentially be used by cybercriminals to hack facial recognition systems, autonomous cars, medical imaging, financial algorithms – or in fact any other AI technology.”

How AI Fools Facial Recognition

This demonstration of tricking AI is based on a library of 8,631 different celebrities taken from published photographs. The facial recognition model is FaceNet, trained on the most popular facial recognition dataset, VGGFace2. 

VGGFace2 is described as a dataset for recognizing faces across pose and age. It comprises more than 3 million images divided into more than 9,000 classes – making the dataset a popular choice for training deep learning models for facial recognition. 

FaceNet is Google’s face recognition model. A June 2021 report by Analytics Vidhya noted, “We checked 4 deep learning models namely, FaceNet (Google), DeepFace (Facebook), VGGFace (Oxford), and OpenFace (CMU). Out of these 4 models FaceNet was giving us the best result. In general, FaceNet gives better results than all the other 3 models.”
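FaceNet does not classify faces directly; it maps each face image to a fixed-length embedding vector, and recognition reduces to a nearest-neighbor search over the embeddings of known identities. The matching step can be sketched as follows. This is a minimal illustration, not FaceNet itself: the 4-dimensional vectors, gallery names, and distance threshold are made up, standing in for FaceNet's real 512-dimensional embeddings.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, gallery, threshold=1.0):
    """Return the closest known identity, or None if no gallery
    embedding falls within the distance threshold (an 'unknown' face)."""
    best_name, best_dist = None, float("inf")
    for name, emb in gallery.items():
        d = euclidean(probe, emb)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# Toy gallery: in a real system each entry would be a 512-d FaceNet
# embedding of a celebrity photo, like Adversa's 8,631-identity library.
gallery = {
    "celebrity_a": [0.9, 0.1, 0.0, 0.2],
    "celebrity_b": [0.1, 0.8, 0.3, 0.0],
}

print(identify([0.85, 0.15, 0.05, 0.2], gallery))  # prints "celebrity_a"
```

An adversarial image exploits exactly this pipeline: the perturbed pixels shift the computed embedding close to a different identity's gallery entry, even though the picture looks unchanged to a person.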

This is no fabricated demonstration using purpose-designed techniques – it is genuinely representative of facial recognition AI in the real world. When viewing the exhibition, it is worth noting that not one of the images is the original Mona Lisa image – all are manipulated differently to make AI recognize a different celebrity while remaining as Mona Lisa to the human eye.


“In order for the classifier to recognize a stranger, a special pattern called an adversarial patch can be added to a photograph of a person,” explains Adversa. “This patch is generated by a special algorithm that picks up the pixel values in the photo to make the classifier produce the desired value. In our case, the picture makes the face recognition model see some famous person instead of Mona Lisa.”

To fool facial recognition systems in the real world, the attacker needs to get a doctored self-image into the facial recognition database. This could be done by hacking the database, or by socially engineering the photo acceptance procedure. A malicious new employee at a sensitive establishment that uses facial recognition in-house could, in principle, engineer his own enrollment image to be interpreted as the CEO. If he succeeds, all doors will be open, since nothing is closed to the CEO.

Adversa’s purpose in this Mona Lisa exhibition is not simply to demonstrate that facial recognition AI can be subverted, but to make the wider point that AI itself can be subverted for malicious purposes. The big takeaway is that just because your AI system tells you black is black, that doesn’t mean it is.

As far as the exhibition is concerned, each Mona Lisa image can be tracked to the celebrity photo that facial recognition – as opposed to the human eye – will ‘see’.

Related: Cyber Insights 2022: Adversarial AI

Related: Becoming Elon Musk – the Danger of Artificial Intelligence

Related: IRS to End Use of Facial Recognition to Identify Taxpayers

Related: Italy Fines US Facial Recognition Firm

Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security; and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.
