SecurityWeek

Identity & Access

The Art Exhibition That Fools Facial Recognition Systems

The most boring art exhibition in the world has been launched online. It comprises just 100 images of the same painting: 100 copies of the Mona Lisa. But all is not what it seems – and that’s the whole point. Humans see 100 identical Mona Lisa images; but facial recognition systems see 100 different celebrities.

The exhibition is the brainchild of Adversa, a startup founded to find and help mitigate the inevitable, exploitable insecurities in artificial intelligence. In this instance, it highlights the weaknesses in facial recognition systems.

The exhibition is predicated on the concept of an NFT sale. Security professionals who might dismiss NFTs as popular contemporary gimmickry should not be put off – the concept is used merely to attract a wider public audience to the insecurity of facial recognition. The purpose of the exhibition is altogether more serious than NFTs.

The exhibition has 100 Mona Lisa images. “All look almost the same as the original one by da Vinci for people, though AI recognizes them as 100 different celebrities,” explains Adversa in a blog report. “Such perception differences are caused by the biases and security vulnerabilities of AI called adversarial examples, that can potentially be used by cybercriminals to hack facial recognition systems, autonomous cars, medical imaging, financial algorithms – or in fact any other AI technology.”

How AI Fools Facial Recognition

This demonstration of tricking AI is based on a library of 8,631 different celebrities taken from published photographs. The facial recognition model is FaceNet, trained on the most popular facial recognition dataset, VGGFace2. 

VGGFace2 is described as a dataset for recognizing faces across pose and age. It comprises more than 3 million images divided into more than 9,000 classes – making the dataset a popular choice for training deep learning models for facial recognition. 

FaceNet is Google’s face recognition model. A June 2021 report by Analytics Vidhya noted, “We checked 4 deep learning models namely, FaceNet (Google), DeepFace (Facebook), VGGFace (Oxford), and OpenFace (CMU). Out of these 4 models FaceNet was giving us the best result. In general, FaceNet gives better results than all the other 3 models.”
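To see what “recognizing” a face means in a pipeline like this, the final step typically compares an embedding of the probe image against a gallery of enrolled identities and returns the closest match. The sketch below illustrates that matching step with random stand-in vectors; the gallery names and embeddings are hypothetical, not FaceNet output.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical gallery of enrolled identities: one 128-dim embedding each.
# A real pipeline would compute these with a model such as FaceNet; here
# they are random vectors purely for illustration.
gallery = {f"celebrity_{i}": rng.normal(size=128) for i in range(5)}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(embedding):
    """Return the enrolled identity whose embedding is closest by cosine
    similarity -- the usual final step of a recognition pipeline."""
    return max(gallery, key=lambda name: cosine(embedding, gallery[name]))

# A probe embedding close to celebrity_3's enrolled one is matched to
# celebrity_3; an adversarial image works by dragging the probe's embedding
# toward a different enrolled identity instead.
probe = gallery["celebrity_3"] + 0.05 * rng.normal(size=128)
```

The design point is that the classifier never sees “a face”, only a point in embedding space, which is exactly what an adversarial patch manipulates.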

This is no fabricated demonstration using purpose-designed techniques – it is genuinely representative of facial recognition AI in the real world. When viewing the exhibition, it is worth noting that not one of the images is the original Mona Lisa image – all are manipulated differently to make AI recognize a different celebrity while remaining as Mona Lisa to the human eye.

“In order for the classifier to recognize a stranger, a special pattern called an adversarial patch can be added to a photograph of a person,” explains Adversa. “This patch is generated by a special algorithm that picks up the pixel values in the photo to make the classifier produce the desired value. In our case, the picture makes the face recognition model see some famous person instead of Mona Lisa.”
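The patch-generation idea Adversa describes can be sketched in miniature. The toy linear “classifier” below stands in for a deep face model, and a targeted FGSM/PGD-style loop nudges pixel values toward a chosen identity while clamping the total change so the image stays near-identical to a human viewer. All names, sizes, and numbers here are illustrative assumptions, not Adversa’s actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face classifier: a fixed random linear model mapping a
# flattened 64-"pixel" image to logits over 5 hypothetical identities.
# (Illustrative only -- real attacks target deep models such as FaceNet.)
n_pixels, n_classes = 64, 5
W = rng.normal(size=(n_classes, n_pixels))

def predict(x):
    return int(np.argmax(W @ x))

def targeted_perturbation(x, target, eps=0.5, steps=300, lr=0.05):
    """Nudge pixels toward `target` with signed gradient steps (FGSM/PGD
    style), clamping the change to +/-eps per pixel so the image stays
    near-identical to a human viewer."""
    x_adv = x.copy()
    onehot = np.eye(n_classes)[target]
    for _ in range(steps):
        logits = W @ x_adv
        p = np.exp(logits - logits.max())
        p /= p.sum()
        grad = W.T @ (p - onehot)                  # d(cross-entropy)/d(input)
        x_adv = x_adv - lr * np.sign(grad)         # step toward the target class
        x_adv = x + np.clip(x_adv - x, -eps, eps)  # stay inside the eps-ball
    return x_adv

x = rng.normal(size=n_pixels)        # the "Mona Lisa"
target = int(np.argsort(W @ x)[-2])  # impersonate the runner-up identity
x_adv = targeted_perturbation(x, target)
```

The perturbation is bounded per pixel, which is why the doctored image and the original look the same to a person while the model’s answer changes.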

To fool facial recognition systems in the real world, the attacker needs to get a doctored self-image into the facial recognition database. This could be done by hacking the database, or by socially engineering the photo acceptance procedure. A malicious new employee at a sensitive establishment that uses facial recognition in-house could, in principle, engineer his own enrollment image so that it is interpreted as the CEO. If he succeeds, every door will be open to him, since nothing is closed to the CEO.

Adversa’s purpose in this Mona Lisa exhibition is not simply to demonstrate that facial recognition AI can be subverted, but to make the wider point that AI itself can be subverted for malicious purposes. The big takeaway is that just because your AI system says that black is black, that may not be true.

As far as the exhibition is concerned, each Mona Lisa image can be tracked to the celebrity photo that facial recognition – as opposed to the human eye – will ‘see’.

Related: Cyber Insights 2022: Adversarial AI

Related: Becoming Elon Musk – the Danger of Artificial Intelligence

Related: IRS to End Use of Facial Recognition to Identify Taxpayers

Related: Italy Fines US Facial Recognition Firm

Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security; and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.
