
The Art Exhibition That Fools Facial Recognition Systems

The most boring art exhibition in the world has been launched online. It comprises just 100 images of the same painting: 100 copies of the Mona Lisa. But all is not what it seems – and that’s the whole point. Humans see 100 identical Mona Lisa images; but facial recognition systems see 100 different celebrities.

The exhibition is the brainchild of Adversa, a startup founded to find, and help mitigate, the inevitable and exploitable insecurities in artificial intelligence. In this instance, it highlights the weaknesses of facial recognition systems.

The exhibition is predicated on the concept of an NFT sale. Security professionals who might dismiss NFTs as popular contemporary gimmickry should not be put off – the concept is used merely to attract a wider public audience to the insecurity of facial recognition. The purpose of the exhibition is altogether more serious than NFTs.

The exhibition has 100 Mona Lisa images. “All look almost the same as the original one by da Vinci for people, though AI recognizes them as 100 different celebrities,” explains Adversa in a blog report. “Such perception differences are caused by the biases and security vulnerabilities of AI called adversarial examples, that can potentially be used by cybercriminals to hack facial recognition systems, autonomous cars, medical imaging, financial algorithms – or in fact any other AI technology.”
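Adversa does not publish the exact algorithm behind the exhibition, but the principle of an adversarial example is well documented. As a minimal sketch (assuming a generic PyTorch image classifier; the model, input tensor and target label here are illustrative, not Adversa's), a one-step targeted perturbation looks like this:

```python
import torch
import torch.nn.functional as F

def targeted_fgsm(model, image, target_class, epsilon=0.01):
    """One-step targeted FGSM: shift every pixel by at most `epsilon`
    so the classifier's prediction moves toward `target_class`,
    while the change stays imperceptible to a human viewer.
    `image` is a batched tensor of shape (1, 3, H, W) in [0, 1]."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([target_class]))
    loss.backward()
    # Step *against* the gradient sign to reduce loss on the target
    # label, then clamp back to the valid pixel range.
    adversarial = image - epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Real attacks iterate this kind of step many times with a small budget per pixel, which is why the result still looks like the Mona Lisa to a person.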

This demonstration of tricking AI is based on a library of 8,631 different celebrities taken from published photographs. The facial recognition model is FaceNet, trained on the most popular facial recognition dataset, VGGFace2. 

VGGFace2 is described as a dataset for recognizing faces across pose and age. It comprises more than 3 million images divided into more than 9,000 classes – making the dataset a popular choice for training deep learning models for facial recognition. 

FaceNet is Google’s face recognition model. A June 2021 report by Analytics Vidhya noted, “We checked 4 deep learning models namely, FaceNet (Google), DeepFace (Facebook), VGGFace (Oxford), and OpenFace (CMU). Out of these 4 models FaceNet was giving us the best result. In general, FaceNet gives better results than all the other 3 models.”
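To see what such a pipeline looks like in practice, the open-source facenet-pytorch package ships FaceNet weights pretrained on VGGFace2 with exactly this 8,631-class celebrity head. The tooling below is an assumption – Adversa does not say which implementation it attacked – but it shows the kind of classifier being fooled:

```python
from PIL import Image
from facenet_pytorch import MTCNN, InceptionResnetV1

mtcnn = MTCNN(image_size=160)  # face detection and alignment
# FaceNet weights pretrained on VGGFace2; classify=True adds the
# 8,631-class celebrity logits head described above.
resnet = InceptionResnetV1(pretrained='vggface2', classify=True).eval()

face = mtcnn(Image.open('mona_lisa_variant.png'))  # hypothetical file
logits = resnet(face.unsqueeze(0))
print('Predicted celebrity class:', logits.argmax(dim=1).item())
```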

This is no fabricated demonstration using purpose-designed techniques – it is genuinely representative of facial recognition AI in the real world. When viewing the exhibition, it is worth noting that not one of the images is the original Mona Lisa – each has been manipulated differently to make the AI recognize a different celebrity while remaining the Mona Lisa to the human eye.


“In order for the classifier to recognize a stranger, a special pattern called an adversarial patch can be added to a photograph of a person,” explains Adversa. “This patch is generated by a special algorithm that picks up the pixel values in the photo to make the classifier produce the desired value. In our case, the picture makes the face recognition model see some famous person instead of Mona Lisa.”
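Adversa describes the patch generation only in outline. One common way to implement the idea – a sketch under assumed specifics (the optimizer, step count and mask are illustrative, not Adversa's actual algorithm) – is to optimize only the pixels inside a masked region until the classifier outputs the desired label:

```python
import torch
import torch.nn.functional as F

def generate_patch(model, image, target_class, mask, steps=200, lr=0.05):
    """Optimize only the pixels under `mask` (1 inside the patch region,
    0 elsewhere) so the classifier predicts `target_class`."""
    patch = torch.rand_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        patched = image * (1 - mask) + patch * mask  # paste patch onto photo
        loss = F.cross_entropy(model(patched), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        patch.data.clamp_(0, 1)  # keep pixel values valid
    return (image * (1 - mask) + patch * mask).detach()
```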

To fool a facial recognition system in the real world, the attacker needs to get a doctored self-image into the facial recognition database. This could be done by hacking the database, or by socially engineering the photo enrollment procedure. A malicious new employee at a sensitive establishment that uses facial recognition in-house could, for example, engineer his own enrollment photo to be interpreted as the CEO. If he succeeds, all doors will be open, since nothing is closed to the CEO.
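Most deployed systems verify a face by comparing embeddings rather than class labels, so a doctored enrollment photo succeeds if its embedding sits within the match threshold of the target identity. A minimal sketch of that verification step (the 0.7 threshold is illustrative, not taken from any specific product):

```python
import torch
import torch.nn.functional as F

def is_same_person(model, probe_face, enrolled_face, threshold=0.7):
    """Typical verification step: embed both faces and compare by
    cosine similarity. An adversarial enrollment image attacks this
    by steering its embedding toward the target identity."""
    with torch.no_grad():
        probe_emb = model(probe_face.unsqueeze(0))        # live capture
        enrolled_emb = model(enrolled_face.unsqueeze(0))  # photo on file
    return F.cosine_similarity(probe_emb, enrolled_emb).item() > threshold
```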

Adversa’s purpose in this Mona Lisa exhibition is not simply to demonstrate that facial recognition AI can be subverted, but to make the wider point that any AI can be subverted for malicious purposes. The big takeaway: just because your AI system says that black is black does not make it true.

As far as the exhibition is concerned, each Mona Lisa image can be tracked to the celebrity photo that facial recognition – as opposed to the human eye – will ‘see’.

Related: Cyber Insights 2022: Adversarial AI

Related: Becoming Elon Musk – the Danger of Artificial Intelligence

Related: IRS to End Use of Facial Recognition to Identify Taxpayers

Related: Italy Fines US Facial Recognition Firm
