The most boring art exhibition in the world has been launched online. It comprises just 100 images of the same painting: 100 copies of the Mona Lisa. But all is not as it seems – and that is the whole point. Humans see 100 identical Mona Lisa images, but facial recognition systems see 100 different celebrities.
The exhibition is the brainchild of Adversa, a startup founded to find and help mitigate the inevitable, exploitable insecurities in artificial intelligence. In this instance, it highlights the weaknesses in facial recognition systems.
The exhibition is framed as an NFT sale. Security professionals who might dismiss NFTs as contemporary gimmickry should not be put off – the concept is used merely to attract a wider public audience to the insecurity of facial recognition. The purpose of the exhibition is altogether more serious than NFTs.
The exhibition has 100 Mona Lisa images. “All look almost the same as the original one by da Vinci for people, though AI recognizes them as 100 different celebrities,” explains Adversa in a blog report. “Such perception differences are caused by the biases and security vulnerabilities of AI called adversarial examples, that can potentially be used by cybercriminals to hack facial recognition systems, autonomous cars, medical imaging, financial algorithms – or in fact any other AI technology.”
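Adversarial examples of this kind are typically produced by making tiny, targeted changes to an image's pixels, guided by the gradients of the model under attack. The sketch below is a minimal illustration of the general idea using a one-step targeted FGSM-style attack in PyTorch; it is not Adversa's algorithm, and the model and class names are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def targeted_fgsm(model, image, target_class, epsilon=0.03):
    """One-step targeted attack: nudge `image` (a 1x3xHxW tensor in [0, 1])
    toward `target_class` while keeping the change imperceptibly small."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([target_class]))
    loss.backward()
    # Step against the gradient of the targeted loss, bounded by epsilon,
    # so the model's prediction shifts while the image barely changes.
    return (image - epsilon * image.grad.sign()).clamp(0, 1).detach()
```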
This demonstration of tricking AI is based on a library of 8,631 different celebrities taken from published photographs. The facial recognition model is FaceNet, trained on the most popular facial recognition dataset, VGGFace2.
VGGFace2 is described as a dataset for recognizing faces across pose and age. It comprises more than 3 million images divided into more than 9,000 classes – making the dataset a popular choice for training deep learning models for facial recognition.
FaceNet is Google’s face recognition model. A June 2021 report by Analytics Vidhya noted, “We checked 4 deep learning models namely, FaceNet (Google), DeepFace (Facebook), VGGFace (Oxford), and OpenFace (CMU). Out of these 4 models FaceNet was giving us the best result. In general, FaceNet gives better results than all the other 3 models.”
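For readers who want to reproduce the basic pipeline, one widely used open-source FaceNet implementation is the facenet-pytorch package, which ships an Inception-ResNet model pretrained on VGGFace2. Whether Adversa used this particular package is an assumption; the sketch below simply shows how such a model maps a face image to an embedding that can be compared against a gallery of celebrity photos.

```python
import torch
from facenet_pytorch import InceptionResnetV1

# Inception-ResNet backbone with weights pretrained on VGGFace2.
model = InceptionResnetV1(pretrained='vggface2').eval()

def embed(face):
    """Map a (3, 160, 160) face crop, normalized as FaceNet expects,
    to a 512-dimensional embedding."""
    with torch.no_grad():
        return model(face.unsqueeze(0))[0]

def same_person(face_a, face_b, threshold=0.7):
    # Cosine similarity between embeddings; the threshold is illustrative.
    sim = torch.cosine_similarity(embed(face_a), embed(face_b), dim=0)
    return sim.item() > threshold
```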
This is no fabricated demonstration using purpose-designed techniques – it is genuinely representative of facial recognition AI in the real world. When viewing the exhibition, it is worth noting that not one of the images is the original Mona Lisa – all are manipulated differently to make AI recognize a different celebrity while remaining the Mona Lisa to the human eye.
“In order for the classifier to recognize a stranger, a special pattern called an adversarial patch can be added to a photograph of a person,” explains Adversa. “This patch is generated by a special algorithm that picks up the pixel values in the photo to make the classifier produce the desired value. In our case, the picture makes the face recognition model see some famous person instead of Mona Lisa.”
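Adversa does not publish the algorithm itself, but patches like this are commonly found by gradient descent: freeze the image, and optimize only the patch pixels so that the model's output for the patched image moves toward the chosen celebrity. The following is a hedged sketch of that general approach against an embedding-based face model; all names and hyperparameters are illustrative assumptions, not Adversa's code.

```python
import torch

def optimize_patch(model, image, target_embedding, mask, steps=500, lr=0.01):
    """Optimize a patch on `image` (a 3xHxW tensor in [0, 1]) so the model's
    embedding approaches `target_embedding`. `mask` is 1 where the patch
    may change pixels and 0 elsewhere."""
    patch = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        patched = (image + patch * mask).clamp(0, 1)
        embedding = model(patched.unsqueeze(0))[0]
        # Drive the patched image's embedding toward the target celebrity's.
        loss = 1 - torch.cosine_similarity(embedding, target_embedding, dim=0)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (image + patch * mask).clamp(0, 1).detach()
```

In practice, such attacks usually add further terms that keep the perturbation smooth and inconspicuous – which is what allows the altered portraits to still read as the Mona Lisa to the human eye.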
To fool facial recognition systems in the real world, the attacker needs to get a doctored self-image into the facial recognition database. This could be done by hacking the database, or by socially engineering the photo acceptance procedure. A malicious new employee at a sensitive establishment that uses facial recognition in-house could, for example, engineer his own enrollment image to be interpreted as the CEO. If he succeeds, every door will be open, since nothing is closed to the CEO.
Adversa’s purpose in this Mona Lisa exhibition is not simply to demonstrate that facial recognition AI can be subverted, but to make the wider point that AI itself can be subverted for malicious purposes. The big takeaway is that just because your AI system says black is black, that does not make it true.
As far as the exhibition is concerned, each Mona Lisa image can be traced to the celebrity photo that facial recognition – as opposed to the human eye – will ‘see’.
Related: Cyber Insights 2022: Adversarial AI
Related: Becoming Elon Musk – the Danger of Artificial Intelligence
Related: IRS to End Use of Facial Recognition to Identify Taxpayers

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security, and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.