Becoming Elon Musk – the Danger of Artificial Intelligence

A Tel Aviv, Israel-based artificial intelligence (AI) firm, with a mission to build trust in AI and protect AI from cyber threats, privacy issues, and safety incidents, has developed the opposite: an attack against facial recognition systems that can fool the algorithm into misinterpreting the image.

In brief, Adversa.ai has developed a black box attack that fooled PimEyes (described by the Washington Post as one of the most capable face-searching tools on the planet, with a database of more than 900 million images – but it could have been any facial recognition system) into believing that CEO and co-founder Alex Polyakov (it could have been anyone) is actually Elon Musk (it could have been anyone else). Adversa calls this attack ‘Adversarial Octopus’ because, like the octopus, it is adaptable, stealthy, and precise.

Adversa CTO and co-founder Eugene Neelou explained the rationale to SecurityWeek. “It’s important to raise awareness on the topic of AI algorithms’ security in general – be it either facial recognition, as in this case, or any other AI-driven application: from internet platforms and social networks to autonomous cars, voice assistance and so on. All of them are based on deep learning, which has fundamental problems in terms of security.”

When AI and machine learning were first introduced into cybersecurity, they were often perceived – and sometimes sold – as the silver bullet to end hacking. This was never going to be true. History has shown that whenever a new technology is introduced into computing, it is rapidly followed by attacks against or using that same technology.

“[The adversarial octopus] method was developed during one of our AI Red Team projects that involved creating multiple methods to bypass AI-based facial recognition solutions,” continued Neelou. “We decided to check how applicable our attack was against large-scale internet applications, and the attack worked surprisingly well on PimEyes (among other platforms).”

Awareness of the threat to and from AI has grown rapidly over the last few years. In November 2019, Microsoft published a paper titled Failure Modes in Machine Learning, which includes a surprisingly lengthy taxonomy of both intentionally motivated and unintentional failures in AI systems.

More recently, Adversa.ai published its own report on April 21, 2021 titled The Road to Secure and Trusted AI. It notes, “The recent exponential growth of AI has motivated governments, academia and industry to publish more research in the past 2 years than in the previous 2 decades.” But despite this growing awareness of the potential for AI abuse, there is little public knowledge of what it could entail – beyond the deep fakery largely used by the porn industry.

Adversarial Octopus was built to change this. “Our goal was to highlight the problem of securing mission-critical AI systems in real-world scenarios,” explained Neelou. “That’s why we have developed our advanced attack method to be applicable in real settings, and facial recognition is one of our research targets.”

The attack method selected was to inject noise into an image. “Fooling AI systems by injecting noise is not something new. This type of attack is called Evasion and is the most popular way to exploit AI systems,” he continued. “Thousands of Evasion attacks are documented in academia. However, most of them focus on generic image classification and have various limitations. Adversarial Octopus has a number of differences, but the most important one is that it doesn’t require any knowledge about the AI algorithm.”
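For readers unfamiliar with what noise-injection evasion looks like in practice, the following is a minimal sketch of the textbook fast gradient sign method (FGSM) run against a stock torchvision classifier (a recent torchvision release is assumed). It is purely illustrative: it relies on white-box access to the model's gradients, precisely the knowledge Adversarial Octopus reportedly does not need, and it is not Adversa's method.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained classifier used purely for illustration; any differentiable model works.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_evasion(image: torch.Tensor, true_label: int, epsilon: float = 0.01) -> torch.Tensor:
    """Return a visually similar image that the classifier is likely to mislabel."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image.unsqueeze(0)), torch.tensor([true_label]))
    loss.backward()
    # Nudge every pixel a tiny step in the direction that most increases the loss.
    # (A real pipeline would also apply the model's input normalization first.)
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Example: perturb a random 224x224 'image' that is nominally labeled as class 0.
adversarial = fgsm_evasion(torch.rand(3, 224, 224), true_label=0)
```

The point of the sketch is the asymmetry it exposes: a perturbation bounded by a small epsilon is invisible to a human but can be decisive for the model.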

Simplifying – perhaps over-simplifying – the process: it involved two photographs of Polyakov. The original was uploaded to PimEyes to ensure the face was in the database. The second had Adversa’s noise injected into it, designed to make facial recognition algorithms identify the photo as Tesla and SpaceX CEO Elon Musk. Visually, the image was still unadulterated Polyakov – but when PimEyes detected it on the internet, it ingested the image and interpreted it as Elon Musk.
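The targeted, become-someone-else flavor of that process can be sketched in a similar way, again with the caveat that this is a generic white-box illustration against a hypothetical face_encoder module, not Adversa's unpublished black-box technique. The idea is simply to push the photo's embedding toward the target identity's embedding while keeping the pixel changes within an imperceptible budget.

```python
import torch
import torch.nn.functional as F

def impersonation_attack(source_img: torch.Tensor, target_embedding: torch.Tensor,
                         face_encoder, epsilon: float = 0.03,
                         step: float = 0.005, iters: int = 50) -> torch.Tensor:
    """Return a near-identical copy of source_img whose embedding drifts toward target_embedding."""
    adv = source_img.clone().detach()
    for _ in range(iters):
        adv.requires_grad_(True)
        embedding = face_encoder(adv.unsqueeze(0)).squeeze(0)
        # Maximize cosine similarity to the target identity (minimize its negative).
        loss = -F.cosine_similarity(embedding, target_embedding, dim=0)
        loss.backward()
        with torch.no_grad():
            adv = adv - step * adv.grad.sign()
            # Keep the change imperceptible: stay within an epsilon ball of the
            # original photo and within the valid pixel range.
            adv = source_img + (adv - source_img).clamp(-epsilon, epsilon)
            adv = adv.clamp(0.0, 1.0)
    return adv.detach()
```

In a black-box setting such as the one Adversa describes, the gradients above would not be available; attackers typically substitute query feedback or transferable surrogate models, but the objective stays the same: same face to a human, different face to the algorithm.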

Adversarial Octopus turns theory into reality. “There are known incidents with presentation attacks and deepfakes against biometric security,” said Neelou. “Our attack method could be an alternative way to perform such attacks. However, we think that making fake digital identities can also be a lucrative target for fraudsters. We see the threat as broader than just biometry – there are many AI algorithms making critical decisions based on photos. And they can be a bigger target.”

An associated blog notes, “Hacktivists may wreak havoc in the AI-driven internet platforms that use face properties as input for any decisions or further training. Attackers can poison or evade the algorithms of big Internet companies by manipulating their profile pictures.

“Cybercriminals can steal personal identities and bypass AI-driven biometric authentication or identity verification systems in banks, trading platforms, or other services that offer authenticated remote assistance. This attack can be even more stealthy in every scenario where traditional deepfakes can be applied.

“Terrorists or dissidents may secretly use it to hide their internet activities in social media from law enforcement. It resembles a mask or fake identity for the virtual world we currently live in.”

The problem, believes Neelou, is that all deep learning algorithms are fundamentally vulnerable, and there are currently no reliable and universal defenses. Adversa has tested open-source AI models and online AI APIs, and found most of them vulnerable. For facial recognition vendors that rely on AI as their core technology, it is probably the weakest link in their product security today, simply because the threat is so new.
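What probing an online service from the outside might look like can be gauged from the sketch below. It is not Adversa's tooling: the endpoint, payload and response fields are invented for illustration, and any real testing would need the service's permission and would have to respect its terms of service.

```python
import requests

# Hypothetical endpoint and response schema, used only to show the shape of a
# black-box check; every real service has its own API and terms of service.
API_URL = "https://face-search.example.com/v1/identify"

def identify(image_path: str) -> str:
    """Upload an image and return whatever identity label the service reports."""
    with open(image_path, "rb") as f:
        response = requests.post(API_URL, files={"image": f}, timeout=30)
    response.raise_for_status()
    return response.json().get("identity", "unknown")

# Two visually indistinguishable photos should come back with the same name; if
# the perturbed copy is labeled as someone else, the model behind the API is
# susceptible to this class of evasion.
if identify("original.jpg") != identify("perturbed.jpg"):
    print("evasion succeeded: the perturbed photo was identified as someone else")
```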

“Unlike traditional software, every AI system is unique, and there are no universal security patches,” Neelou told SecurityWeek. “Companies must incorporate security testing in their AI development processes – that is, AI Red Teaming. They should also integrate their practices into a cybersecurity lifecycle that includes prediction, prevention, detection, and response capabilities. It’s up to organizations to decide if they want to secure their AI systems by doing everything manually, using open-source tools, or paying for commercial solutions.” But the reality is, AI systems need to be more secure than they currently are – which is what Adversarial Octopus was designed to demonstrate.
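What AI Red Teaming folded into an ordinary development pipeline might look like, in miniature, is sketched below. The toy encoder and the similarity threshold are stand-ins chosen only to keep the example self-contained; a production test would load the vendor's real face model and use adversarial perturbations like those sketched earlier rather than random noise, which is a much weaker probe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def assert_robust_to_noise(encoder: nn.Module, image: torch.Tensor,
                           sigma: float = 0.01, min_similarity: float = 0.95) -> None:
    """Fail loudly if imperceptible random noise meaningfully shifts the face embedding."""
    clean = encoder(image.unsqueeze(0)).squeeze(0)
    noisy_input = (image + sigma * torch.randn_like(image)).clamp(0.0, 1.0)
    noisy = encoder(noisy_input.unsqueeze(0)).squeeze(0)
    similarity = F.cosine_similarity(clean, noisy, dim=0).item()
    assert similarity > min_similarity, f"embedding drifted: similarity={similarity:.3f}"

# Toy stand-in encoder so the sketch runs on its own; the module under test
# would be the vendor's real face-recognition network.
toy_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128))
assert_robust_to_noise(toy_encoder, torch.rand(3, 112, 112))
```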

For now, Adversa is not releasing details of the attack methodology. But, “As research scientists focused on securing AI, we plan to release a whitepaper with technical details of our attack methods,” said Neelou. At that point, the cat will not simply be among the pigeons; thanks to Adversarial Octopus, the cat might appear to be one of the pigeons.

Related: Are AI and Machine Learning Just a Temporary Advantage to Defenders?

Related: The Malicious Use of Artificial Intelligence in Cybersecurity

Related: IBM Describes AI-powered Malware That Can Hide Inside Benign Applications

Related: EU Proposes Rules for Artificial Intelligence to Limit Risks
