Becoming Elon Musk – the Danger of Artificial Intelligence

A Tel Aviv, Israel-based artificial intelligence (AI) firm, with a mission to build trust in AI and protect AI from cyber threats, privacy issues, and safety incidents, has developed the opposite: an attack against facial recognition systems that can fool the algorithm into misinterpreting the image.

In brief, Adversa.ai has developed a black-box attack that fooled PimEyes (described by the Washington Post as one of the most capable face-searching tools on the planet, with a database of more than 900 million images – but it could have been any facial recognition system) into believing that CEO and co-founder Alex Polyakov (it could have been anyone) is actually Elon Musk (it could have been anyone else). Adversa calls the attack ‘Adversarial Octopus’ because, like the octopus, it is adaptable, stealthy, and precise.

Adversa CTO and co-founder Eugene Neelou explained the rationale to SecurityWeek. “It’s important to raise awareness on the topic of AI algorithms’ security in general – be it either facial recognition, as in this case, or any other AI-driven application: from internet platforms and social networks to autonomous cars, voice assistance and so on. All of them are based on deep learning, which has fundamental problems in terms of security.”

When AI and machine learning were first introduced into cybersecurity, they were often perceived – and sometimes sold – as the silver bullet to end hacking. This was never going to be true. History has shown that whenever a new technology is introduced into computing, it is rapidly followed by attacks against or using that same technology.

“[The adversarial octopus] method was developed during one of our AI Red Team projects that involved creating multiple methods to bypass AI-based facial recognition solutions,” continued Neelou. “We decided to check how applicable our attack was against large-scale internet applications, and the attack worked surprisingly well on PimEyes (among other platforms).”

Awareness of the threat to and from AI has grown rapidly over the last few years. In November 2019, Microsoft published a paper titled Failure Modes in Machine Learning, which includes a surprisingly lengthy taxonomy of both intentional and unintentional failures in AI systems.

More recently, Adversa.ai published its own report on April 21, 2021 titled The Road to Secure and Trusted AI. It notes, “The recent exponential growth of AI has motivated governments, academia and industry to publish more research in the past 2 years than in the previous 2 decades.” But despite this growing awareness of the potential for AI abuse, there is little public knowledge of what it could entail – beyond the deep fakery largely used by the porn industry.

Adversarial Octopus was built to change this. “Our goal was to highlight the problem of securing mission-critical AI systems in real-world scenarios,” explained Neelou. “That’s why we have developed our advanced attack method to be applicable in real settings, and facial recognition is one of our research targets.”


The attack method selected was to inject noise into an image. “Fooling AI systems by injecting noise is not something new. This type of attack is called Evasion and is the most popular way to exploit AI systems,” he continued. “Thousands of Evasion attacks are documented in academia. However, most of them focus on generic image classification and have various limitations. Adversarial Octopus has a number of differences, but the most important one is that it doesn’t require any knowledge about the AI algorithm.”
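
For context, the classic academic form of evasion is a white-box attack, in which the attacker uses the model’s own gradients to craft the noise. The sketch below shows the well-known fast gradient sign method (FGSM) against a generic PyTorch image classifier; it is illustrative only – Adversa’s method is black-box, and its details have not been published.

```python
import torch
import torch.nn.functional as F

def fgsm_evasion(model, image, true_label, eps=0.03):
    """Classic white-box evasion (FGSM): shift every pixel a small
    step eps in the direction that increases the classifier's loss.
    Illustrative sketch only; not Adversa's (black-box) method."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # The sign of the gradient gives the worst-case direction per pixel;
    # clamping keeps the result a valid image in [0, 1].
    adv = image + eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```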

Simplifying – perhaps over-simplifying – the process: it involved two photographs of Polyakov. The original was uploaded to PimEyes to ensure the face was in the database. The second had Adversa’s noise injected into it, designed to make facial recognition algorithms recognize the photo as Tesla and SpaceX CEO Elon Musk. Visually, the image was still unadulterated Polyakov – but when PimEyes detected it on the internet, it ingested the image and interpreted it as Elon Musk.

[Image: hacking facial recognition]
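
Adversa has not disclosed how Adversarial Octopus crafts its noise. One published family of black-box techniques, however, needs nothing more than the system’s match score: repeatedly propose a tiny random perturbation and keep it whenever the score for the target identity improves. The sketch below illustrates that general idea only; `query_similarity` is a hypothetical stand-in for whatever feedback the attacked system exposes, and none of this is Adversa’s actual method.

```python
import numpy as np

def black_box_impersonation(source_img, query_similarity,
                            eps=0.03, step=0.005, iters=2000, seed=0):
    """Query-only hill climbing toward a target identity.

    query_similarity(img) -> float is a hypothetical oracle returning
    how strongly the attacked system matches img to the target identity
    (e.g. 'Elon Musk'). No knowledge of the model's internals is used.
    """
    rng = np.random.default_rng(seed)
    adv = source_img.copy()
    best = query_similarity(adv)
    for _ in range(iters):
        # Propose a small random +/- perturbation per pixel.
        candidate = adv + step * rng.choice([-1.0, 1.0], size=adv.shape)
        # Stay inside an L-infinity ball around the source image so the
        # photo still looks like the original person to a human viewer.
        candidate = np.clip(candidate, source_img - eps, source_img + eps)
        candidate = np.clip(candidate, 0.0, 1.0)
        score = query_similarity(candidate)
        if score > best:  # keep only changes that improve the match
            adv, best = candidate, score
    return adv, best
```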

Adversarial Octopus turns theory into reality. “There are known incidents with presentation attacks and deepfakes against biometric security,” said Neelou. “Our attack method could be an alternative way to perform such attacks. However, we think that making fake digital identities can also be a lucrative target for fraudsters. We see the threat as broader than just biometry – there are many AI algorithms making critical decisions based on photos. And they can be a bigger target.”

An associated blog notes, “Hacktivists may wreak havoc in the AI-driven internet platforms that use face properties as input for any decisions or further training. Attackers can poison or evade the algorithms of big Internet companies by manipulating their profile pictures.

“Cybercriminals can steal personal identities and bypass AI-driven biometric authentication or identity verification systems in banks, trading platforms, or other services that offer authenticated remote assistance. This attack can be even more stealthy in every scenario where traditional deepfakes can be applied.

“Terrorists or dissidents may secretly use it to hide their internet activities in social media from law enforcement. It resembles a mask or fake identity for the virtual world we currently live in.”

The problem, believes Neelou, is that all deep learning algorithms are fundamentally vulnerable, and there are currently no reliable and universal defenses. Adversa has tested open-source AI models and online AI APIs, and found most of them vulnerable. For facial recognition vendors that rely on AI as their core technology, AI is probably the weakest link in their product security today, because the threat is so new.

“Unlike traditional software, every AI system is unique, and there are no universal security patches,” Neelou told SecurityWeek. “Companies must incorporate security testing in their AI development processes – that is, AI Red Teaming. They should also integrate their practices into a cybersecurity lifecycle that includes prediction, prevention, detection, and response capabilities. It’s up to organizations to decide if they want to secure their AI systems by doing everything manually, using open-source tools, or paying for commercial solutions.” But the reality is, AI systems need to be more secure than they currently are – which is what Adversarial Octopus was designed to demonstrate.
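
As a concrete example of what such AI red teaming can look like in practice, the sketch below uses the open-source Adversarial Robustness Toolbox (ART) to measure how much accuracy a PyTorch classifier loses under a basic evasion attack. It is a minimal starting point under those assumptions, not a description of Adversa’s own process.

```python
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

def accuracy_drop_under_attack(model, x_test, y_test, eps=0.03):
    """Red-team smoke test: compare clean vs adversarial accuracy.
    x_test: float32 numpy array of images in [0, 1]; y_test: int labels."""
    classifier = PyTorchClassifier(
        model=model,
        loss=nn.CrossEntropyLoss(),
        input_shape=x_test.shape[1:],
        nb_classes=int(y_test.max()) + 1,
        clip_values=(0.0, 1.0),
    )
    # Generate adversarial versions of the test set with a simple attack.
    x_adv = FastGradientMethod(estimator=classifier, eps=eps).generate(x=x_test)
    clean = (classifier.predict(x_test).argmax(axis=1) == y_test).mean()
    robust = (classifier.predict(x_adv).argmax(axis=1) == y_test).mean()
    return clean, robust  # a large gap flags a fragile model
```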

For now, Adversa is not releasing details of the attack methodology. But, “As research scientists focused on securing AI, we plan to release a whitepaper with technical details of our attack methods,” said Neelou. At that point, the cat will not simply be set among the pigeons; thanks to Adversarial Octopus, the cat might appear to be one of the pigeons.

Related: Are AI and Machine Learning Just a Temporary Advantage to Defenders?

Related: The Malicious Use of Artificial Intelligence in Cybersecurity

Related: IBM Describes AI-powered Malware That Can Hide Inside Benign Applications

Related: EU Proposes Rules for Artificial Intelligence to Limit Risks

Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years, he has specialized in information security and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.
