
NIST: No Silver Bullet Against Adversarial Machine Learning Attacks

NIST has published guidance on adversarial machine learning (AML) attacks and mitigations, warning that there is no silver bullet.


NIST has published a report on adversarial machine learning attacks and mitigations, and cautioned that there is no silver bullet for these types of threats. 

Adversarial machine learning, or AML, involves extracting information about the characteristics and behavior of a machine learning system, and manipulating inputs in order to obtain a desired outcome. 

The guidance documents the various types of attacks that can be used to target artificial intelligence systems and warns AI developers and users that there is currently no foolproof method for protecting them. The agency has encouraged the community to attempt to find better defenses. 

The report, titled ‘Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations’ (NIST.AI.100-2), covers both predictive and generative AI. The former uses historical data to forecast future outcomes, while the latter focuses on creating new content.

NIST’s report, authored in collaboration with representatives of Northeastern University and Robust Intelligence Inc, focuses on four main types of attacks: evasion, poisoning, privacy, and abuse.

In the case of evasion attacks, which involve altering an input to change the system’s response, NIST cites attacks on autonomous vehicles as an example, such as creating confusing lane markings that could cause a car to veer off the road. 
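As an illustrative sketch (not NIST’s autonomous-vehicle example), an evasion attack against a toy linear classifier might look like the following, where the model, its weights, and the perturbation budget are all invented for demonstration:

```python
import numpy as np

# Hypothetical toy model: a linear classifier w.x + b > 0 means "safe input".
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return float(np.dot(w, x) + b) > 0

x = np.array([0.5, -0.2, 0.3])   # benign input, classified as safe
print(predict(x))                # -> True

# Evasion: nudge the input against the model's weights within a small
# budget eps (an FGSM-style perturbation), flipping the decision.
eps = 0.6
x_adv = x - eps * np.sign(w)
print(predict(x_adv))            # -> False
```

The perturbation is small per feature, yet it crosses the decision boundary, which is the essence of the lane-marking example: a slight, targeted change to the input produces a drastically different response.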

In a poisoning attack, the attacker attempts to introduce corrupted data during the AI’s training. For example, an attacker could get a chatbot to use inappropriate language by planting numerous instances of such language in conversation records, leading the AI to believe it is common parlance. 
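A minimal sketch of the idea, assuming a deliberately naive “chatbot” that simply learns the most common phrase in its training logs (the phrases and counts here are hypothetical):

```python
from collections import Counter

# Toy "chatbot": its model is just the most frequent phrase in the logs.
clean_logs = ["glad to help"] * 20 + ["how are you"] * 10

def train(logs):
    # Return the single most common training phrase.
    return Counter(logs).most_common(1)[0][0]

print(train(clean_logs))  # -> 'glad to help'

# Poisoning: the attacker plants many copies of an unwanted phrase in
# the training corpus so the model treats it as common parlance.
poison = ["<inappropriate phrase>"] * 50
print(train(clean_logs + poison))  # -> '<inappropriate phrase>'
```

Real poisoning attacks on large models are far subtler, but the mechanism is the same: corrupt the training data, and the learned behavior follows.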

Attackers can also attempt to compromise legitimate training data sources in what NIST describes as abuse attacks.

In privacy attacks, threat actors attempt to obtain valuable data about the AI or its training data by asking the system numerous questions and using the provided answers to reverse engineer the model and find weaknesses.
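The model-extraction flavor of this attack can be sketched with a toy black box (the hidden model, its parameters, and the probe count are all invented for illustration): the attacker only sees outputs, but enough queries let them recover the parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
secret_w = np.array([2.0, -1.0, 0.5])  # hidden model parameters

def black_box(x):
    # The attacker can query the model but cannot see secret_w.
    return float(secret_w @ x)

# Extraction: send many probe inputs, record the answers, and solve
# the resulting linear system for the hidden weights.
X = rng.normal(size=(100, 3))
y = np.array([black_box(x) for x in X])
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(w_hat, secret_w))  # -> True
```

Production models are nonlinear and noisy, so extraction takes far more queries and yields an approximation rather than an exact copy, but the query-and-reverse-engineer loop is the same.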


“Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences,” said NIST computer scientist Apostol Vassilev. “There are theoretical problems with securing AI algorithms that simply haven’t been solved yet. If anyone says differently, they are selling snake oil.” 

Joseph Thacker, principal AI engineer and security researcher at SaaS security firm AppOmni, commented on the new NIST report, describing it as “the best AI security publication” he has seen.

“What’s most noteworthy are the depth and coverage. It’s the most in-depth content about adversarial attacks on AI systems that I’ve encountered. It covers the different forms of prompt injection, elaborating and giving terminology for components that previously weren’t well-labeled. It even references prolific real-world examples like the DAN (Do Anything Now) jailbreak, and some amazing indirect prompt injection work,” Thacker said.

He added, “It includes multiple sections covering potential mitigations, but is clear about it not being a solved problem yet. It also covers the open vs closed model debate. There’s a helpful glossary at the end, which I personally plan to use as extra ‘context’ to large language models when writing or researching AI security. It will make sure the LLM and I are working with the same definitions specific to this subject domain.”

Troy Batterberry, CEO and founder of EchoMark, a company that protects sensitive information by embedding invisible forensic watermarks in documents and messages, also commented, “NIST’s adversarial ML report is a helpful tool for developers to better understand AI attacks. The taxonomy of attacks and suggested defenses underscores that there’s no one-size-fits-all solution against threats; however, understanding of how adversaries operate, and preparedness are critical keys to mitigating risk.”

“As a company that leverages AI and LLMs as part of our business, we understand and encourage this commitment to secure AI development, ensuring robust and trustworthy systems. Understanding and preparing for AI attacks is not just a technical issue but a strategic imperative necessary to maintain trust and integrity in increasingly AI-driven business solutions,” Batterberry added.

Related: UN Chief Appoints 39-Member Panel to Advise on International Governance of Artificial Intelligence

Related: New AI Safety Initiative Aims to Set Responsible Standards for Artificial Intelligence

Written By

Eduard Kovacs (@EduardKovacs) is a managing editor at SecurityWeek. He worked as a high school IT teacher for two years before starting a career in journalism as Softpedia’s security news reporter. Eduard holds a bachelor’s degree in industrial informatics and a master’s degree in computer techniques applied in electrical engineering.

