
SecurityWeek


Eight Vulnerabilities Disclosed in the AI Development Supply Chain


Details of eight vulnerabilities found in the open source supply chain used to develop in-house AI and ML models have been disclosed by AI cybersecurity startup Protect AI. All have CVE numbers, one has critical severity, and seven have high severity.

The vulnerabilities are disclosed via Protect AI’s February Vulnerability Report. They are:

  • CVE-2023-6975: arbitrary file write in MLflow, CVSS 9.8
  • CVE-2023-6753: arbitrary file write on Windows in MLflow, CVSS 9.6
  • CVE-2023-6730: RCE in Hugging Face Transformers via RagRetriever.from_pretrained(), CVSS 9.0
  • CVE-2023-6940: server-side template injection bypass in MLflow, CVSS 9.0
  • CVE-2023-6976: arbitrary file upload patch bypass in MLflow, CVSS 8.8
  • CVE-2023-31036: RCE via arbitrary file overwrite in Triton Inference Server, CVSS 7.5
  • CVE-2023-6909: local file inclusion in MLflow, CVSS 7.5
  • CVE-2024-0964: local file inclusion (LFI) in Gradio, CVSS 7.5
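Several of these flaws reduce to one well-known class of bug: Python's pickle format deserializes data by executing reconstruction instructions embedded in the stream, so loading an untrusted model or index file can run arbitrary code. A minimal, self-contained sketch of that class follows; the `Malicious` class and the `PWNED` marker are illustrative only, not taken from any of the CVEs above:

```python
import os
import pickle

class Malicious:
    # __reduce__ tells pickle how to rebuild this object on load.
    # An attacker can return any callable plus its arguments, so
    # simply unpickling the bytes executes attacker-chosen code.
    def __reduce__(self):
        return (exec, ("import os; os.environ['PWNED'] = 'yes'",))

payload = pickle.dumps(Malicious())   # what a malicious model file contains

# Anywhere a library pickle.load()s attacker-controlled bytes
# (e.g., a model artifact fetched from a public hub), the code runs:
pickle.loads(payload)
print(os.environ.get("PWNED"))        # prints "yes" — proof of execution
```

This is why security tooling treats any pickle-based artifact from an untrusted source as executable code rather than inert data.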

The vulnerable nature of open source code is well understood. For standard code development using OSS libraries, software bills of materials (SBOMs) are designed and used to provide some security assurance. But SBOMs don't cover the open source components used in AI/ML development.


“Traditional SBOMs only tell you what’s happening in the code pipeline,” explained Daryan Dehghanpisheh, co-founder of Protect AI (and former global leader for AI/ML solution architects at AWS). “When you build an AI application, you have three distinctly different pipelines where you must have full provenance. You have a code pipeline, you have a data pipeline, and more critically, you have the machine learning pipeline. That machine learning pipeline is where the AI/ML model is created. It relies on the data pipeline, and it feeds the code pipeline — but companies are blind to that middle machine learning pipeline.”

[Image: a diagram of a data flow]

In a separate blog, Protect AI has called for the development of an AI/ML BOM to supplement SBOMs (software) and PBOMs (product): “The AI/ML BOM specifically targets the elements of AI and machine learning systems. It addresses risks unique to AI, such as data poisoning and model bias, and requires continuous updates due to the evolving nature of AI models.”

Absent this AI/ML BOM, in-house developers are reliant on either their own expertise or that of third parties (such as Protect AI) to discover how vulnerabilities can be manipulated within the hidden machine learning pipeline to introduce flaws to the final model before deployment.
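What such an AI/ML BOM might record can be sketched as a simple manifest covering the three pipelines Dehghanpisheh describes. Every field name, path, version, and hash below is hypothetical — this is not a published Protect AI schema, only an illustration of the provenance an ML-BOM would need to capture:

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    """Content hash used to pin each artifact to an exact version."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical manifest: one entry per pipeline feeding the deployed model.
ml_bom = {
    "model": {
        "name": "example-classifier",            # illustrative name
        "version": "1.0.0",
        "artifact_sha256": sha256_hex(b"<model weights bytes>"),
        "format": "safetensors",                 # pickle-free formats are safer
    },
    "data_pipeline": {
        "training_dataset": "datasets/train.parquet",   # hypothetical path
        "dataset_sha256": sha256_hex(b"<dataset bytes>"),
    },
    "code_pipeline": {
        "training_commit": "<git commit hash>",
        "dependencies": ["mlflow==2.9.2", "transformers==4.36.0"],  # pins illustrative
    },
}

print(json.dumps(ml_bom, indent=2))
```

Hashing each artifact is what ties the deployed model back to the exact dataset and training code that produced it — the provenance that SBOMs alone cannot express.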

Protect AI has two basic methods for AI/ML model vulnerability detection: scanning and bounty hunters. Its Guardian product, introduced in January 2024, can use the output of its AI/ML scanner (ModelScan) to provide a secure gateway. But it is the firm’s community of independent bounty hunters that is particularly effective at discovering new vulnerabilities.

The firm launched what it calls the world’s first AI/ML bug bounty program, huntr, in August 2023. “We now have close to 16,000 members of our community,” said Dehghanpisheh. “When we first launched in August, I think we were getting maybe three a week. Now we get north of 15 a day.”


The bounties are funded from Protect AI’s own capital (it raised $35 million in July 2023, just prior to launching huntr), but Dehghanpisheh believes it is money well spent. Firstly, finding vulnerabilities highlights the complexity of a new technology that is well understood neither by the developers using it nor by the many traditional security companies unfamiliar with the intricacies of machine learning and artificial intelligence.

Secondly, the volume of vulnerabilities found and disclosed positions the firm as a leader in what he calls ‘AI/ML threat IQ’. “There are two components to this,” he continued. “The first is the Threat Reports on discovered vulnerabilities, and the second is our AI exploits tool that we give away for free.”

Thirdly, having established this position through the success of the huntr program, the program itself is an effective lead generation tool. “The way I see it, paying those bounties is cheaper than paying a salesperson.”

The huntr program has proved successful in finding vulnerabilities, and is likely to remain so. The monthly Threat Reports demonstrate the complexity of securing in-house developed AI/ML models by disclosing the vulnerabilities found by the hunters.

Related: Don’t Expect Quick Fixes in ‘Red-Teaming’ of AI Models. Security Was an Afterthought

Related: The Emerging Landscape of AI-Driven Cybersecurity Threats: A Look Ahead

Related: Addressing the State of AI’s Impact on Cyber Disinformation/Misinformation

Related: CISA Outlines AI-Related Cybersecurity Efforts

Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security, and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.
