
Google Open Sources AI-Aided Fuzzing Framework

Google has released its AI-aided fuzzing framework as open source to help developers and researchers identify vulnerabilities.

In an effort to help developers and researchers find vulnerabilities faster, Google has released its AI-aided fuzzing framework as open source.

The tool leverages large language models (LLMs) to generate fuzz targets for real-world C and C++ projects and benchmarks them using Google’s OSS-Fuzz service, which has long been the top resource for the automated discovery of vulnerabilities in open source software.

To automate certain aspects of manual fuzz testing, the internet giant started using LLMs in August 2023 “to write project-specific code to boost fuzzing coverage and find more vulnerabilities,” an effort that resulted in a 30% increase in code coverage across more than 300 OSS-Fuzz C/C++ projects.

“So far, the expanded fuzzing coverage offered by LLM-generated improvements allowed OSS-Fuzz to discover two new vulnerabilities in cJSON and libplist, two widely used projects that had already been fuzzed for years,” Google says.
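To illustrate what such a fuzz target looks like, the sketch below is a minimal, hand-written libFuzzer-style harness for a JSON parser such as cJSON. It shows the general shape only; it is not the target Google’s framework actually generated, and the header path and build details may differ from project to project.

    // Minimal libFuzzer-style fuzz target for a JSON parser such as cJSON.
    // Illustrative only -- not the LLM-generated target referenced above.
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <cjson/cJSON.h>  // header location may vary by build setup

    // libFuzzer (and OSS-Fuzz) repeatedly calls this entry point with mutated inputs.
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        // cJSON_Parse expects a NUL-terminated string, so copy the raw bytes first.
        char *input = malloc(size + 1);
        if (input == NULL) return 0;
        memcpy(input, data, size);
        input[size] = '\0';

        cJSON *json = cJSON_Parse(input);  // the code path being exercised
        if (json != NULL) {
            cJSON_Delete(json);
        }

        free(input);
        return 0;  // non-zero return values are reserved by libFuzzer
    }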

The open-sourced tool includes support for Vertex AI code-bison, Vertex AI code-bison-32k, Gemini Pro, OpenAI GPT-3.5-turbo, and OpenAI GPT-4.

Furthermore, Google says, the tool evaluates the generated fuzz targets against up-to-date data from the production environment, using four metrics: compilability, runtime crashes, runtime coverage, and runtime line coverage difference against existing human-written fuzz targets in OSS-Fuzz.

“Overall, this framework manages to successfully leverage LLMs to generate valid fuzz targets (which generate non-zero coverage increase) for 160 C/C++ projects. The maximum line coverage increase is 29% from the existing human-written targets,” Google notes.
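Read literally, the validity criterion in the quote above reduces to a simple check over the four metrics listed earlier. The stand-alone sketch below is purely illustrative; the names and structure are hypothetical and do not mirror the framework’s actual code.

    // Hypothetical sketch of the four evaluation signals described above.
    // Names and structure are illustrative, not taken from the framework.
    #include <stdbool.h>
    #include <stdio.h>

    struct target_evaluation {
        bool   compiles;        /* 1. compilability */
        bool   crashed;         /* 2. runtime crashes (recorded for separate triage) */
        double line_coverage;   /* 3. runtime line coverage of the generated target */
        double human_baseline;  /* 4. line coverage of the existing human-written target */
    };

    /* Per the quote above, a generated target counts as "valid" when it yields a
       non-zero coverage increase; this sketch also requires that it compiles. */
    static bool is_valid_target(const struct target_evaluation *e) {
        return e->compiles && (e->line_coverage - e->human_baseline) > 0.0;
    }

    int main(void) {
        struct target_evaluation e = { true, false, 0.52, 0.40 };
        printf("valid: %s, coverage increase: %.0f%%\n",
               is_valid_target(&e) ? "yes" : "no",
               (e.line_coverage - e.human_baseline) * 100.0);
        return 0;
    }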

The open-sourced framework allows researchers and developers to experiment with their own prompts to test the effectiveness of the generated fuzz targets and measure the results against OSS-Fuzz C/C++ projects.


In addition to fuzzing for vulnerability discovery, Google is looking at ways to use LLMs for vulnerability patching, and has already proposed a project to build an automated pipeline in which LLMs generate and test fixes.

“This AI-powered patching approach resolved 15% of the targeted bugs, leading to significant time savings for engineers. The potential of this technology should apply to most or all categories throughout the software development process,” Google says.

Related: Security Experts Describe AI Technologies They Want to See

Related: OpenAI Turns to Security to Sell ChatGPT Enterprise

Related: Google Shells Out $600,000 for OSS-Fuzz Project Integrations

Written By

Ionut Arghire is an international correspondent for SecurityWeek.

