Application Security

Google Open Sources AI-Aided Fuzzing Framework

Google has released its fuzzing framework in open source to boost the ability of developers and researchers to identify vulnerabilities.

In an effort to help developers and researchers find vulnerabilities faster, Google has released its AI-aided fuzzing framework in open source.

The tool leverages large language models (LLMs) to generate fuzz targets for real-world C and C++ projects and benchmarks them using Google’s OSS-Fuzz service, which has long been the top resource for the automated discovery of vulnerabilities in open source software.

To automate certain aspects of manual fuzz testing, the internet giant started using LLMs in August 2023 “to write project-specific code to boost fuzzing coverage and find more vulnerabilities,” which resulted in a 30% increase in code coverage on more than 300 OSS-Fuzz C/C++ projects.

“So far, the expanded fuzzing coverage offered by LLM-generated improvements allowed OSS-Fuzz to discover two new vulnerabilities in cJSON and libplist, two widely used projects that had already been fuzzed for years,” Google says.

The open sourced tool includes support for Vertex AI code-bison, Vertex AI code-bison-32k, Gemini Pro, OpenAI GPT-3.5-turbo, and OpenAI GPT-4.

Furthermore, Google says, the tool evaluates generated fuzz targets against up-to-date data from a production environment using four metrics: compilability, runtime crashes, runtime coverage, and runtime line coverage difference against existing human-written fuzz targets in OSS-Fuzz.

“Overall, this framework manages to successfully leverage LLMs to generate valid fuzz targets (which generate non-zero coverage increase) for 160 C/C++ projects. The maximum line coverage increase is 29% from the existing human-written targets,” Google notes.

The open sourced framework allows researchers and developers to experiment with their own prompts to test the effectiveness of the generated fuzz targets and measure the results against OSS-Fuzz C/C++ projects.
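Measuring a target against an OSS-Fuzz project follows OSS-Fuzz's documented helper-script workflow. A rough sketch is below; it requires Docker, and the `cjson_read_fuzzer` target name is illustrative and should be replaced with a target listed in the project's build output:

```shell
# Fetch OSS-Fuzz and build the fuzz targets for a project; cJSON is used
# here because it is one of the projects named above.
git clone https://github.com/google/oss-fuzz.git
cd oss-fuzz
python infra/helper.py build_fuzzers cjson

# Run a single fuzz target and watch the statistics it reports.
# The target name below is illustrative.
python infra/helper.py run_fuzzer cjson cjson_read_fuzzer

# Generate a line-coverage report -- the metric the framework compares
# against existing human-written targets.
python infra/helper.py coverage cjson
```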

In addition to fuzzing for vulnerability discovery, Google is looking at ways to use LLMs for vulnerability patching, and has already proposed a project to build an automated pipeline in which LLMs generate and test fixes.

“This AI-powered patching approach resolved 15% of the targeted bugs, leading to significant time savings for engineers. The potential of this technology should apply to most or all categories throughout the software development process,” Google says.

Related: Security Experts Describe AI Technologies They Want to See

Related: OpenAI Turns to Security to Sell ChatGPT Enterprise

Related: Google Shells Out $600,000 for OSS-Fuzz Project Integrations

Copyright © 2024 SecurityWeek ®, a Wired Business Media Publication. All Rights Reserved.
