
Google Launches Free Fuzzer Benchmarking Service

Google this week announced the launch of FuzzBench, a free and open source service for evaluating fuzzers.

The fully automated service is designed to allow easy but rigorous evaluation of fuzzing research, in an effort to boost the adoption of fuzzing, an important bug-finding technique.


With the new service, Google wants to make it easier to evaluate how the numerous fuzzing tools and techniques that exist today generalize to a large set of real-world programs.

Moreover, FuzzBench aims to overcome shortcomings in current research, such as the use of small, fixed sets of real-world benchmarks, few and short trials, and the lack of statistical tests.

According to Google, this state of affairs isn't surprising, considering the prohibitive costs of full-scale experiments.

“For example, a 24-hour, 10-trial, 10 fuzzer, 20 benchmark experiment would require 2,000 CPUs to complete in a day,” the Internet search giant says.

The new free, open source service aims to solve these issues by providing a framework for evaluating fuzzers in a reproducible way.

FuzzBench provides an API for integrating fuzzers, benchmarks from real-world projects, and a reporting library to produce reports with graphs and statistical tests.


Researchers can simply integrate a fuzzer and the service automatically runs an experiment for 24 hours, using multiple trials and real-world benchmarks. FuzzBench then delivers a report comparing the fuzzer's performance with that of similar tools and detailing the strengths and weaknesses of each fuzzer.

Such integrations are simple and normally take less than 50 lines of code. Once integrated, the fuzzer can “fuzz almost all 250+ OSS-Fuzz projects out of the box,” Google says. Fuzzers such as AFL, LibFuzzer, Honggfuzz, and several academic projects such as QSYM and Eclipser have already been integrated with the service.
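For a sense of what such an integration might involve, here is a minimal, hypothetical sketch in Python. The build() and fuzz() hooks, environment variables, and binary names below are illustrative assumptions rather than FuzzBench's confirmed interface; the project's repository documents the actual requirements.

    import os
    import subprocess

    # Hypothetical FuzzBench integration module for a fuzzer called "my_fuzzer".
    # The build()/fuzz() hook names and their parameters are assumptions made
    # for illustration; consult the FuzzBench repository for the real interface.

    def build():
        """Build the benchmark with whatever instrumentation the fuzzer needs."""
        os.environ['CC'] = 'my-fuzzer-clang'     # hypothetical compiler wrapper
        os.environ['CXX'] = 'my-fuzzer-clang++'  # hypothetical compiler wrapper
        subprocess.check_call(['make', 'all'])   # build the instrumented target

    def fuzz(input_corpus, output_corpus, target_binary):
        """Run the fuzzer against the target binary for the duration of a trial."""
        subprocess.check_call([
            './my_fuzzer',          # hypothetical fuzzer binary
            '-i', input_corpus,     # seed inputs provided by the service
            '-o', output_corpus,    # directory where generated inputs are kept
            '--', target_binary,    # instrumented benchmark binary to fuzz
        ])

An integration of roughly this shape, plus a build environment definition, is the kind of small contribution the "less than 50 lines of code" figure refers to.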

The reports FuzzBench produces include statistical tests and raw data, thus allowing researchers to do their own analysis.

“Performance is determined by the amount of covered program edges, though we plan on adding crashes as a performance metric,” Google says.
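To illustrate the kind of independent analysis the raw data allows, the following sketch compares the final edge coverage two fuzzers reached across trials using a non-parametric statistical test. The CSV file name and column names are hypothetical, and the actual report data format may differ.

    # Minimal sketch of an independent analysis over raw report data, assuming a
    # CSV with one row per (fuzzer, trial) and a final edge-coverage column; the
    # file name and column names here are hypothetical.
    import pandas as pd
    from scipy.stats import mannwhitneyu

    data = pd.read_csv('experiment_data.csv')   # hypothetical raw-data export

    afl = data[data['fuzzer'] == 'afl']['edges_covered']
    libfuzzer = data[data['fuzzer'] == 'libfuzzer']['edges_covered']

    # A non-parametric test is a common choice here, since coverage results
    # across trials are rarely normally distributed.
    stat, p_value = mannwhitneyu(afl, libfuzzer, alternative='two-sided')
    print(f'Mann-Whitney U = {stat:.1f}, p = {p_value:.4f}')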

The Internet giant is looking to develop FuzzBench with community contributions and input, and invites members of the fuzzing research community to participate with their fuzzers and techniques, even those that are still under development.

Contributions of ideas and techniques for evaluating fuzzers are also welcome, as Google hopes to develop best practices with the help of the community.

Related: Google Open Sources Fuzzing Platform

Related: Mozilla Introduces Grizzly Browser Fuzzing Framework
