Ounjai, J., Wüstholz, V., & Christaki, M. (2023). Green Fuzzer Benchmarking. In ISSTA 2023: Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis (pp. 1396–1406). Association for Computing Machinery. https://doi.org/10.1145/3597926.3598144
ISSTA 2023: 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis
17–21 July 2023
Seattle, United States of America
Association for Computing Machinery, New York
fuzzing; testing; benchmarking
Over the last decade, fuzzing has been increasingly gaining traction due to its effectiveness in finding bugs. Nevertheless, fuzzer evaluations have been challenging during this time, mainly due to a lack of standardized benchmarking. Aiming to alleviate this issue, in 2020, Google released FuzzBench, an open-source benchmarking platform that is widely used for accurate fuzzer benchmarking.
However, a typical FuzzBench experiment takes CPU years to run. If we additionally consider that fuzzers under active development evaluate any changes empirically, benchmarking becomes prohibitive both in terms of computational resources and time. In this paper, we propose GreenBench, a greener benchmarking platform that, compared to FuzzBench, significantly speeds up fuzzer evaluations while maintaining very high accuracy.
In contrast to FuzzBench, GreenBench drastically increases the number of benchmarks while drastically decreasing the duration of fuzzing campaigns. As a result, the fuzzer rankings generated by GreenBench are almost as accurate as those produced by FuzzBench (with very high correlation), but GreenBench is 18 to 61 times faster. We discuss the implications of these findings for the fuzzing community.
Testing Program Analyzers Ad Absurdum: 101076510 (European Commission)