Tue 18 Jul 2023 10:45 - 11:00 at Smith Classroom (Gates G10) - ISSTA 2: Fuzzing 1 Chair(s): Jonathan Bell

Fuzzing is a widely used automated testing technique that feeds random inputs to a program to provoke crashes indicating potential security vulnerabilities. A difficult but important question is when to stop a fuzzing campaign. Usually, a campaign is terminated when the number of crashes and/or covered code elements has not increased over a certain period of time. To avoid terminating prematurely when a ramp-up time is needed before vulnerabilities are reached, code coverage is often preferred over crash count for deciding when to end a campaign. However, a campaign might only increase coverage on non-security-critical code or repeatedly trigger the same crashes. For these reasons, both code coverage and crash count tend to overestimate fuzzing effectiveness, unnecessarily increasing the duration, and thus the cost, of the testing process.
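To make the saturation idea concrete, the following is a minimal Python sketch, not the paper's implementation, of such a stopping rule: the campaign ends once the monitored progress metric (crash count or covered code elements) has not increased within a stall window. All function names, parameters, and default values here are illustrative assumptions.

```python
import time

def run_with_saturation_stop(read_metric, stall_window_s=3600.0,
                             poll_interval_s=60.0, budget_s=24 * 3600.0):
    """Poll `read_metric` (a callable returning the current cumulative
    crash count or coverage) and return once the metric saturates or the
    time budget is exhausted. Parameter values are illustrative only."""
    start = time.monotonic()
    best = read_metric()
    last_progress = start
    while time.monotonic() - start < budget_s:
        time.sleep(poll_interval_s)
        current = read_metric()
        if current > best:
            best = current
            last_progress = time.monotonic()
        elif time.monotonic() - last_progress >= stall_window_s:
            return best  # metric saturated: stop the campaign early
    return best  # budget exhausted without saturation
```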

The present paper explores the tradeoff between the amount of fuzzing time saved and the number of bugs missed when campaigns are stopped upon saturation of covered, potentially vulnerable functions rather than saturation of triggered crashes or regular function coverage. In a large-scale empirical evaluation of 30 open-source C programs with a total of 240 security bugs and 1,280 fuzzing campaigns, we first show that binary classification models trained on software with known vulnerabilities (CVEs), using lightweight machine-learning features derived from the findings of static application security testing (SAST) tools and established software metrics, can reliably predict (potentially) vulnerable functions. Second, we show that our proposed stopping criterion terminates 24-hour fuzzing campaigns 6 to 12 hours earlier than the saturation of crashes and regular function coverage, while missing, on average, fewer than 0.5 out of 12.5 contained bugs.
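The proposed criterion combines this saturation rule with a vulnerability predictor. The sketch below is again only illustrative of the idea, not the authors' pipeline: assuming scikit-learn, it trains a binary classifier on hypothetical per-function features (SAST warning counts plus software metrics such as cyclomatic complexity, lines of code, and fan-in) and flags the functions predicted vulnerable. All feature vectors, labels, and names are invented for illustration.

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-function feature vectors, e.g.
# [num_SAST_warnings, cyclomatic_complexity, lines_of_code, fan_in]
X_train = [[3, 14, 220, 5], [0, 2, 15, 1], [1, 9, 130, 3], [0, 3, 40, 2]]
y_train = [1, 0, 1, 0]  # 1 = function had a known CVE, 0 = no known CVE

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

def predicted_vulnerable(functions):
    """Return the subset of functions the model flags as potentially
    vulnerable. `functions` maps a function name to its feature vector."""
    names = list(functions)
    preds = clf.predict([functions[n] for n in names])
    return {n for n, p in zip(names, preds) if p == 1}
```

Plugged into the stopping rule above, e.g. via `read_metric=lambda: len(covered_functions & predicted_vulnerable(features))` where `covered_functions` is the fuzzer's current function coverage, the campaign then terminates once coverage of the predicted-vulnerable functions saturates.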

Tue 18 Jul

Displayed time zone: Pacific Time (US & Canada)

10:30 - 12:00
ISSTA 2: Fuzzing 1 - Technical Papers at Smith Classroom (Gates G10)
Chair(s): Jonathan Bell Northeastern University
10:30
15m
Talk
Icicle: A Re-designed Emulator for Grey-Box Firmware Fuzzing
Technical Papers
Michael Chesser University of Adelaide, Surya Nepal CSIRO’s Data61, Damith C. Ranasinghe University of Adelaide
DOI
10:45
15m
Talk
Green Fuzzing: A Saturation-Based Stopping Criterion using Vulnerability Prediction
Technical Papers
Stephan Lipp TU Munich, Daniel Elsner TU Munich, Severin Kacianka TU Munich, Alexander Pretschner TU Munich, Marcel Böhme MPI-SP; Monash University, Sebastian Banescu TU Munich
DOI
11:00
15m
Talk
ItyFuzz: Snapshot-Based Fuzzer for Smart Contract
Technical Papers
Chaofan Shou University of California at Santa Barbara, Shangyin Tan University of California at Berkeley, Koushik Sen University of California at Berkeley
DOI
11:15
15m
Talk
Large Language Models Are Zero-Shot Fuzzers: Fuzzing Deep-Learning Libraries via Large Language Models
Technical Papers
Yinlin Deng University of Illinois at Urbana-Champaign, Chunqiu Steven Xia University of Illinois at Urbana-Champaign, Haoran Peng University of Science and Technology of China, Chenyuan Yang University of Illinois at Urbana-Champaign, Lingming Zhang University of Illinois at Urbana-Champaign
DOI
11:30
15m
Talk
Detecting State Inconsistency Bugs in DApps via On-Chain Transaction Replay and Fuzzing
Technical Papers
Mingxi Ye Sun Yat-sen University, Yuhong Nan Sun Yat-sen University, Zibin Zheng Sun Yat-sen University, Dongpeng Wu Sun Yat-sen University, Huizhong Li WeBank
DOI
11:45
15m
Talk
Green Fuzzer Benchmarking
Technical Papers
DOI