How Effective Are Neural Networks for Fixing Security Vulnerabilities
Security vulnerability repair is a difficult task that is in dire need of automation. Two groups of techniques have shown promise: (1) large code language models (LLMs) that have been pre-trained on source code for tasks such as code completion, and (2) automated program repair (APR) techniques that use deep learning (DL) models to automatically fix software bugs.
This paper is the first to study and compare the Java vulnerability repair capabilities of LLMs and DL-based APR models. Our contributions include that we (1) apply and evaluate five LLMs (Codex, CodeGen, CodeT5, PLBART, and InCoder), four fine-tuned LLMs, and four DL-based APR techniques on two real-world Java vulnerability benchmarks (Vul4J and VJBench), (2) design code transformations to address the threat that Codex’s training data overlaps with the test data, (3) create a new Java vulnerability repair benchmark, VJBench, and its transformed version, VJBench-trans, to better evaluate LLMs and APR techniques, and (4) evaluate LLMs and APR techniques on the transformed vulnerabilities in VJBench-trans.
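The code transformations in contribution (2) are semantics-preserving rewrites (for example, identifier renaming) applied to the benchmark programs, so that Codex cannot succeed merely by reproducing code it may have seen during pre-training. Below is a minimal Java sketch of the idea, assuming a simple identifier-renaming transformation; the class, method, and variable names are illustrative and are not taken from VJBench-trans.

// Illustrative sketch: a semantics-preserving identifier-renaming transformation.
// The renamed version changes the surface form of the code while keeping its
// behavior identical, so a fix for either version is equally valid.
public class TransformExample {

    // Original form, as a developer might write it.
    static boolean isTrustedHost(String host) {
        return host != null && host.endsWith(".example.com");
    }

    // Transformed form: identifiers renamed (hypothetical names), logic unchanged.
    static boolean m1(String v1) {
        return v1 != null && v1.endsWith(".example.com");
    }

    public static void main(String[] args) {
        // Both versions behave identically on the same input.
        System.out.println(isTrustedHost("api.example.com")); // true
        System.out.println(m1("api.example.com"));            // true
    }
}

Because such a transformation preserves program behavior, a repair technique that genuinely reasons about the vulnerability, rather than recalling memorized code, should perform similarly on the original and transformed versions.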
Our findings include that (1) existing LLMs and APR models fix very few Java vulnerabilities; Codex fixes 10.2 (20.4%) vulnerabilities on average, the most of any technique, and many of the generated patches fail to compile. (2) Fine-tuning with general APR data improves LLMs’ vulnerability-fixing capabilities. (3) Our new VJBench reveals that LLMs and APR models fail to fix many Common Weakness Enumeration (CWE) types, such as CWE-325 Missing cryptographic step and CWE-444 HTTP request smuggling. (4) Codex still fixes 8.3 transformed vulnerabilities on average, outperforming all the other LLMs and APR models on the transformed vulnerabilities. The results call for innovations to enhance automated Java vulnerability repair, such as creating larger vulnerability repair training datasets, tuning LLMs with such data, and applying code-simplification transformations to facilitate vulnerability repair.
Wed 19 Jul (displayed time zone: Pacific Time, US & Canada)

13:30-15:00 | ISSTA 7: Testing and Analysis of Machine-Learning Systems (Technical Papers), Smith Classroom (Gates G10). Chair: Vincenzo Riccio (University of Udine)

13:30 (15m) Talk: FairRec: Fairness Testing for Deep Recommender Systems. Huizhong Guo (Zhejiang University), Jinfeng Li (Alibaba Group), Jingyi Wang (Zhejiang University), Xiangyu Liu (Alibaba Group), Dongxia Wang (Zhejiang University), Zehong Hu (Alibaba Group), Rong Zhang (Alibaba Group), Hui Xue (Alibaba Group)

13:45 (15m) Talk: DyCL: Dynamic Neural Network Compilation Via Program Rewriting and Graph Optimization. Simin Chen (University of Texas at Dallas), Shiyi Wei (University of Texas at Dallas), Cong Liu (University of California at Riverside), Wei Yang (University of Texas at Dallas)

14:00 (15m) Talk: Validating Multimedia Content Moderation Software via Semantic Fusion. Wenxuan Wang (Chinese University of Hong Kong), Jingyuan Huang (Chinese University of Hong Kong), Chang Chen (Chinese University of Hong Kong), Jiazhen Gu (Chinese University of Hong Kong), Jianping Zhang (Chinese University of Hong Kong), Weibin Wu (Sun Yat-sen University), Pinjia He (Chinese University of Hong Kong), Michael Lyu (Chinese University of Hong Kong)

14:15 (15m) Talk: What You See Is What You Get? It Is Not the Case! Detecting Misleading Icons for Mobile Applications. Linlin Li (Southern University of Science and Technology), Ruifeng Wang (Northeastern University), Xian Zhan (Southern University of Science and Technology), Ying Wang (Northeastern University), Cuiyun Gao (Harbin Institute of Technology), Sinan Wang (Southern University of Science and Technology), Yepang Liu (Southern University of Science and Technology)

14:30 (15m) Talk: How Effective Are Neural Networks for Fixing Security Vulnerabilities. Yi Wu (Purdue University), Nan Jiang (Purdue University), Hung Viet Pham (University of Waterloo), Thibaud Lutellier (University of Alberta), Jordan Davis (Purdue University), Lin Tan (Purdue University), Petr Babkin (J.P. Morgan AI Research), Sameena Shah (J.P. Morgan AI Research)

14:45 (15m) Talk: ModelObfuscator: Obfuscating Model Information to Protect Deployed ML-Based Systems. Mingyi Zhou (Monash University), Xiang Gao (Beihang University), Jing Wu (Monash University), John Grundy (Monash University), Xiao Chen (Monash University), Chunyang Chen (Monash University), Li Li (Beihang University)