Semantic-Based Neural Network Repair
Neural networks have recently spread into numerous fields, including many safety-critical systems. They are built (and trained) by programming in frameworks such as TensorFlow and PyTorch, where developers compose a rich set of pre-defined layers, either manually or automatically (e.g., through AutoML). Composing neural networks from different layers is error-prone because of the non-trivial constraints that must be satisfied in order to use those layers. In this work, we propose an approach to automatically repair erroneous neural networks. The challenge lies in identifying a minimal modification that makes the network valid: modifying a layer can have cascading effects on subsequent layers, so our approach must search recursively to identify a "globally" minimal modification. Our approach is based on an executable semantics of deep learning layers and focuses on four kinds of errors which are common in practice. We evaluate our approach in two usage scenarios, i.e., repairing automatically generated neural networks and repairing manually written ones suffering from common model bugs. The results show that we are able to repair 100% of a set of randomly generated neural networks (produced with an existing AI framework testing approach) effectively and efficiently (with an average repair time of 21.08 s), and 93.75% of a collection of real neural network bugs (with an average time of 3 min 40 s).
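The kind of constraint violation and cascading repair the abstract describes can be illustrated with a small, self-contained sketch. This is purely illustrative and is not the paper's implementation: the layer kinds, parameter names, and the greedy repair strategy below are assumptions chosen for the example, whereas the paper searches for a globally minimal modification.

```python
# Illustrative sketch (NOT the paper's tool): a tiny executable semantics
# for three layer kinds, a validity check, and a greedy repair that fixes
# mismatched input parameters, letting fixes cascade to later layers.

def conv2d_out(shape, layer):
    """Output shape of a (valid-stride) conv layer, or None if invalid."""
    c, h, w = shape
    if c != layer["in_channels"]:
        return None  # channel constraint violated
    k, s = layer["kernel"], layer.get("stride", 1)
    return (layer["out_channels"], (h - k) // s + 1, (w - k) // s + 1)

def flatten_out(shape, layer):
    c, h, w = shape
    return (c * h * w,)

def dense_out(shape, layer):
    (n,) = shape
    if n != layer["in_features"]:
        return None  # feature-count constraint violated
    return (layer["out_features"],)

SEMANTICS = {"conv2d": conv2d_out, "flatten": flatten_out, "dense": dense_out}

def run(network, in_shape):
    """Propagate shapes; return the index of the first invalid layer, or None."""
    shape = in_shape
    for i, layer in enumerate(network):
        shape = SEMANTICS[layer["kind"]](shape, layer)
        if shape is None:
            return i
    return None

def repair(network, in_shape):
    """Greedily set each violated input parameter to the actual incoming
    value; re-check from the start so a fix can cascade to later layers."""
    net = [dict(layer) for layer in network]
    while (i := run(net, in_shape)) is not None:
        shape = in_shape
        for layer in net[:i]:
            shape = SEMANTICS[layer["kind"]](shape, layer)
        if net[i]["kind"] == "conv2d":
            net[i]["in_channels"] = shape[0]
        elif net[i]["kind"] == "dense":
            net[i]["in_features"] = shape[0]
        else:
            break  # error kind not handled by this toy repairer
    return net

# A buggy network for 3-channel 8x8 inputs: wrong in_channels and in_features.
net = [
    {"kind": "conv2d", "in_channels": 1, "out_channels": 8, "kernel": 3},
    {"kind": "flatten"},
    {"kind": "dense", "in_features": 100, "out_features": 10},
]
fixed = repair(net, (3, 8, 8))  # fixing conv cascades into the dense layer
```

Note how repairing the first layer changes the flattened size downstream, forcing a second repair of the dense layer; this cascading effect is why a layer-local fix is insufficient and the paper's approach must search for a globally minimal modification.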
Wed 19 Jul (times in Pacific Time, US & Canada)
10:30 - 12:00 | ISSTA 5: Improving Deep Learning Systems (Technical Papers) | Smith Classroom (Gates G10) | Chair(s): Michael Pradel (University of Stuttgart)
10:30 (15m, Talk) | Understanding and Tackling Label Errors in Deep Learning-Based Vulnerability Detection (Experience Paper) | Technical Papers | Xu Nie (Huazhong University of Science and Technology; Beijing University of Posts and Telecommunications), Ningke Li (Huazhong University of Science and Technology), Kailong Wang (Huazhong University of Science and Technology), Shangguang Wang (Beijing University of Posts and Telecommunications), Xiapu Luo (Hong Kong Polytechnic University), Haoyu Wang (Huazhong University of Science and Technology) | DOI
10:45 (15m, Talk) | Improving Binary Code Similarity Transformer Models by Semantics-Driven Instruction Deemphasis | Technical Papers | Xiangzhe Xu, Shiwei Feng, Yapeng Ye, Guangyu Shen, Zian Su, Siyuan Cheng, Guanhong Tao, Qingkai Shi, Zhuo Zhang, Xiangyu Zhang (all Purdue University) | DOI
11:00 (15m, Talk) | CILIATE: Towards Fairer Class-Based Incremental Learning by Dataset and Training Refinement | Technical Papers | Xuanqi Gao (Xi’an Jiaotong University), Juan Zhai (University of Massachusetts Amherst), Shiqing Ma (UMass Amherst), Chao Shen (Xi’an Jiaotong University), Yufei Chen (Xi’an Jiaotong University; City University of Hong Kong), Shiwei Wang (Xi’an Jiaotong University) | DOI, Pre-print
11:15 (15m, Talk) | DeepAtash: Focused Test Generation for Deep Learning Systems | Technical Papers | DOI
11:30 (15m, Talk) | Systematic Testing of the Data-Poisoning Robustness of KNN | Technical Papers | Yannan Li (University of Southern California), Jingbo Wang (University of Southern California), Chao Wang (University of Southern California) | DOI
11:45 (15m, Talk) | Semantic-Based Neural Network Repair | Technical Papers | DOI