Due to the model aging problem, Deep Neural Networks (DNNs) need updates to adapt to new data distributions. The common practice leverages incremental learning (IL), e.g., Class-based Incremental Learning (CIL), which extends the set of output labels and updates the model with the new data plus a limited number of old samples. This avoids heavyweight conventional retraining from scratch and saves storage space by reducing the number of old samples to keep, but it also degrades fairness. In this paper, we show that CIL suffers from both dataset bias and algorithm bias, and that existing solutions address these problems only partially. We propose a novel framework, CILIATE, that fixes both dataset and algorithm bias in CIL. It features a novel differential-analysis-guided dataset and training refinement process that identifies unique and important samples overlooked by existing CIL methods and forces the model to learn from them. Through this process, CILIATE improves the fairness of CIL by 17.03%, 22.46%, and 31.79% compared to the state-of-the-art methods iCaRL, BiC, and WA, respectively, based on our evaluation on three popular datasets and widely used ResNet models. Our code is available at https://github.com/Antimony5292/CILIATE.
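
For readers unfamiliar with the CIL setting the abstract describes, the sketch below shows the generic exemplar-replay update that methods such as iCaRL, BiC, and WA build on: widen the classifier head to cover the new classes, then fine-tune on the new-class data plus a small buffer of retained old samples. This is a minimal illustration under our own assumptions, not CILIATE's algorithm; the function name `incremental_update`, the `new_data`/`exemplars` datasets, and the ResNet-style `fc` head are placeholders, and CILIATE's differential-analysis-guided refinement of the dataset and training loop is not implemented here.

```python
# Hedged sketch of a plain class-incremental update with exemplar replay.
# All names are illustrative placeholders, not CILIATE's actual API.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, ConcatDataset


def incremental_update(model: nn.Module,
                       new_data,              # dataset of new-class samples
                       exemplars,             # small retained subset of old-class samples
                       num_total_classes: int,
                       epochs: int = 10,
                       lr: float = 1e-3,
                       device: str = "cpu"):
    """Fine-tune `model` on new-class data plus retained old exemplars.

    This mirrors the generic CIL recipe the abstract refers to (update with
    new data and a limited number of old samples); a real system would also
    copy the old classifier weights into the widened head.
    """
    # Replace the classifier head so it covers old + new classes
    # (assumes a ResNet-style `fc` attribute, as in torchvision models).
    in_features = model.fc.in_features
    model.fc = nn.Linear(in_features, num_total_classes)
    model.to(device)

    loader = DataLoader(ConcatDataset([new_data, exemplars]),
                        batch_size=64, shuffle=True)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    return model
```

A framework like CILIATE would intervene around this loop rather than replace it, e.g., by identifying and emphasizing the samples this plain update underserves, which is where the differential analysis described in the abstract comes in.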

Wed 19 Jul (displayed time zone: Pacific Time, US & Canada)

10:30 - 12:00
ISSTA 5: Improving Deep Learning Systems (Technical Papers) at Smith Classroom (Gates G10)
Chair(s): Michael Pradel (University of Stuttgart)
10:30 (15m, Talk)
Understanding and Tackling Label Errors in Deep Learning-Based Vulnerability Detection (Experience Paper)
Xu Nie (Huazhong University of Science and Technology; Beijing University of Posts and Telecommunications), Ningke Li (Huazhong University of Science and Technology), Kailong Wang (Huazhong University of Science and Technology), Shangguang Wang (Beijing University of Posts and Telecommunications), Xiapu Luo (Hong Kong Polytechnic University), Haoyu Wang (Huazhong University of Science and Technology)
10:45 (15m, Talk)
Improving Binary Code Similarity Transformer Models by Semantics-Driven Instruction Deemphasis
Xiangzhe Xu, Shiwei Feng, Yapeng Ye, Guangyu Shen, Zian Su, Siyuan Cheng, Guanhong Tao, Qingkai Shi, Zhuo Zhang, and Xiangyu Zhang (Purdue University)
11:00 (15m, Talk)
CILIATE: Towards Fairer Class-Based Incremental Learning by Dataset and Training Refinement
Xuanqi Gao (Xi’an Jiaotong University), Juan Zhai (University of Massachusetts Amherst), Shiqing Ma (University of Massachusetts Amherst), Chao Shen (Xi’an Jiaotong University), Yufei Chen (Xi’an Jiaotong University; City University of Hong Kong), Shiwei Wang (Xi’an Jiaotong University)
11:15 (15m, Talk)
DeepAtash: Focused Test Generation for Deep Learning Systems
Tahereh Zohdinasab (USI Lugano), Vincenzo Riccio (University of Udine), Paolo Tonella (USI Lugano)
11:30 (15m, Talk)
Systematic Testing of the Data-Poisoning Robustness of KNN
Yannan Li, Jingbo Wang, and Chao Wang (University of Southern California)
11:45 (15m, Talk)
Semantic-Based Neural Network Repair
Richard Schumi and Jun Sun (Singapore Management University)