Deep learning-based recommender systems (DRSs) are increasingly and widely deployed in industry, bringing significant convenience to people's daily lives in many ways. However, recommender systems are also known to suffer from multiple issues, e.g., the \textit{echo chamber} and the \textit{Matthew effect}, in which the notion of ``fairness'' plays a core role. For instance, a system may be regarded as unfair to 1) a specific user, if that user receives worse recommendations than other users, or 2) an item (to recommend), if that item is much less likely to be exposed to users than other items. While many fairness notions and corresponding fairness testing approaches have been developed for traditional deep classification models, they are hardly applicable to DRSs. One major challenge is that there is still no systematic understanding of, or mapping between, existing fairness notions and the diverse testing requirements of deep recommender systems, let alone further testing or debugging activities. To address this gap, we propose FairRec, a unified framework that supports fairness testing of DRSs from multiple customized perspectives, e.g., model utility, item diversity, and item popularity. We also propose a novel, efficient search-based testing approach, a double-ended discrete particle swarm optimization (DPSO) algorithm, to effectively search for hidden fairness issues, in the form of certain disadvantaged groups, among a vast number of candidate groups. Given the testing report, we show that applying a simple re-ranking mitigation strategy to the identified disadvantaged groups significantly improves the fairness of DRSs. We conducted extensive experiments on multiple industry-level DRSs adopted by leading companies. The results confirm that FairRec is effective and efficient at identifying deeply hidden fairness issues, e.g., achieving $\sim$95% testing accuracy in roughly half to one-eighth of the time.
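The abstract does not spell out the algorithmic details of the search. As a rough illustration of the underlying idea (searching a large space of candidate user groups for a disadvantaged one), here is a minimal sketch using a standard binary (discrete) PSO with the usual sigmoid velocity mapping. This is not the paper's double-ended DPSO, and the data, the fitness function, and the minimum-group-size floor of 50 users are all toy assumptions made for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setup (hypothetical): each user has d binary attribute values,
    # and each user has a per-user utility score (e.g., NDCG) from the DRS.
    d = 12                                 # number of attribute values
    users = rng.integers(0, 2, (5000, d))  # user-attribute matrix
    utility = rng.random(5000)             # per-user utility scores

    def fitness(mask):
        """Unfairness of the group selected by `mask`: how far the group's
        mean utility falls below the global mean (larger = more disadvantaged)."""
        if not mask.any():
            return -np.inf
        members = users[:, mask == 1].all(axis=1)
        if members.sum() < 50:             # skip tiny, unrepresentative groups
            return -np.inf
        return utility.mean() - utility[members].mean()

    # Standard binary PSO: real-valued velocities, sigmoid-mapped to
    # per-bit inclusion probabilities for each attribute value.
    n_particles, n_iters, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
    X = rng.integers(0, 2, (n_particles, d))    # positions = candidate groups
    V = rng.normal(0, 1, (n_particles, d))      # velocities
    pbest = X.copy()
    pbest_fit = np.array([fitness(x) for x in X])
    g = pbest[pbest_fit.argmax()].copy()        # best group found so far

    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, d))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        X = (rng.random((n_particles, d)) < 1 / (1 + np.exp(-V))).astype(int)
        fit = np.array([fitness(x) for x in X])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = X[improved], fit[improved]
        g = pbest[pbest_fit.argmax()].copy()

    print("most disadvantaged group (attribute mask):", g)
    print("utility gap vs. global mean:", fitness(g))

The point of the sketch is that a fairness issue is encoded as a bit mask over attribute values, so the swarm explores the exponential space of candidate groups without enumerating it; the paper's double-ended variant and its group-fairness metrics would replace the toy fitness function here.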

Wed 19 Jul, 13:30 - 15:00 (times in Pacific Time, US & Canada)

ISSTA 7: Testing and Analysis of Machine-Learning Systems (Technical Papers)
Room: Smith Classroom (Gates G10)
Chair(s): Vincenzo Riccio (University of Udine)
13:30 (15m) Talk: FairRec: Fairness Testing for Deep Recommender Systems
Huizhong Guo (Zhejiang University), Jinfeng Li (Alibaba Group), Jingyi Wang (Zhejiang University), Xiangyu Liu (Alibaba Group), Dongxia Wang (Zhejiang University), Zehong Hu (Alibaba Group), Rong Zhang (Alibaba Group), Hui Xue (Alibaba Group)
13:45 (15m) Talk: DyCL: Dynamic Neural Network Compilation Via Program Rewriting and Graph Optimization
Simin Chen (University of Texas at Dallas), Shiyi Wei (University of Texas at Dallas), Cong Liu (University of California at Riverside), Wei Yang (University of Texas at Dallas)
14:00 (15m) Talk: Validating Multimedia Content Moderation Software via Semantic Fusion
Wenxuan Wang (Chinese University of Hong Kong), Jingyuan Huang (Chinese University of Hong Kong), Chang Chen (Chinese University of Hong Kong), Jiazhen Gu (Chinese University of Hong Kong), Jianping Zhang (Chinese University of Hong Kong), Weibin Wu (Sun Yat-sen University), Pinjia He (Chinese University of Hong Kong), Michael Lyu (Chinese University of Hong Kong)
14:15 (15m) Talk: What You See Is What You Get? It Is Not the Case! Detecting Misleading Icons for Mobile Applications
Linlin Li (Southern University of Science and Technology), Ruifeng Wang (Northeastern University), Xian Zhan (Southern University of Science and Technology), Ying Wang (Northeastern University), Cuiyun Gao (Harbin Institute of Technology), Sinan Wang (Southern University of Science and Technology), Yepang Liu (Southern University of Science and Technology)
14:30 (15m) Talk: How Effective Are Neural Networks for Fixing Security Vulnerabilities
Yi Wu (Purdue University), Nan Jiang (Purdue University), Hung Viet Pham (University of Waterloo), Thibaud Lutellier (University of Alberta), Jordan Davis (Purdue University), Lin Tan (Purdue University), Petr Babkin (J.P. Morgan AI Research), Sameena Shah (J.P. Morgan AI Research)
14:45 (15m) Talk: ModelObfuscator: Obfuscating Model Information to Protect Deployed ML-Based Systems
Mingyi Zhou (Monash University), Xiang Gao (Beihang University), Jing Wu (Monash University), John Grundy (Monash University), Xiao Chen (Monash University), Chunyang Chen (Monash University), Li Li (Beihang University)