The exponential growth of social media platforms such as Facebook, Instagram, YouTube, and TikTok has revolutionized communication and content publication in human society. Users on these platforms can publish multimedia content that delivers information through a combination of text, audio, images, and video. Meanwhile, this ability to publish multimedia content has been increasingly exploited to propagate toxic content, such as hate speech, malicious advertisements, and pornography. In response, content moderation software has been widely deployed on these platforms to detect and block toxic content. However, due to the complexity of content moderation models and the difficulty of understanding information across multiple modalities, existing content moderation software can fail to detect toxic content, which often leads to extremely negative impacts (e.g., harmful effects on teen mental health).
We introduce Semantic Fusion, a general and effective methodology for validating multimedia content moderation software. Our key idea is to fuse two or more existing single-modal inputs (e.g., a textual sentence and an image) into a new input that combines the semantics of its ancestors in a novel manner and is toxic by construction. This fused input is then used to validate multimedia content moderation software. We realized Semantic Fusion as DUO, a practical content moderation testing tool. In our evaluation, we employ DUO to test five commercial content moderation systems and two state-of-the-art models against three kinds of toxic content. The results show that DUO achieves up to a 100% error finding rate (EFR) when testing the commercial systems and up to a 94.1% EFR when testing the state-of-the-art models. In addition, we leverage the test cases generated by DUO to retrain the two models we explored, which largely improves model robustness (reducing the EFR to 2.5%∼5.7%) while maintaining accuracy on the original test set.
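To make the fusion idea concrete, below is a minimal, hypothetical sketch of one possible text-image fusion operator together with the EFR metric. It renders a sentence onto an image so the fused input carries both textual and visual semantics. This is an illustration under stated assumptions, not DUO's implementation: the function names, the rendering strategy, and the use of a benign placeholder sentence are all assumptions.

```python
# Minimal sketch (not DUO's implementation): fuse a sentence into an image so
# the resulting input carries both textual and visual semantics, then score a
# moderation system by its error finding rate (EFR). Requires Pillow.
from PIL import Image, ImageDraw, ImageFont

def fuse_text_into_image(sentence: str, image_path: str, out_path: str) -> None:
    """Render `sentence` onto the image, producing one fused test input.

    Hypothetical fusion operator; in the paper's setting, a toxic sentence
    would make the fused input toxic by construction.
    """
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    x, y = 10, max(10, img.height - 30)  # anchor the text near the bottom edge
    bbox = draw.textbbox((x, y), sentence, font=font)
    draw.rectangle(bbox, fill="black")   # backdrop keeps the text legible
    draw.text((x, y), sentence, fill="white", font=font)
    img.save(out_path)

def error_finding_rate(slipped_past_moderation: list) -> float:
    """EFR: fraction of toxic-by-construction inputs the software missed."""
    if not slipped_past_moderation:
        return 0.0
    return sum(slipped_past_moderation) / len(slipped_past_moderation)
```

Under this sketch, a batch of fused inputs would be submitted to the moderation software under test, and `error_finding_rate` summarizes how many toxic-by-construction inputs were not flagged.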

Wed 19 Jul

Displayed time zone: Pacific Time (US & Canada)

13:30 - 15:00
ISSTA 7: Testing and Analysis of Machine-Learning Systems (Technical Papers) at Smith Classroom (Gates G10)
Chair(s): Vincenzo Riccio (University of Udine)

13:30 (15m, Talk) FairRec: Fairness Testing for Deep Recommender Systems
Huizhong Guo (Zhejiang University), Jinfeng Li (Alibaba Group), Jingyi Wang (Zhejiang University), Xiangyu Liu (Alibaba Group), Dongxia Wang (Zhejiang University), Zehong Hu (Alibaba Group), Rong Zhang (Alibaba Group), Hui Xue (Alibaba Group)

13:45 (15m, Talk) DyCL: Dynamic Neural Network Compilation Via Program Rewriting and Graph Optimization
Simin Chen (University of Texas at Dallas), Shiyi Wei (University of Texas at Dallas), Cong Liu (University of California at Riverside), Wei Yang (University of Texas at Dallas)

14:00 (15m, Talk) Validating Multimedia Content Moderation Software via Semantic Fusion
Wenxuan Wang (Chinese University of Hong Kong), Jingyuan Huang (Chinese University of Hong Kong), Chang Chen (Chinese University of Hong Kong), Jiazhen Gu (Chinese University of Hong Kong), Jianping Zhang (Chinese University of Hong Kong), Weibin Wu (Sun Yat-sen University), Pinjia He (Chinese University of Hong Kong), Michael Lyu (Chinese University of Hong Kong)

14:15 (15m, Talk) What You See Is What You Get? It Is Not the Case! Detecting Misleading Icons for Mobile Applications
Linlin Li (Southern University of Science and Technology), Ruifeng Wang (Northeastern University), Xian Zhan (Southern University of Science and Technology), Ying Wang (Northeastern University), Cuiyun Gao (Harbin Institute of Technology), Sinan Wang (Southern University of Science and Technology), Yepang Liu (Southern University of Science and Technology)

14:30 (15m, Talk) How Effective Are Neural Networks for Fixing Security Vulnerabilities
Yi Wu (Purdue University), Nan Jiang (Purdue University), Hung Viet Pham (University of Waterloo), Thibaud Lutellier (University of Alberta), Jordan Davis (Purdue University), Lin Tan (Purdue University), Petr Babkin (J.P. Morgan AI Research), Sameena Shah (J.P. Morgan AI Research)

14:45 (15m, Talk) ModelObfuscator: Obfuscating Model Information to Protect Deployed ML-Based Systems
Mingyi Zhou (Monash University), Xiang Gao (Beihang University), Jing Wu (Monash University), John Grundy (Monash University), Xiao Chen (Monash University), Chunyang Chen (Monash University), Li Li (Beihang University)