More and more edge devices and mobile apps are leveraging deep learning (DL) capabilities. Deploying such models on devices (referred to as on-device models) rather than as remote cloud-hosted services has gained popularity because it avoids transmitting users' data off the device and achieves low response latency. However, on-device models can be easily attacked: they can be extracted by unpacking the corresponding apps, which fully exposes them to attackers. Recent studies show that attackers can easily generate white-box-like attacks against an on-device model, or even invert its training data. To protect on-device models from white-box attacks, we propose a novel technique called model obfuscation. Specifically, model obfuscation hides and obfuscates a model's key information (structure, parameters, and attributes) through renaming, parameter encapsulation, neural structure obfuscation, shortcut injection, and extra layer injection. We have developed a prototype tool, ModelObfuscator, to automatically obfuscate on-device TFLite models. Our experiments show that this approach can dramatically improve model security by significantly increasing the difficulty of parsing a model's inner information, without increasing model latency. On-device model obfuscation has the potential to become a fundamental technique for on-device model deployment. Our prototype tool is publicly available at \url{https://github.com/zhoumingyi/ModelObfuscator}.
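To make two of the listed ideas concrete, the sketch below applies layer renaming and extra (no-op) layer injection to a toy Keras model before TFLite conversion. This is a minimal illustration under our own assumptions, not the ModelObfuscator implementation (which operates on TFLite models directly); the `obf_` naming scheme and helper functions are hypothetical, and a converter may fold away trivial no-op layers, which a real obfuscator would have to prevent.

```python
# Minimal sketch of two obfuscation ideas from the abstract:
# (1) renaming layers to meaningless identifiers, and
# (2) injecting extra no-op layers that alter the visible graph structure.
# Hypothetical illustration only -- not the authors' tool.
import secrets
import tensorflow as tf

def build_model():
    # Toy stand-in for a real on-device model, with semantic layer names
    # that an attacker could exploit when parsing the model file.
    inputs = tf.keras.Input(shape=(28, 28, 1))
    x = tf.keras.layers.Conv2D(8, 3, activation="relu", name="conv_feature")(inputs)
    x = tf.keras.layers.Flatten(name="flatten")(x)
    outputs = tf.keras.layers.Dense(10, name="classifier")(x)
    return tf.keras.Model(inputs, outputs)

def obfuscate(model):
    """Rebuild `model` with random layer names and injected no-op layers."""
    x = inputs = tf.keras.Input(shape=model.input_shape[1:])
    for layer in model.layers:
        if isinstance(layer, tf.keras.layers.InputLayer):
            continue
        # Renaming: replace the semantic layer name with a random identifier.
        cfg = layer.get_config()
        cfg["name"] = "obf_" + secrets.token_hex(4)
        clone = layer.__class__.from_config(cfg)
        x = clone(x)  # calling the clone builds it, so weights can be copied
        clone.set_weights(layer.get_weights())
        # Extra layer injection: a linear activation changes the graph
        # structure without changing the computed function.
        x = tf.keras.layers.Activation(
            "linear", name="obf_" + secrets.token_hex(4))(x)
    return tf.keras.Model(inputs, x)

obf = obfuscate(build_model())
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(obf).convert()
with open("obfuscated.tflite", "wb") as f:
    f.write(tflite_bytes)
```

An attacker inspecting the resulting file would then see opaque `obf_*` identifiers rather than semantic names such as `classifier`. Per the abstract, the actual tool goes further, also applying parameter encapsulation, neural structure obfuscation, and shortcut injection.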

Wed 19 Jul

Displayed time zone: Pacific Time (US & Canada)

13:30 - 15:00
ISSTA 7: Testing and Analysis of Machine-Learning Systems (Technical Papers), Smith Classroom (Gates G10)
Chair(s): Vincenzo Riccio (University of Udine)
13:30 (15m, Talk)
FairRec: Fairness Testing for Deep Recommender Systems
Huizhong Guo (Zhejiang University), Jinfeng Li (Alibaba Group), Jingyi Wang (Zhejiang University), Xiangyu Liu (Alibaba Group), Dongxia Wang (Zhejiang University), Zehong Hu (Alibaba Group), Rong Zhang (Alibaba Group), Hui Xue (Alibaba Group)
13:45 (15m, Talk)
DyCL: Dynamic Neural Network Compilation Via Program Rewriting and Graph Optimization
Simin Chen (University of Texas at Dallas), Shiyi Wei (University of Texas at Dallas), Cong Liu (University of California at Riverside), Wei Yang (University of Texas at Dallas)
14:00 (15m, Talk)
Validating Multimedia Content Moderation Software via Semantic Fusion
Wenxuan Wang (Chinese University of Hong Kong), Jingyuan Huang (Chinese University of Hong Kong), Chang Chen (Chinese University of Hong Kong), Jiazhen Gu (Chinese University of Hong Kong), Jianping Zhang (Chinese University of Hong Kong), Weibin Wu (Sun Yat-sen University), Pinjia He (Chinese University of Hong Kong), Michael Lyu (Chinese University of Hong Kong)
14:15 (15m, Talk)
What You See Is What You Get? It Is Not the Case! Detecting Misleading Icons for Mobile Applications
Linlin Li (Southern University of Science and Technology), Ruifeng Wang (Northeastern University), Xian Zhan (Southern University of Science and Technology), Ying Wang (Northeastern University), Cuiyun Gao (Harbin Institute of Technology), Sinan Wang (Southern University of Science and Technology), Yepang Liu (Southern University of Science and Technology)
14:30 (15m, Talk)
How Effective Are Neural Networks for Fixing Security Vulnerabilities
Yi Wu (Purdue University), Nan Jiang (Purdue University), Hung Viet Pham (University of Waterloo), Thibaud Lutellier (University of Alberta), Jordan Davis (Purdue University), Lin Tan (Purdue University), Petr Babkin (J.P. Morgan AI Research), Sameena Shah (J.P. Morgan AI Research)
14:45 (15m, Talk)
ModelObfuscator: Obfuscating Model Information to Protect Deployed ML-Based Systems
Mingyi Zhou (Monash University), Xiang Gao (Beihang University), Jing Wu (Monash University), John Grundy (Monash University), Xiao Chen (Monash University), Chunyang Chen (Monash University), Li Li (Beihang University)