The deep learning (DL) compiler is a vital piece of infrastructure, enabling the deployment of deep neural networks (DNNs) on diverse hardware platforms such as mobile devices and the Raspberry Pi.
Its primary function is to translate DNN programs written in high-level DL frameworks such as PyTorch and TensorFlow into portable executables, which the deployed host programs can then invoke flexibly. However, existing DL compilers rely on a tracing mechanism: they feed a runtime input to the neural network program and trace its execution path to obtain the computational graph needed for compilation. Unfortunately, this mechanism falls short for modern dynamic neural networks (DyNNs), whose computational graphs vary with the input. Consequently, conventional DL compilers struggle to compile DyNNs into correct executable code. To address this limitation, we propose DyCL, a flexible approach that enables existing DL compilers to successfully compile DyNNs. DyCL tackles the dynamic nature of DyNNs by restructuring the control and data flow of the original program during compilation. Specifically, DyCL applies program analysis and transformation techniques to convert a DyNN into multiple sub-neural networks. Each sub-neural network is free of conditional statements and is compiled independently. DyCL then synthesizes a host API that models the control flow of the DyNN and orchestrates the invocation of the sub-neural networks.
Our evaluation demonstrates the effectiveness of DyCL: it achieves a 100% success rate in compiling all of the evaluated DyNNs, and the compiled executables run between $1.12\times$ and $20.21\times$ faster than the original DyNNs executed on general-purpose DL frameworks.
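To make the idea concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the toy `ToyDyNN` model, its branch sub-modules, and the `host_api` dispatcher are illustrative assumptions. It shows why tracing captures only one branch of an input-dependent `forward`, and how splitting the model into conditional-free sub-networks, each traced independently and invoked by a host function that mirrors the original control flow, sidesteps the problem.

```python
# Minimal, hypothetical sketch of the idea behind DyCL (not the authors' code).
# ToyDyNN, its sub-modules, and host_api are illustrative assumptions.
import torch
import torch.nn as nn

class ToyDyNN(nn.Module):
    """A dynamic neural network: the executed graph depends on the input."""
    def __init__(self):
        super().__init__()
        self.shallow = nn.Linear(8, 2)
        self.deep = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

    def forward(self, x):
        # Input-dependent control flow: torch.jit.trace records only the
        # branch taken for the example input, silently dropping the other.
        if x.abs().mean() < 0.5:
            return self.shallow(x)
        return self.deep(x)

model = ToyDyNN()
example = torch.randn(1, 8)

# DyCL-style decomposition: each branch becomes a conditional-free
# sub-network that a tracing-based compiler can handle; here torch.jit.trace
# stands in for the backend compiler.
sub_shallow = torch.jit.trace(model.shallow, example)
sub_deep = torch.jit.trace(model.deep, example)

def host_api(x):
    """Host function that mirrors the original control flow and dispatches
    to the independently compiled sub-networks."""
    if x.abs().mean() < 0.5:
        return sub_shallow(x)
    return sub_deep(x)

print(host_api(torch.randn(1, 8)).shape)  # torch.Size([1, 2])
```

In this sketch the host dispatcher is written by hand; per the abstract, DyCL synthesizes the corresponding host API automatically from the rewritten program.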

Wed 19 Jul

Displayed time zone: Pacific Time (US & Canada)

13:30 - 15:00
ISSTA 7: Testing and Analysis of Machine-Learning Systems (Technical Papers) at Smith Classroom (Gates G10)
Chair(s): Vincenzo Riccio (University of Udine)
13:30 (15m) Talk: FairRec: Fairness Testing for Deep Recommender Systems (Technical Papers)
Huizhong Guo (Zhejiang University), Jinfeng Li (Alibaba Group), Jingyi Wang (Zhejiang University), Xiangyu Liu (Alibaba Group), Dongxia Wang (Zhejiang University), Zehong Hu (Alibaba Group), Rong Zhang (Alibaba Group), Hui Xue (Alibaba Group)

13:45 (15m) Talk: DyCL: Dynamic Neural Network Compilation Via Program Rewriting and Graph Optimization (Technical Papers)
Simin Chen (University of Texas at Dallas), Shiyi Wei (University of Texas at Dallas), Cong Liu (University of California at Riverside), Wei Yang (University of Texas at Dallas)

14:00 (15m) Talk: Validating Multimedia Content Moderation Software via Semantic Fusion (Technical Papers)
Wenxuan Wang (Chinese University of Hong Kong), Jingyuan Huang (Chinese University of Hong Kong), Chang Chen (Chinese University of Hong Kong), Jiazhen Gu (Chinese University of Hong Kong), Jianping Zhang (Chinese University of Hong Kong), Weibin Wu (Sun Yat-sen University), Pinjia He (Chinese University of Hong Kong), Michael Lyu (Chinese University of Hong Kong)

14:15 (15m) Talk: What You See Is What You Get? It Is Not the Case! Detecting Misleading Icons for Mobile Applications (Technical Papers)
Linlin Li (Southern University of Science and Technology), Ruifeng Wang (Northeastern University), Xian Zhan (Southern University of Science and Technology), Ying Wang (Northeastern University), Cuiyun Gao (Harbin Institute of Technology), Sinan Wang (Southern University of Science and Technology), Yepang Liu (Southern University of Science and Technology)

14:30 (15m) Talk: How Effective Are Neural Networks for Fixing Security Vulnerabilities (Technical Papers)
Yi Wu (Purdue University), Nan Jiang (Purdue University), Hung Viet Pham (University of Waterloo), Thibaud Lutellier (University of Alberta), Jordan Davis (Purdue University), Lin Tan (Purdue University), Petr Babkin (J.P. Morgan AI Research), Sameena Shah (J.P. Morgan AI Research)

14:45 (15m) Talk: ModelObfuscator: Obfuscating Model Information to Protect Deployed ML-Based Systems (Technical Papers)
Mingyi Zhou (Monash University), Xiang Gao (Beihang University), Jing Wu (Monash University), John Grundy (Monash University), Xiao Chen (Monash University), Chunyang Chen (Monash University), Li Li (Beihang University)