Thu 20 Jul 2023 10:30 - 10:45 at Amazon Auditorium (Gates G20) - ISSTA 9: Testing 2 Chair(s): Cristian Cadar

Quality assurance (QA) tools are receiving increasing attention and are widely used by developers. Given the wide range of QA solutions, how to evaluate QA tools remains an open question. Most existing research is limited in the following ways: (i) it compares tools without analyzing their scanning rules; (ii) studies disagree on tool effectiveness due to differences in methodology and benchmark datasets; (iii) it does not separately analyze the role of warnings; and (iv) there is no large-scale study of time performance. To address these problems, in this paper we systematically select 6 free or open-source tools for a comprehensive study from a list of 148 existing Java QA tools. To carry out this study and evaluate the tools along multiple dimensions, we first map their scanning rules to CWE categories and analyze the coverage and granularity of the rules. We then conduct experiments on 5 benchmarks, comprising 1,425 bugs, to investigate the effectiveness of these tools. Furthermore, we take substantial effort to investigate the effectiveness of warnings by comparing real labeled bugs with the warnings and examining their role in bug detection. Finally, we assess these tools' time performance on 1,049 projects. The findings of our comprehensive study can help developers improve their tools and provide users with suggestions for selecting QA tools.

Thu 20 Jul

Displayed time zone: Pacific Time (US & Canada)

10:30 - 12:00
ISSTA 9: Testing 2 (Technical Papers) at Amazon Auditorium (Gates G20)
Chair(s): Cristian Cadar Imperial College London
10:30
15m
Talk
A Comprehensive Study on Quality Assurance Tools for Java
Technical Papers
Han Liu East China Normal University, Sen Chen Tianjin University, Ruitao Feng UNSW, Chengwei Liu Nanyang Technological University, Kaixuan Li East China Normal University, Zhengzi Xu Nanyang Technological University, Liming Nie Nanyang Technological University, Yang Liu Nanyang Technological University, Yixiang Chen East China Normal University
DOI
10:45
15m
Talk
Transforming Test Suites into Croissants
Technical Papers
Yang Chen University of Illinois at Urbana-Champaign, Alperen Yildiz Sabanci University, Darko Marinov University of Illinois at Urbana-Champaign, Reyhaneh Jabbarvand University of Illinois at Urbana-Champaign
DOI
11:00
15m
Talk
SlipCover: Near Zero-Overhead Code Coverage for Python
Technical Papers
Juan Altmayer Pizzorno University of Massachusetts Amherst, Emery D. Berger University of Massachusetts Amherst
DOI
11:15
15m
Talk
To Kill a Mutant: An Empirical Study of Mutation Testing Kills
Technical Papers
Hang Du University of California at Irvine, Vijay Krishna Palepu Microsoft, James Jones University of California at Irvine
DOI
11:30
15m
Talk
Systematically Producing Test Orders to Detect Order-Dependent Flaky Tests
Technical Papers
Chengpeng Li University of Texas at Austin, M. Mahdi Khosravi Middle East Technical University, Wing Lam George Mason University, August Shi University of Texas at Austin
DOI
11:45
15m
Talk
Extracting Inline Tests from Unit Tests
Technical Papers
Yu Liu University of Texas at Austin, Pengyu Nie University of Texas at Austin, Anna Guo University of Texas at Austin, Milos Gligoric University of Texas at Austin, Owolabi Legunsen Cornell University
DOI