Do Automatically Generated Unit Tests Find Real Faults? An Empirical Study of Effectiveness and Challenges (T)
- Submitting institution
-
The University of Leicester
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 1426
- Type
- E - Conference contribution
- DOI
-
10.1109/ASE.2015.86
- Title of conference / published proceedings
- 30th IEEE/ACM International Conference on Automated Software Engineering, ASE 2015, Lincoln, NE, USA, November 9-13, 2015
- First page
- 201
- Volume
- -
- Issue
- -
- ISSN
- -
- Open access status
- -
- Month of publication
- November
- Year of publication
- 2015
- URL
-
-
- Supplementary information
-
https://doi.org/10.1109/ASE.2015.86
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
-
5
- Research group(s)
-
-
- Citation count
- 59
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- This widely-cited paper received an ACM SIGSOFT Distinguished Paper Award at ASE’15 (Automated Software Engineering Conference) and constitutes a milestone in automated test generation. It delivers conclusive quantitative evidence of the limited fault-effectiveness of state-of-the-art unit test generation tools. It also contributes comprehensive qualitative insights into their strengths and weaknesses. These insights have prompted recent advances in automated test generation (e.g. oracle synthesis to overcome low fault-effectiveness (Goffi et al., ISSTA’16), the relationship between program complexity and fault detection (Gay et al., ACM TOSEM 2016), and automated program repair based on generated tests (Martinez et al., EMSE 2020)).
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -