Software defect prediction: do different classifiers find the same defects?
- Submitting institution
-
Lancaster University
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 250153406
- Type
- D - Journal article
- DOI
-
10.1007/s11219-016-9353-3
- Title of journal
- Software Quality Journal
- Article number
- -
- First page
- 525
- Volume
- 26
- Issue
- 2
- ISSN
- 0963-9314
- Open access status
- Compliant
- Month of publication
- February
- Year of publication
- 2017
- URL
-
- Supplementary information
-
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
-
2
- Research group(s)
-
H - Software Engineering
- Citation count
- 24
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- The paper empirically debunks the previously accepted assumption that machine learning algorithms perform similarly in defect prediction. It is the first study to combine industry and open-source data to demonstrate empirically the need for ensembles of machine learning techniques for effective defect prediction. Working closely with Sky developers, who provided the industry data and validated the tools produced, the work has been adopted by numerous researchers, many of whom have developed specific ensemble techniques in response to the paper. The work was funded by EPSRC (EP/L011751/1); a £1M follow-on grant (EP/S005730/1) aims to develop fixes for predicted defects.
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -