Concurrency testing using controlled schedulers: an empirical study
- Submitting institution
- Imperial College of Science, Technology and Medicine
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 2188
- Type
- D - Journal article
- DOI
- 10.1145/2858651
- Title of journal
- ACM Transactions on Parallel Computing
- Article number
- 23
- First page
- -
- Volume
- 2
- Issue
- 4
- ISSN
- 2329-4949
- Open access status
- Out of scope for open access requirements
- Month of publication
- February
- Year of publication
- 2016
- URL
- -
- Supplementary information
- 10.1145/2858651
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
- 2
- Research group(s)
- -
- Citation count
- -
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- An independent evaluation of controlled scheduling methods for concurrency bug-finding. Demonstrated that a simple randomised baseline scheduler, ignored in most prior work, is often superior to more sophisticated schedulers at finding bugs. Led to a collaboration with Microsoft on bug-finding via randomised scheduling in the P# system (PLDI'15, https://dl.acm.org/citation.cfm?id=2737996), now deployed to find bugs in Azure Cloud Services (Microsoft, contact: FoEREF@ic.ac.uk). Benchmarks open-sourced as SCTBench (https://github.com/mc-imperial/sctbench) and used for evaluation in several follow-on works. Extended version of a PPoPP’14 paper (acceptance rate: 16%) that won the Best Student Paper award (https://dl.acm.org/citation.cfm?id=2555260) and was invited for the special issue of ACM Transactions on Parallel Computing (6/28 papers selected).
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -