Principles of Explanatory Debugging to personalize interactive machine learning
- Submitting institution
- City, University of London
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 755
- Type
- E - Conference contribution
- DOI
- 10.1145/2678025.2701399
- Title of conference / published proceedings
- IUI '15: Proceedings of the 20th International Conference on Intelligent User Interfaces
- First page
- 126
- Volume
- -
- Issue
- -
- ISSN
- -
- Open access status
- -
- Month of publication
- March
- Year of publication
- 2015
- URL
- -
- Supplementary information
- -
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
- 3
- Research group(s)
- -
- Citation count
- -
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- This work is significant as the first to show how explanations can be designed and implemented to improve accuracy in interactive machine learning. The output extends work from Stumpf et al. (IUI 2007) and Kulesza et al. (CHI 2012). It was presented at IUI '15 (acceptance rate 23%) and was instrumental in shaping the DARPA call on Explainable AI (XAI), DARPA-BAA-16-53, issued on 16 August 2016 (contact: David Gunning, david.gunning@darpa.mil).
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -