Identifying Annotator Bias: A new IRT-based method for bias identification
- Submitting institution
- The Open University
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 1676379
- Type
- E - Conference contribution
- DOI
- 10.18653/v1/2020.coling-main.421
- Title of conference / published proceedings
- Proceedings of The 28th International Conference on Computational Linguistics (COLING)
- First page
- 4787
- Volume
- -
- Issue
- -
- ISSN
- -
- Open access status
- -
- Month of publication
- December
- Year of publication
- 2020
- URL
- https://www.aclweb.org/anthology/2020.coling-main.421/
- Supplementary information
- -
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
- 2
- Research group(s)
- -
- Citation count
- -
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- Contemporary AI relies heavily on human annotation of data, making annotation quality pivotal to progress. The method demonstrated in this paper can be used to spot outlier annotators, improve annotation guidelines, and assess annotation reliability more rigorously. It recontextualises Item Response Theory (IRT), long used in educational testing and survey research to infer the latent cognitive traits of respondents. The paper introduces a conceptual twist by instead modelling properties of the linguistic items (e.g. fluency) as the latent trait. The approach has the potential to improve evaluation reliability, and hence annotation quality, in NLP and beyond.
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -
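The IRT reframing described under Additional information can be illustrated with a minimal Rasch-style sketch. This is not the authors' implementation: the data are simulated, the binary-label setup, gradient-ascent fit, and outlier rule are all illustrative assumptions. Each item j carries a latent quality theta_j and each annotator i a bias b_i, with P(positive label) = sigmoid(theta_j - b_i); annotators whose estimated bias deviates sharply from the group are flagged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 50 linguistic items with a latent quality score
# (e.g. fluency) and 6 annotators giving binary acceptability labels.
# Annotator 5 is simulated as a harsh outlier (large positive bias).
n_items, n_annot = 50, 6
theta_true = rng.normal(0.0, 1.0, n_items)          # latent item quality
bias_true = np.array([0.0, 0.1, -0.1, 0.2, -0.2, 2.5])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Rasch-style response model: P(label = 1) = sigmoid(theta_j - b_i)
labels = (rng.random((n_items, n_annot))
          < sigmoid(theta_true[:, None] - bias_true[None, :])).astype(float)

# Jointly estimate item qualities and annotator biases by gradient
# ascent on the Bernoulli log-likelihood, with a small L2 penalty to
# anchor the scale of the latent trait.
theta = np.zeros(n_items)
b = np.zeros(n_annot)
lr = 0.05
for _ in range(2000):
    p = sigmoid(theta[:, None] - b[None, :])
    resid = labels - p                    # d(log-likelihood)/d(logit)
    theta += lr * (resid.sum(axis=1) - 0.01 * theta)
    b += lr * (-resid.sum(axis=0) - 0.01 * b)

# Flag annotators whose estimated bias deviates strongly from the rest.
dev = np.abs(b - np.median(b))
outliers = np.where(dev > 3 * np.median(dev))[0]
print("estimated biases:", np.round(b, 2))
print("flagged annotators:", outliers.tolist())
```

The design choice mirrors the paper's twist: rather than treating annotators as test-takers whose ability is latent, the item property is the latent trait and annotator bias becomes the nuisance parameter to be estimated and inspected.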