Semantics derived automatically from language corpora contain human-like biases
- Submitting institution
-
The University of Bath
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 147668469
- Type
- D - Journal article
- DOI
-
10.1126/science.aal4230
- Title of journal
- Science
- Article number
- -
- First page
- 183
- Volume
- 356
- Issue
- 6334
- ISSN
- 0036-8075
- Open access status
- Compliant
- Month of publication
- April
- Year of publication
- 2017
- URL
-
-
- Supplementary information
-
http://www.sciencemag.org/content/356/6334/183/suppl/DC1
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
-
2
- Research group(s)
-
-
- Citation count
- 319
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- This paper demonstrates that AI trained on cultural artefacts acquires the biases of the culture that produced them, and that AI can therefore be used as a tool to track human biases. Several citing papers have since done this by analysing historic texts. The paper itself demonstrates that human and AI biases match real lived experience, giving important insight into the nature of bias and how prejudices can be propagated. Companies such as Google now check and correct their tools based on the methods and examples of this paper, cf. https://tinyurl.com/y4wd3y69
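The bias-tracking idea described above can be sketched in miniature. The paper's underlying technique measures association as a difference in cosine similarity between word vectors; the sketch below uses tiny hypothetical 3-dimensional vectors (illustrative assumptions, not data or code from the paper) to show how such a bias score is computed.

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def association(w, pleasant, unpleasant):
    # Mean similarity of word w to "pleasant" attribute vectors
    # minus its mean similarity to "unpleasant" ones.
    return (sum(cosine(w, a) for a in pleasant) / len(pleasant)
            - sum(cosine(w, b) for b in unpleasant) / len(unpleasant))

# Hypothetical toy vectors, not real embeddings.
vecs = {
    "flower":     [0.9, 0.1, 0.0],
    "insect":     [0.1, 0.9, 0.0],
    "pleasant":   [0.8, 0.2, 0.1],
    "unpleasant": [0.2, 0.8, 0.1],
}

# Positive score: "flower" associates more with pleasantness
# than "insect" does in this toy embedding space.
bias = (association(vecs["flower"], [vecs["pleasant"]], [vecs["unpleasant"]])
        - association(vecs["insect"], [vecs["pleasant"]], [vecs["unpleasant"]]))
print(bias > 0)
```

With real corpus-derived embeddings in place of the toy vectors, the same difference-of-similarities score recovers documented human implicit biases, which is what lets the method serve as a tool for auditing and correcting deployed systems.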
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -