Inducing relational knowledge from BERT
- Submitting institution
- Cardiff University / Prifysgol Caerdydd
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 104374860
- Type
- E - Conference contribution
- DOI
- 10.1609/aaai.v34i05.6242
- Title of conference / published proceedings
- Proceedings of the AAAI Conference on Artificial Intelligence
- First page
- 7456
- Volume
- 34
- Issue
- 5
- ISSN
- 2159-5399
- Open access status
- Compliant
- Month of publication
- April
- Year of publication
- 2020
- URL
- https://doi.org/10.1609/aaai.v34i05.6242
- Supplementary information
- -
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
- 2
- Research group(s)
- A - Artificial intelligence and data analytics
- Citation count
- -
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- This article opened a new line of research into understanding how language models (which are ubiquitous in NLP given their state-of-the-art results across many tasks) capture relational knowledge. Prior to this work, approaches to extracting relational knowledge relied on manual templates (https://doi.org/10.18653/v1/D19-1250). We show that extracting knowledge from automatically generated templates is not only possible but also yields substantially better results. This was independently corroborated by experiments at other research labs (https://doi.org/10.1162/tacl_a_00324 and https://www.aclweb.org/anthology/2020.emnlp-main.346.pdf) and is highlighted in important surveys of this area (https://doi.org/10.1162/tacl_a_00349 and https://doi.org/10.1007/s11431-020-1647-3).
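- To illustrate the kind of template-based probing this line of work builds on, below is a minimal sketch of manual-template relational knowledge extraction, i.e. the baseline approach that the paper improves upon by inducing templates automatically. It assumes the HuggingFace transformers library and the bert-base-uncased checkpoint; the template string, relation, and top_k value are illustrative choices, not taken from the paper.

```python
# Sketch: probe BERT for the object of a relation using a hand-written
# template (manual-template probing). The submitted paper's method
# generates such templates automatically instead.
from transformers import pipeline

# Masked-language-model pipeline over a standard BERT checkpoint.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Manual template for the "capital of" relation; [MASK] marks the slot
# BERT is asked to fill with the relation's object.
template = "The capital of France is [MASK]."

# The top-scoring predictions serve as candidate objects for the relation.
for pred in fill_mask(template, top_k=3):
    print(f"{pred['token_str']:>10}  score={pred['score']:.3f}")
```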
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -