Probing the Need for Visual Context in Multimodal Machine Translation
- Submitting institution
-
The University of Sheffield
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 5233
- Type
- E - Conference contribution
- DOI
-
10.18653/v1/n19-1422
- Title of conference / published proceedings
- Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
- First page
- 4159
- Volume
- 1
- Issue
- -
- ISSN
- -
- Open access status
- -
- Month of publication
- June
- Year of publication
- 2019
- URL
-
-
- Supplementary information
-
-
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
-
3
- Research group(s)
-
D - Natural Language Processing
- Citation count
- -
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- This paper introduces a novel methodology for analysing how an artificial neural network processes multiple modalities. It has influenced work on adversarial evaluation, such as https://www.aclweb.org/anthology/D19-6406/. It received a Best Short Paper award at NAACL, a CORE A-ranked conference. The work arose from an international collaboration between the ERC-funded MultiMT project (L. Specia, Sheffield/Imperial) and the EU CHIST-ERA-funded M2CR project (L. Barrault, then at Le Mans).
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -