End-to-end multimodal emotion recognition using deep neural networks
- Submitting institution
-
Imperial College of Science, Technology and Medicine
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 2252
- Type
- D - Journal article
- DOI
-
10.1109/JSTSP.2017.2764438
- Title of journal
- IEEE Journal of Selected Topics in Signal Processing
- Article number
- 8
- First page
- 1301
- Volume
- 11
- Issue
- 8
- ISSN
- 1932-4553
- Open access status
- Compliant
- Month of publication
- October
- Year of publication
- 2017
- URL
-
-
- Supplementary information
-
-
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
-
4
- Research group(s)
-
-
- Citation count
- 95
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- The paper presents end-to-end learning of emotion recognition from audiovisual material; the associated open-source end2you toolkit (https://github.com/end2you/end2you) outperformed other AVEC 2016 entries. The toolkit is used as a baseline in competitive research challenges (e.g. ComParE; http://compare.openaudio.eu). This work extends our highly-cited ICASSP 2016 paper (https://doi.org/10.1109/ICASSP.2016.7472669), which received an IEEE Spoken Language Processing Student Travel Grant. The approach is currently used by RealEyes (https://www.realeyesit.com/), audEERING (https://www.audeering.com/) and other companies.
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -