Exploiting multi-CNN features in CNN-RNN based Dimensional Emotion Recognition on the OMG in-the-wild Dataset
- Submitting institution
-
University of Greenwich
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 29426
- Type
- D - Journal article
- DOI
-
10.1109/TAFFC.2020.3014171
- Title of journal
- IEEE Transactions on Affective Computing
- Article number
- -
- First page
- 1
- Volume
- 0
- Issue
- UNSPECIFIED
- ISSN
- 1949-3045
- Open access status
- Compliant
- Month of publication
- -
- Year of publication
- 2020
- URL
-
-
- Supplementary information
-
-
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
-
1
- Research group(s)
-
-
- Citation count
- -
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- Extracting latent variables from trained CNNs and analysing their temporal evolution through multiple RNN structures has proved central to affect recognition on in-the-wild audiovisual datasets. Our CNN-multiple-RNN methodology achieved state-of-the-art performance in video analysis and dimensional emotion recognition. Part of this work was submitted to the 'OMG-Emotion Challenge', a competition held jointly with the Special Session on Neural Models for Behaviour Recognition at WCCI/IJCNN 2018. Our approach ranked second among techniques that used only visual information for valence estimation, and third overall, outperforming methods that used additional modalities (audio, visual, text).
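- The CNN-to-RNN pipeline described above can be sketched minimally as follows. This is an illustrative sketch only: the feature dimensions, the single hand-written GRU cell, the random parameter initialisation, and the final valence/arousal regression head are all assumptions for demonstration, not the published architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: T video frames, F-dimensional latent features
# extracted per frame from a pre-trained CNN, H-dimensional GRU state.
T, F, H = 16, 512, 128

# Stand-in for per-frame CNN latent variables (in practice these come
# from a trained CNN's pooling/fully-connected layer).
features = rng.standard_normal((T, F))

# Randomly initialised GRU parameters (illustrative only).
Wz, Uz, bz = 0.01 * rng.standard_normal((H, F)), 0.01 * rng.standard_normal((H, H)), np.zeros(H)
Wr, Ur, br = 0.01 * rng.standard_normal((H, F)), 0.01 * rng.standard_normal((H, H)), np.zeros(H)
Wh, Uh, bh = 0.01 * rng.standard_normal((H, F)), 0.01 * rng.standard_normal((H, H)), np.zeros(H)
Wo, bo = 0.01 * rng.standard_normal((2, H)), np.zeros(2)  # regression head

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Analyse the temporal evolution of the CNN features frame by frame.
h = np.zeros(H)
for x in features:
    z = sigmoid(Wz @ x + Uz @ h + bz)              # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)              # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h) + bh)   # candidate state
    h = (1.0 - z) * h + z * h_cand

# Dimensional emotion estimates, squashed to [-1, 1] via tanh.
valence, arousal = np.tanh(Wo @ h + bo)
print(valence, arousal)
```

In the actual methodology, multiple RNN structures operate on the extracted features; the single GRU here simply shows how per-frame CNN latent variables are reduced to continuous valence/arousal predictions.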
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -