A deep learning approach for generalized speech animation
- Submitting institution
-
The University of East Anglia
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 182621319
- Type
- D - Journal article
- DOI
-
10.1145/3072959.3073699
- Title of journal
- ACM Transactions on Graphics
- Article number
- 93
- First page
- -
- Volume
- 36
- Issue
- 4
- ISSN
- 0730-0301
- Open access status
- Compliant
- Month of publication
- July
- Year of publication
- 2017
- URL
-
-
- Supplementary information
-
-
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
-
7
- Research group(s)
-
-
- Citation count
- 50
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
This paper presents a deep learning approach for automatically generating natural-looking speech animation that synchronises with input speech. The approach runs in real time and can be integrated into existing animation pipelines. The work formed the basis of a successful EPSRC UKRI Innovation Fellowship application, titled ‘Dynamically Accurate Avatars’ (EP/S001816/1), and a subsequent EPSRC-funded PhD project titled ‘Automatic Character Animation’. It also led to a consultancy project with New Zealand-based company UneeQ, a growing global digital human platform with clients including Vodafone and BMW.
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -