Direct Speech Reconstruction from Articulatory Sensor Data by Machine Learning
- Submitting institution
- The University of Hull
- Unit of assessment
- 12 - Engineering
- Output identifier
- 1398320
- Type
- D - Journal article
- DOI
- 10.1109/TASLP.2017.2757263
- Title of journal
- IEEE/ACM Transactions on Audio, Speech, and Language Processing
- Article number
- -
- First page
- 2362
- Volume
- 25
- Issue
- 12
- ISSN
- 2329-9290
- Open access status
- Compliant
- Month of publication
- November
- Year of publication
- 2017
- URL
- -
- Supplementary information
- -
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
- 7
- Research group(s)
- -
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- This paper compares three techniques for Direct Synthesis of speech from articulator movement detected using Permanent Magnet Articulography. A method based on Recurrent Neural Networks delivered world-leading performance, achieving 92% intelligibility as measured by objective speech quality metrics and listening tests. The results of this NIHR-funded research were incorporated into a wearable prototype real-time speech restoration device, developed for laryngectomy patient trials by the project’s industry partner, Practical Control Ltd (ed.holdsworth@practicalcontrol.com). The device eliminates the effect of background magnetic fields using a method protected under patent (US10283120B2).
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -