DeepX: A Software Accelerator for Low-Power Deep Learning Inference on Mobile Devices
- Submitting institution
-
University of Cambridge
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 8943
- Type
- E - Conference contribution
- DOI
-
10.1109/IPSN.2016.7460664
- Title of conference / published proceedings
- 2016 15th ACM/IEEE International Conference on Information Processing in Sensor Networks, IPSN 2016 - Proceedings
- First page
- 1
- Volume
- -
- Issue
- -
- ISSN
- -
- Open access status
- -
- Month of publication
- April
- Year of publication
- 2016
- URL
-
-
- Supplementary information
-
-
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
-
6
- Research group(s)
-
-
- Citation count
- 0
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- This is the first academic paper to demonstrate techniques that made it feasible to execute compressed deep neural networks, with minimal loss in accuracy, on commodity DSPs already present in the chipsets of consumer devices such as smartphones. This line of work argued for the role of DSPs as a key part of the solution for on-device execution of deep learning, and for the value of heterogeneous compute in general. Four years later, the architectural solution of DSP deployment has been incorporated into commercial deep learning frameworks such as Google's TensorFlow through partnerships with hardware vendors like Qualcomm.
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -