Learning a deep model for human action recognition from novel viewpoints
- Submitting institution
- The University of Lancaster
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 249320667
- Type
- D - Journal article
- DOI
- 10.1109/TPAMI.2017.2691768
- Title of journal
- IEEE Transactions on Pattern Analysis and Machine Intelligence
- Article number
- -
- First page
- 667
- Volume
- 40
- Issue
- 3
- ISSN
- 0162-8828
- Open access status
- Deposit exception
- Month of publication
- March
- Year of publication
- 2018
- URL
- -
- Supplementary information
- -
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
- 2
- Research group(s)
- B - Data Science
- Citation count
- 80
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- This paper presents the first framework for synthetically generating an unlimited number of photorealistic human action videos by fitting 3D human shapes to realistic skeleton data while varying camera viewpoints. This is significant because previous models for view-invariant activity recognition were trained on a limited number of action videos, owing to the unavailability of large-scale training datasets, which severely restricted their potential. In contrast, this approach produces an unlimited number of training videos from an effectively unlimited range of viewpoints, enabling the training of data-hungry deep models; experimental results show significant performance benefits. The work was funded by an ARC Discovery Grant in Australia (AU$293K).
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -