Multi-Modal Domain Adaptation for Fine-Grained Action Recognition
- Submitting institution
- University of Bristol
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 250730250
- Type
- E - Conference contribution
- DOI
- 10.1109/CVPR42600.2020.00020
- Title of conference / published proceedings
- 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR): CVPR 2020
- First page
- 119
- Volume
- -
- Issue
- -
- ISSN
- 2575-7075
- Open access status
- Compliant
- Month of publication
- August
- Year of publication
- 2020
- URL
- -
- Supplementary information
- -
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
- 1
- Research group(s)
- C - Visual Information Lab
- Citation count
- 0
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- Accepted as an oral presentation (top 5% of submissions at a highly competitive venue). The paper uses self-supervision across modalities (in this case appearance and motion) for unsupervised domain adaptation (UDA), exploiting unlabelled videos in the new environment (public code: https://github.com/jonmun/MM-SADA-code). The paper forms the basis of a new challenge open to the research community (https://competitions.codalab.org/competitions/26096), the first challenge of its kind for adapting to actions in unseen environments. As a result of this paper, a research collaboration agreement was signed with Naver Labs Europe, including an internship for the first author, where we are exploring the approach for action retrieval [ongoing work].
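- Illustrative sketch: the following is a minimal PyTorch-style sketch (not the released TensorFlow code linked above) of the kind of self-supervised objective described in this entry, assuming the pretext task is predicting whether an appearance (RGB) feature and a motion (flow) feature come from the same clip. All class names, dimensions, and the loss weighting are hypothetical and for illustration only; because the task needs no action labels, it can be applied to unlabelled target-environment videos alongside supervised classification on the source.

```python
# Hedged sketch of a multi-modal self-supervised correspondence objective for UDA.
# Assumes precomputed per-clip RGB and flow features; names/dims are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CorrespondenceHead(nn.Module):
    """Binary classifier: do an RGB feature and a flow feature come from the same clip?"""
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(2 * feat_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 2),
        )

    def forward(self, rgb_feat: torch.Tensor, flow_feat: torch.Tensor) -> torch.Tensor:
        return self.classifier(torch.cat([rgb_feat, flow_feat], dim=1))

def correspondence_loss(rgb_feat, flow_feat, head):
    """Self-supervised loss: positives pair each clip's own modalities,
    negatives pair its RGB with another clip's flow (roll by one guarantees a mismatch).
    No action labels are needed, so this can be computed on unlabelled target videos."""
    n = rgb_feat.size(0)
    pos_logits = head(rgb_feat, flow_feat)                       # matching pairs -> label 1
    neg_logits = head(rgb_feat, torch.roll(flow_feat, 1, dims=0))  # mismatched pairs -> label 0
    logits = torch.cat([pos_logits, neg_logits], dim=0)
    labels = torch.cat([torch.ones(n), torch.zeros(n)]).long()
    return F.cross_entropy(logits, labels)

# Per-batch objective (sketch): supervised cross-entropy on labelled source clips plus the
# correspondence loss on both source and unlabelled target clips, weighted by a hypothetical lam:
# loss = F.cross_entropy(action_logits_src, src_labels) \
#        + lam * (correspondence_loss(src_rgb, src_flow, head)
#                 + correspondence_loss(tgt_rgb, tgt_flow, head))
```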
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -