Multi-task Learning by Maximizing Statistical Dependence
- Submitting institution
-
The University of Bath
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 188557467
- Type
- E - Conference contribution
- DOI
-
10.1109/CVPR.2018.00365
- Title of conference / published proceedings
- 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- First page
- 3465
- Volume
- -
- Issue
- -
- ISSN
- 2575-7075
- Open access status
- Compliant
- Month of publication
- December
- Year of publication
- 2018
- URL
-
-
- Supplementary information
-
-
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
-
2
- Research group(s)
-
-
- Citation count
- 2
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- Multi-task and transfer learning enable knowledge gained from solving one machine learning problem to be transferred to help solve others. Existing approaches are limited in that all learners must share the same form, e.g., all neural networks or all support vector machines, even though the best form may vary across problems. This work removed that limitation by enabling learners from different machine learning solvers to be combined, e.g., neural networks with Gaussian process predictors. It led to further fundamental contributions, including the first model-agnostic transfer learning approach, by Kim et al. at ECCV 2020.
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -