Convergence of sparse variational inference in Gaussian processes regression
- Submitting institution
- Imperial College of Science, Technology and Medicine
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 4973
- Type
- D - Journal article
- DOI
- -
- Title of journal
- Journal of Machine Learning Research
- Article number
- 131
- First page
- 1
- Volume
- 21
- Issue
- -
- ISSN
- 1532-4435
- Open access status
- Compliant
- Month of publication
- July
- Year of publication
- 2020
- URL
- -
- Supplementary information
- -
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
- 2
- Research group(s)
- -
- Citation count
- 1
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- This work establishes the computational complexity required to obtain an arbitrarily accurate Gaussian process approximation and shows that it is far lower than the complexity of exact methods. The trade-off between computational scaling and approximation quality was not previously known. It provides theoretical proof that the search for constant-factor speed-ups (e.g. http://proceedings.mlr.press/v118/wilk20a/wilk20a.pdf) will contribute to scaling Gaussian processes to big-data problems, without asymptotic complexity becoming a limiting factor. This is an extended version of an ICML’19 paper (http://proceedings.mlr.press/v97/burt19a.html) that was awarded Best Paper (2 awards from 3,424 submissions; https://venturebeat.com/2019/06/14/ai-weekly-icml-2019-top-papers-and-highlights). The work follows on from a highly cited NIPS'16 paper (http://papers.nips.cc/paper/6477-understanding-probabilistic-sparse-gaussian-process-approximations.pdf).
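To make the complexity claim above concrete, the following is a minimal sketch of the headline trade-off, stated informally. The notation (N training points, M inducing points, D input dimension) and the squared-exponential-kernel case are assumptions drawn from the standard sparse variational GP setting rather than quoted from this record; the precise conditions and constants are in the paper itself.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Informal sketch of the complexity comparison (assumed notation:
% N = training points, M = inducing points, D = input dimension).
\begin{align*}
\text{Exact GP regression:}\quad
  & \mathcal{O}(N^{3})\ \text{time},\ \mathcal{O}(N^{2})\ \text{memory},\\
\text{Sparse variational GP with $M$ inducing points:}\quad
  & \mathcal{O}(NM^{2} + M^{3})\ \text{time},\ \mathcal{O}(NM + M^{2})\ \text{memory},\\
\text{Paper's guarantee (squared-exponential kernel, informal):}\quad
  & M = \mathcal{O}\!\left(\log^{D} N\right)\ \text{suffices for the KL gap to vanish.}
\end{align*}
\end{document}
```

Under these (assumed, informal) conditions, near-exact posterior approximation is obtained at a cost that grows only slightly faster than linearly in N, rather than cubically as for exact inference.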
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -