Discovering Interpretable Representations for Both Deep Generative and Discriminative Models
- Submitting institution
-
University of Glasgow
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 11-09958
- Type
- E - Conference contribution
- DOI
-
-
- Title of conference / published proceedings
- 35th International Conference on Machine Learning, ICML 2018
- First page
- 50
- Volume
- -
- Issue
- -
- ISSN
- 2640-3498
- Open access status
- Compliant
- Month of publication
- July
- Year of publication
- 2018
- URL
-
http://eprints.gla.ac.uk/213474/
- Supplementary information
-
-
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
-
2
- Research group(s)
-
-
- Citation count
- -
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- ORIGINALITY: Proposes two novel algorithms for interpreting data representations, with human-in-the-loop interpretability priors that use active learning to elicit side information from human experts and optimise interpretable representations. SIGNIFICANCE: Presented at the top machine-learning conference ICML, these have become key algorithms, very widely cited and clearly influencing subsequent leading work. Interpretable Lens (ILVM) models are widely applicable, with a flexible, invertible transformation allowing data interpretation without model deterioration. Jointly Learned (JLVM) models trade off interpretability against data reconstruction during learning, providing novel perspectives on the relationship between compression and regularisation. RIGOUR: The models are mathematically derived using theoretical insights from invertible systems and information theory. State-of-the-art experimental results are achieved on three datasets.
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -