Disentangling Disentanglement in Variational Autoencoders
- Submitting institution
-
University of Edinburgh
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 160698148
- Type
- E - Conference contribution
- DOI
-
-
- Title of conference / published proceedings
- Proceedings of the 36th International Conference on Machine Learning
- First page
- 4402
- Volume
- 97
- Issue
- -
- ISSN
- 2640-3498
- Open access status
- Technical exception
- Month of publication
- June
- Year of publication
- 2019
- URL
-
-
- Supplementary information
-
http://proceedings.mlr.press/v97/mathieu19a.html
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
-
3
- Research group(s)
-
B - Data Science and Artificial Intelligence
- Citation count
- -
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- One of the most important aspects of learning unsupervised, auto-encoded transformations of data is understanding the information-theoretic principles that affect the quality of the learned representations. This work conducts both a theoretical and an empirical analysis to show that common assumptions about the independence (or disentanglement) of learned representations can be misguided, and proposes an alternative view of the issue that leads to an improved learning objective. Using this objective, one can target a wide variety of constraints on learned representations beyond just independence, such as sparsity and clustering, in a principled and effective manner.
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -