Exploiting Unintended Feature Leakage in Collaborative Learning
- Submitting institution
-
University College London
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 14542
- Type
- E - Conference contribution
- DOI
-
10.1109/SP.2019.00029
- Title of conference / published proceedings
- 2019 IEEE Symposium on Security and Privacy (SP 2019)
- First page
- 691
- Volume
- 2019-May
- Issue
- -
- ISSN
- 1081-6011
- Open access status
- Technical exception
- Month of publication
- September
- Year of publication
- 2019
- URL
-
-
- Supplementary information
-
-
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
-
3
- Research group(s)
-
-
- Citation count
- 29
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- Federated learning is a privacy-friendly approach that allows multiple participants to build a joint model by training locally and periodically exchanging model updates, so that sensitive data never leaves the user's device. This paper, however, demonstrates that the updates actually leak unintended information about participants' training data, even when that information is uncorrelated with the main task. The implications of this work are immediate, as federated learning is already deployed in the wild by Google in Android, e.g., for the predictive keyboard.
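The train-locally-then-average mechanism described above can be sketched as follows. This is a minimal illustrative example, not the paper's code: the one-parameter linear model, learning rate, client data, and all function names are hypothetical, chosen only to show that the server sees weight updates rather than raw data.

```python
def gradient(w, data):
    # Mean-squared-error gradient for a hypothetical 1-parameter model y = w * x.
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

def local_update(w, data, lr=0.1):
    # Client trains on its private data; only this weight delta leaves the device.
    return -lr * gradient(w, data)

def fed_avg(w, client_datasets, lr=0.1):
    # Server averages the clients' updates and applies them to the global model.
    deltas = [local_update(w, d, lr) for d in client_datasets]
    return w + sum(deltas) / len(deltas)

# Two clients whose private data both follow y = 2x (hypothetical values).
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = fed_avg(w, clients)
# The joint model converges toward w = 2 without any raw (x, y) pair
# being sent to the server -- yet, as the paper shows, the exchanged
# deltas themselves can still leak properties of the clients' data.
```

The point of the sketch is the information flow: `fed_avg` receives only the `deltas`, never `clients`, which is exactly the channel whose unintended leakage the paper analyses.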
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -