Intriguing Properties of Adversarial ML Attacks in the Problem Space
- Submitting institution
- King's College London
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 126620616
- Type
- E - Conference contribution
- DOI
- 10.1109/SP40000.2020.00073
- Title of conference / published proceedings
- 2020 IEEE Symposium on Security and Privacy
- First page
- 1332
- Volume
- -
- Issue
- -
- ISSN
- 2375-1207
- Open access status
- Compliant
- Month of publication
- May
- Year of publication
- 2020
- URL
- -
- Supplementary information
- -
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
- 3
- Research group(s)
- -
- Citation count
- 5
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- Adversarial machine learning attacks have mostly been studied in the context of images and audio, where input and feature spaces are closely related. This work is the first to propose a rigorous theory that reformulates the problem for realizable attacks, in which an attacker must satisfy several constraints (e.g., semantics, plausibility). It shows that existing defences are insufficient and require principled rethinking. The work led to research funding from AVAST; invited talks (e.g., CyberSec&AI 2020, CASA Distinguished Lectures, Oracle Labs, Zhejiang University); collaborations (e.g., AVAST, UIUC, WUSTL, Imperial and UniBW); and adoption of our codebase by several institutions (https://s2lab.kcl.ac.uk/projects/intriguing/).
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -