Is Feature Selection Secure against Training Data Poisoning?
- Submitting institution
- The University of Manchester
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 40100497
- Type
- E - Conference contribution
- DOI
- -
- Title of conference / published proceedings
- Proceedings of the 32nd International Conference on Machine Learning
- First page
- 1689
- Volume
- -
- Issue
- -
- ISSN
- -
- Open access status
- -
- Month of publication
- July
- Year of publication
- 2015
- URL
- -
- Supplementary information
- -
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
- 5
- Research group(s)
- A - Computer Science
- Citation count
- -
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- "Gartner’s Top 10 Strategic Technology Trends (2020), predicts that ""through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, AI model theft, or adversarial samples to attack AI-powered systems.""
This is the first demonstration that ""feature selection"" algorithms - a ubiquitous step in data science pre-processing - are highly vulnerable to these ""poisoning"" attacks (i.e. data that is designed with malicious intent to distort the learnt statistics).
Keynote at CASA (Cluster of Excellence for Cyber Security) Distinguished Lecture Series (June 2020).
ICML acceptance rate was 26% (270/1037)."
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -