Improving the robustness of neural networks using K-support norm based adversarial training
- Submitting institution
-
University of Sussex
- Unit of assessment
- 12 - Engineering
- Output identifier
- 9832_66081
- Type
- D - Journal article
- DOI
-
10.1109/ACCESS.2016.2643678
- Title of journal
- IEEE Access
- Article number
- -
- First page
- 9501
- Volume
- 4
- Issue
- 2016
- ISSN
- 2169-3536
- Open access status
- Compliant
- Month of publication
- December
- Year of publication
- 2016
- URL
-
http://dx.doi.org/10.1109/ACCESS.2016.2643678
- Supplementary information
-
-
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
-
6
- Research group(s)
-
-
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- Deep learning in multi-layer neural networks has significantly improved pattern recognition performance over previous methods. This paper addresses the important problem of the robustness of these networks to small but significant perturbations of the training dataset, known as adversarial noise. A novel training approach based on a K-support norm noise model is presented to address this problem. Thorough testing on two benchmark datasets shows that, in comparison to existing training methods, the novel training method results in significantly enhanced robustness of the neural network in the presence of adversarial noise.
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -