Analyzing the noise robustness of deep neural networks
- Submitting institution
-
Cardiff University / Prifysgol Caerdydd
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 101443845
- Type
- D - Journal article
- DOI
-
10.1109/TVCG.2020.2969185
- Title of journal
- IEEE Transactions on Visualization and Computer Graphics
- Article number
- -
- First page
- 0
- Volume
- 0
- Issue
- -
- ISSN
- 1077-2626
- Open access status
- Compliant
- Month of publication
- January
- Year of publication
- 2020
- URL
-
http://dx.doi.org/10.1109/TVCG.2020.2969185
- Supplementary information
-
-
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
-
5
- Research group(s)
-
V - Visual computing
- Citation count
- -
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- This paper presents an interactive system that reveals how the predictions for adversarial examples diverge from correct predictions, supporting the development of adversarially robust solutions crucial for safety- and security-critical applications. A contribution analysis method was proposed to help experts retrace the layers of a deep neural network (DNN) and identify the root causes of misclassification. In case studies, domain experts used the system to carry out in-depth analyses and successfully identified the causes of wrong predictions. This work inspired further research into the interactive deciphering of adversarial attacks on DNNs (e.g., https://ieeexplore.ieee.org/document/9331279).
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -