Understanding and Visualizing Deep Visual Saliency Models
- Submitting institution
- University of Glasgow
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 11-11708
- Type
- E - Conference contribution
- DOI
- 10.1109/CVPR.2019.01045
- Title of conference / published proceedings
- 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- First page
- 10198
- Volume
- -
- Issue
- -
- ISSN
- 2575-7075
- Open access status
- Compliant
- Month of publication
- June
- Year of publication
- 2019
- URL
- http://eprints.gla.ac.uk/218232/
- Supplementary information
- -
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
- 4
- Research group(s)
- -
- Citation count
- 0
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- ORIGINALITY: Deep learning approaches can now predict with high accuracy where people will look in images. This article analyses the internal representations of deep saliency networks, demonstrating important differences between human and neural attention as well as limits to how such models generalise. SIGNIFICANCE: Attention is a key process for image analysis under limited resources, and understanding how humans and algorithms allocate attention is essential to improving computer vision performance. This article, published in the top computer vision conference (acceptance rate 25%), provides the first evidence of what deep saliency models learn and of their limitations. RIGOUR: The analysis is supported by extensive experiments comparing state-of-the-art saliency models.
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -