A fused deep learning architecture for viewpoint classification of echocardiography
- Submitting institution
-
Middlesex University
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 701
- Type
- D - Journal article
- DOI
-
10.1016/j.inffus.2016.11.007
- Title of journal
- Information Fusion
- Article number
- -
- First page
- 103
- Volume
- 36
- Issue
- -
- ISSN
- 1566-2535
- Open access status
- Not compliant
- Month of publication
- November
- Year of publication
- 2016
- URL
-
http://eprints.mdx.ac.uk/20927/
- Supplementary information
-
-
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
-
3
- Research group(s)
-
-
- Citation count
- 44
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- While ultrasound (US) remains the first and essential tool for detecting heart disease, its low resolution and overlapping video frames from different viewpoints (n=8) considerably reduce diagnostic accuracy. This paper proposes a novel approach to classifying viewpoints by incorporating hand-crafted features into a deep learning neural network architecture. This is significant because the spatial and temporal information carried by the moving heart can be incorporated, and clinicians’ insight on diagnosis is embedded, leading to a transparent, robust and high-performing decision-support system. This work formed part of the WIDTH project, funded by EU FP7.
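- The fusion idea summarised above — combining hand-crafted descriptors with learned deep features before an 8-way viewpoint classifier — can be sketched as follows. This is a minimal illustrative sketch only: the feature dimensions, random weights, and concatenation-then-softmax layout are assumptions for illustration, not the paper's exact architecture.

```python
import math
import random

random.seed(0)

N_DEEP = 16   # learned deep-network features per frame (illustrative size)
N_HAND = 4    # hand-crafted spatio-temporal features per frame (illustrative size)
N_VIEWS = 8   # echocardiography viewpoints (n=8, per the abstract)

def fuse_and_classify(deep_feat, hand_feat, weights, bias):
    """Concatenate deep and hand-crafted features, then apply a softmax layer."""
    fused = deep_feat + hand_feat  # feature-level fusion by concatenation
    logits = [sum(w * x for w, x in zip(row, fused)) + b
              for row, b in zip(weights, bias)]
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]  # numerically stable softmax
    total = sum(exps)
    return [e / total for e in exps]

# Toy inputs standing in for one video frame's features (hypothetical values)
deep_feat = [random.gauss(0, 1) for _ in range(N_DEEP)]
hand_feat = [random.gauss(0, 1) for _ in range(N_HAND)]
weights = [[random.gauss(0, 0.1) for _ in range(N_DEEP + N_HAND)]
           for _ in range(N_VIEWS)]
bias = [0.0] * N_VIEWS

probs = fuse_and_classify(deep_feat, hand_feat, weights, bias)
print(len(probs), round(sum(probs), 6))  # 8 class probabilities summing to 1.0
```

In practice the fused vector would feed further trained layers rather than a single random linear map; the sketch shows only where the two feature streams meet.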
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -