Deep Logic Networks: Inserting and Extracting Knowledge from Deep Belief Networks
- Submitting institution
- City, University of London
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 794
- Type
- D - Journal article
- DOI
- 10.1109/TNNLS.2016.2603784
- Title of journal
- IEEE Transactions on Neural Networks and Learning Systems
- Article number
- -
- First page
- 246
- Volume
- 29
- Issue
- 2
- ISSN
- 2162-237X
- Open access status
- Not compliant
- Month of publication
- November
- Year of publication
- 2016
- URL
-
- Supplementary information
-
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
- 1
- Research group(s)
-
- Citation count
- 34
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- Explainability, grounded in a better understanding of the representations learned by neural networks, is now accepted as fundamental to the large-scale deployment of trustworthy AI. This output is the first to show a correspondence between symbolic knowledge and deep belief networks, enabling the extraction of knowledge from trained deep networks, which is key to explainability. The work has influenced research proposing new forms of image processing and weakly supervised learning (Zhang et al., IEEE TNNLS) and industry-led work on explainable AI (Townsend et al., Fujitsu Ltd).
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -