Presentation + Paper
Augmenting saliency maps with uncertainty
12 April 2021
Supriyo Chakraborty, Prudhvi Gurram, Franck Le, Lance Kaplan, Richard Tomsett
Abstract
Explanations are generated to accompany a model decision, indicating the features of the input data that were most relevant to that decision. Explanations are important not only for understanding the decisions of deep neural networks, which despite their huge success across multiple domains operate largely as abstract black boxes, but also for other model classes such as gradient-boosted decision trees. In this work, we propose methods, using both Bayesian and non-Bayesian approaches, to augment explanations with uncertainty scores. We believe that uncertainty-augmented saliency maps can help better calibrate the trust between human analysts and machine learning models.
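The core idea of pairing each saliency score with an uncertainty estimate can be sketched as follows. This is a minimal illustration assuming a Monte Carlo (e.g., dropout-based) sampling scheme, not the authors' actual method; the function name and the toy attribution are hypothetical stand-ins for a real gradient-based saliency computation.

```python
# Sketch: augmenting a saliency map with uncertainty via Monte Carlo
# sampling. Repeated stochastic attribution passes (e.g., under different
# dropout masks) yield a distribution of saliency maps; the per-feature
# mean gives the point-estimate map and the standard deviation gives the
# uncertainty score attached to each feature.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_saliency(x, n_samples=50, noise=0.1):
    """Hypothetical stand-in for repeated stochastic saliency passes.

    In practice each sample would come from a gradient-based attribution
    of a model with stochasticity enabled (e.g., dropout kept active at
    inference); here we simulate that with noise around a fixed toy
    attribution so the sketch is self-contained.
    """
    base = np.abs(np.sin(x))  # toy "true" attribution per feature
    return base + rng.normal(0.0, noise, size=(n_samples,) + x.shape)

x = np.linspace(0, np.pi, 8)          # toy 1-D input of 8 features
samples = stochastic_saliency(x)      # shape: (50, 8)
mean_map = samples.mean(axis=0)       # point-estimate saliency map
std_map = samples.std(axis=0)         # per-feature uncertainty score

print(mean_map.shape, std_map.shape)
```

A feature with high mean saliency but also high standard deviation would then be flagged to the analyst as influential yet unreliable, which is the calibration signal the abstract argues for.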
Conference Presentation
© (2021) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Supriyo Chakraborty, Prudhvi Gurram, Franck Le, Lance Kaplan, and Richard Tomsett "Augmenting saliency maps with uncertainty", Proc. SPIE 11746, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications III, 117461M (12 April 2021); https://doi.org/10.1117/12.2588026
KEYWORDS
Machine learning
