Interpretable deep learning in medical imaging
3 April 2024
Abstract
We would like deep learning systems to aid radiologists with difficult decisions instead of replacing them with inscrutable black boxes. "Explaining" the black boxes with XAI tools is problematic, particularly in medical imaging, where such explanations are inconsistent and unreliable. Instead of explaining the black boxes, we can replace them with interpretable deep learning models that explain their reasoning processes in ways people can understand. One popular interpretable deep learning approach uses case-based reasoning, in which an algorithm compares a new test case to similar cases from the past ("this looks like that") and makes its decision based on those comparisons. Radiologists often use this kind of reasoning themselves when evaluating a challenging new case. In this talk, I will demonstrate interpretable machine learning techniques through applications to mammography and EEG analysis.
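As a concrete illustration of the case-based "this looks like that" idea, the sketch below implements a minimal prototype-similarity classifier in the spirit of ProtoPNet: an image is scored by how closely its feature patches match learned prototype vectors that stand in for past cases. The backbone, prototype count, and log-ratio similarity function are illustrative assumptions for this sketch, not the models presented in the talk.

```python
import torch
import torch.nn as nn

class ProtoClassifier(nn.Module):
    """Minimal case-based head: classify by similarity to learned prototypes."""

    def __init__(self, backbone: nn.Module, n_protos: int = 10,
                 dim: int = 64, n_classes: int = 2):
        super().__init__()
        self.backbone = backbone                      # CNN -> (B, dim, H, W) features
        self.protos = nn.Parameter(torch.randn(n_protos, dim))  # prototype "cases"
        self.fc = nn.Linear(n_protos, n_classes, bias=False)    # evidence -> class

    def forward(self, x):
        f = self.backbone(x)                          # (B, dim, H, W)
        B = f.size(0)
        patches = f.flatten(2).transpose(1, 2)        # (B, H*W, dim) spatial patches
        # L2 distance from every image patch to every prototype
        d = torch.cdist(patches, self.protos.expand(B, -1, -1))  # (B, H*W, P)
        min_d, where = d.min(dim=1)                   # best-matching patch per prototype
        sim = torch.log((min_d + 1) / (min_d + 1e-4)) # large when a patch sits near a prototype
        return self.fc(sim), sim, where               # logits plus per-prototype evidence

# Toy usage on fake grayscale "scans" (all shapes here are illustrative).
backbone = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU())
model = ProtoClassifier(backbone)
logits, sim, where = model(torch.randn(4, 1, 28, 28))
print(logits.shape, sim.shape)  # torch.Size([4, 2]) torch.Size([4, 10])
```

During training, the prototypes can be projected onto the nearest feature patches of real training images, so that each prototype corresponds to an actual past case that can be shown to the radiologist alongside the prediction.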
Conference Presentation
© (2024) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Cynthia Rudin "Interpretable deep learning in medical imaging", Proc. SPIE 12927, Medical Imaging 2024: Computer-Aided Diagnosis, 1292702 (3 April 2024); https://doi.org/10.1117/12.3016192