Visualization approach to assess the robustness of neural networks for medical image classification

10 March 2020
Deep learning methods have shown high performance potential for medical image analysis [1], particularly classification for computer-aided diagnosis. However, explaining their decisions is not trivial, and doing so could help achieve better results and establish how far these methods can be trusted. Many methods have been developed to explain the decisions of classifiers [2]–[7], but their outputs are not always meaningful and remain difficult to interpret. In this paper, we adapted the method of [8] to 3D medical images to determine on which basis a network classifies quantitative data. Quantitative data can be obtained from different medical imaging modalities, for example binding potential maps obtained with positron emission tomography (PET) or gray matter (GM) probability maps extracted from structural magnetic resonance imaging (MRI). Our application focuses on the detection of Alzheimer’s disease (AD), a neurodegenerative syndrome that induces GM atrophy. As inputs, we used GM probability maps, a proxy for atrophy, extracted from T1-weighted (T1w) MRI. The process includes two distinct parts: first, a convolutional neural network (CNN) is trained to differentiate AD patients from control subjects; then, the weights of the network are fixed and a mask is optimized so that, when applied to the inputs, it prevents the network from correctly classifying the subjects it had previously classified correctly. The goals of this work are to assess whether the visualization method initially developed for natural images is suitable for 3D medical images and to exploit it to better understand the decisions made by classification networks. This work is original and has not been submitted elsewhere.
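The second step of the process described above can be sketched as an optimization over a perturbation mask against a frozen classifier. The following is a minimal toy illustration, not the paper's implementation: a linear score over a flattened 3D volume stands in for the frozen CNN so that the mask gradient is analytic, and the step size, sparsity weight, and volume shape are all assumed values for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained, frozen classifier: a linear score over a 3D
# volume (hypothetical; the paper uses a frozen CNN on GM maps).
shape = (8, 8, 8)
w = rng.normal(size=shape)          # frozen "network" weights
x = rng.normal(size=shape)          # one fake GM probability map

def score(volume):
    """Classifier confidence for the correct class (linear toy model)."""
    return float((w * volume).sum())

# Mask m in [0, 1]: m = 1 deletes a voxel, m = 0 leaves it untouched.
m = np.zeros(shape)
lr, lam = 0.05, 0.01                # step size and sparsity weight (assumed)

for _ in range(100):
    # Loss: score of the perturbed input x * (1 - m) plus an L1 penalty
    # keeping the mask sparse. Its gradient w.r.t. m is -w * x + lam.
    grad = -w * x + lam
    m = np.clip(m - lr * grad, 0.0, 1.0)

# Voxels where m is close to 1 are those whose deletion most lowers the
# class score, so m can be read as a saliency map.
saliency = m
```

With a real CNN, the same loop would be run with automatic differentiation (the mask as the only trainable tensor), but the principle is identical: minimize the correct-class score of the masked input while penalizing the mask's size.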
Elina Thibeau-Sutre, Olivier Colliot, Didier Dormont, Ninon Burgos, "Visualization approach to assess the robustness of neural networks for medical image classification," Proc. SPIE 11313, Medical Imaging 2020: Image Processing, 113131J (10 March 2020); https://doi.org/10.1117/12.2548952