Deep convolutional neural networks (DCNNs) are currently the most commonly used approach in medical image analysis tasks; however, they have largely been used as black-box predictors, offering no explanation of the reasoning behind their decisions. Explainable artificial intelligence (XAI) is an emerging subfield of AI that seeks to understand how models make their decisions. In this work, we applied XAI visualization to gain insight into the features learned by a DCNN trained to classify estrogen receptor status (ER+ vs. ER-) based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) of the breast. Our data set contained 1395 ER+ regions of interest (ROIs) and 729 ER- ROIs from 148 patients, each with a pre-contrast scan and a minimum of two post-contrast scans. We developed a novel transfer-trained dual-domain DCNN architecture, derived from the AlexNet model trained on ImageNet data, that received the spatial (across the volume) and dynamic (across the acquisition sequence) components of each DCE-MRI ROI as input. The network's performance was evaluated with the area under the receiver operating characteristic curve (AUC) from leave-one-case-out cross-validation. To visualize the DCNN learning, we applied XAI techniques, including the Integrated Gradients attribution method and the SmoothGrad noise reduction algorithm, to the ROIs from the training set. We observed that our DCNN learned relevant features in both the spatial and dynamic domains, although the contributing features differed between the two domains. We also visualized DCNN learning of irrelevant features resulting from pre-processing artifacts. These observations motivate new approaches to pre-processing our data and training our DCNN.
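To make the attribution step concrete, the sketch below shows one way an Integrated Gradients computation with SmoothGrad averaging, as named above, could be written in plain PyTorch for a classifier with a single image input. The function names, the black-image baseline, the batch-of-one indexing, and the hyperparameters (50 integration steps, 15 noise samples, 10% noise level) are illustrative assumptions rather than the authors' configuration; for a dual-domain network, the same routine would be applied to each input branch separately.

```python
# Illustrative sketch (not the authors' code): Integrated Gradients attribution
# with SmoothGrad averaging for a PyTorch classifier taking a single image input.
import torch

def integrated_gradients(model, x, target, baseline=None, steps=50):
    """Approximate Integrated Gradients along a straight path from a baseline to x."""
    if baseline is None:
        baseline = torch.zeros_like(x)            # assumed black-image baseline
    alphas = torch.linspace(1.0 / steps, 1.0, steps, device=x.device)
    total_grad = torch.zeros_like(x)
    for a in alphas:
        point = (baseline + a * (x - baseline)).detach().requires_grad_(True)
        score = model(point)[0, target]           # target-class logit, batch of one
        grad, = torch.autograd.grad(score, point)
        total_grad += grad
    # Riemann-sum approximation of the path integral, scaled by (x - baseline).
    return (x - baseline) * total_grad / steps

def smoothgrad_ig(model, x, target, n_samples=15, noise_frac=0.1, **ig_kwargs):
    """SmoothGrad: average the IG maps of several noisy copies of the input."""
    stdev = noise_frac * (x.max() - x.min())
    maps = [integrated_gradients(model, x + stdev * torch.randn_like(x),
                                 target, **ig_kwargs)
            for _ in range(n_samples)]
    return torch.stack(maps).mean(dim=0)
```

The resulting attribution map has the same shape as the input ROI and can be overlaid on it to inspect which regions drove the prediction.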
We propose an intensity-based technique to homogenize dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data acquired at six institutions. A total of 234 T1-weighted MRI volumes acquired at the peak of the kinetic curve were obtained to study the homogenization and unsupervised deep-learning feature extraction techniques. The homogenization uses reference regions of adipose breast tissue, because adipose tissue is less susceptible to variations caused by cancer and contrast medium. For the homogenization, the moments of the distribution of reference pixel intensities were matched across the cases, and the remaining intensity distributions were transformed accordingly. A deep stacked autoencoder with six convolutional layers was trained to reconstruct a 128×128 MRI slice and to extract a 1024-dimensional latent space. We used the latent space from the stacked autoencoder to extract deep embedding features that represented the global and local structures of the imaging data. An analysis using spectral embedding of the latent space showed that, before homogenization, the dominant factor was the dependency on the imaging center; after homogenization, the intensity histograms of cases from different centers were matched and the center dependency was reduced. The results of the feature analysis indicate that the proposed homogenization approach may lessen the effects of different imaging protocols and scanners in MRI, which may in turn allow more consistent quantitative analysis of radiomic information across patients and improve the generalizability of machine learning methods across clinical sites. Further study is underway to evaluate the performance of machine learning models with and without image homogenization.
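As a concrete illustration of the reference-region moment matching described above, the sketch below linearly rescales each case so that the mean and standard deviation of its adipose reference pixels match target moments pooled over all cases. Matching only the first two moments, the linear mapping, and the function names (`pooled_reference_moments`, `homogenize_case`) are assumptions made for illustration; the actual procedure may match additional moments or use a different transformation.

```python
# Illustrative sketch (not the authors' code): intensity homogenization by
# matching moments of adipose reference-region intensities across cases.
import numpy as np

def pooled_reference_moments(volumes, ref_masks):
    """Target moments: mean/std of reference (adipose) pixels pooled over all cases."""
    pooled = np.concatenate([v[m] for v, m in zip(volumes, ref_masks)])
    return pooled.mean(), pooled.std()

def homogenize_case(volume, ref_mask, target_mean, target_std, eps=1e-6):
    """Linearly rescale one volume so its reference-region mean/std match the targets."""
    ref = volume[ref_mask]                          # adipose reference intensities
    scale = target_std / (ref.std() + eps)          # match the second moment
    return (volume - ref.mean()) * scale + target_mean   # match the first moment

# Example usage with hypothetical data (loader not shown):
# volumes, ref_masks = load_cases()
# t_mean, t_std = pooled_reference_moments(volumes, ref_masks)
# homogenized = [homogenize_case(v, m, t_mean, t_std)
#                for v, m in zip(volumes, ref_masks)]
```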