Magnetic Resonance Imaging (MRI) is increasingly used to localize prostate cancer, but the subtle imaging features that distinguish cancer from normal tissue make MRI interpretation challenging. Computational approaches have been proposed to detect prostate cancer, yet variation in intensity distributions across different scanners, and even across acquisitions on the same scanner, poses significant challenges to image analysis with computational tools such as deep learning. In this study, we developed a conditional generative adversarial network (GAN) to normalize intensity distributions on prostate MRI. We evaluated our GAN normalization in three ways. First, we qualitatively compared the intensity distributions of GAN-normalized images to those of statistically normalized images. Second, we visually examined the GAN-normalized images to confirm that the appearance of the prostate and other anatomical structures was preserved. Finally, we quantitatively evaluated the performance of deep learning holistically nested edge detection (HED) networks in identifying prostate cancer on MRI when trained on raw, statistically normalized, and GAN-normalized images. We found that the detection network trained on GAN-normalized images achieved accuracy and area under the curve (AUC) scores similar to those of the networks trained on raw and statistically normalized images. Conditional GANs may therefore be an effective tool for normalizing intensity distributions on MRI, and their outputs can be used to train downstream deep learning models.
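The abstract does not include an implementation. As a rough illustration of the kind of conditional GAN it describes, the following pix2pix-style sketch pairs a toy encoder-decoder generator with a discriminator conditioned on the raw slice. All architectures, loss weights (e.g., the L1 weight of 100), and tensor shapes are assumptions for illustration, not the authors' configuration, and we assume here, purely for illustration, that statistically normalized images serve as paired training targets.

```python
# Illustrative pix2pix-style conditional GAN for MRI intensity
# normalization. Shapes, depths, and loss weights are assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy encoder-decoder mapping a raw slice to a normalized slice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),
            nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style critic conditioned on the raw input slice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),
        )
    def forward(self, raw, candidate):
        # Condition on the raw slice by channel concatenation.
        return self.net(torch.cat([raw, candidate], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()

raw = torch.randn(4, 1, 64, 64)     # raw-intensity slices (dummy data)
target = torch.randn(4, 1, 64, 64)  # assumed normalized targets

# Discriminator update: real (raw, target) pairs vs. generated pairs.
fake = G(raw).detach()
real_logits, fake_logits = D(raw, target), D(raw, fake)
d_loss = (adv_loss(real_logits, torch.ones_like(real_logits))
          + adv_loss(fake_logits, torch.zeros_like(fake_logits)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator update: fool D while staying close to the target in L1.
fake = G(raw)
fake_logits = D(raw, fake)
g_loss = (adv_loss(fake_logits, torch.ones_like(fake_logits))
          + 100.0 * l1_loss(fake, target))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The L1 term is the standard pix2pix device for keeping the generated image anatomically faithful to its target while the adversarial term sharpens the intensity statistics; whether the authors used this exact objective is not stated in the abstract.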
Deep learning models have the potential to improve prediction of the presence of invasive breast cancer on MR images. Here we present a transfer learning framework for classifying dynamic contrast-enhanced MR images into two classes: those containing invasive breast carcinoma and those that are noninvasive (including benign findings and indolent cancers). We built and trained several models based on a pre-trained VGG16 network and found that fine-tuning only the last convolutional block is the best strategy for our small-data scenario. Our model was trained and evaluated on 81 female patients who had a pre-operative MRI followed by surgery; all lesions have ground-truth labels from the surgical pathology reports. We used a bounding box to generate cropped images centered on each lesion and extracted multiple slices per lesion. Our network achieved an AUC of 0.83±0.05, a sensitivity of 0.83±0.16, and a specificity of 0.71±0.11 in predicting the presence of invasive cancer in the breast. We compared our results with state-of-the-art methods and found that our model is more accurate in distinguishing invasive from noninvasive lesions. Finally, visual inspection of class activation maps allowed us to better understand the decision process of our deep learning classifiers.
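The abstract names the key design choice, fine-tuning only the last convolutional block of a pre-trained VGG16, and a minimal sketch of that setup in PyTorch might look as follows. The binary-head layer sizes, learning rate, input resolution, and the use of torchvision's feature indices 24-30 for block5 are assumptions for illustration, not the paper's settings.

```python
# Minimal sketch: VGG16 transfer learning with only the last
# convolutional block (block5) unfrozen, plus a small binary head.
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained VGG16 (torchvision >= 0.13 weights API).
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Freeze all feature layers except block5 (indices 24-30 in
# torchvision's layout: conv5_1..conv5_3, their ReLUs, and the pool).
for idx, layer in enumerate(vgg.features):
    trainable = idx >= 24
    for p in layer.parameters():
        p.requires_grad = trainable

# Replace the ImageNet classifier with a small binary head
# (invasive vs. noninvasive); layer sizes here are guesses.
vgg.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 64),
    nn.ReLU(inplace=True),
    nn.Dropout(0.5),
    nn.Linear(64, 1),  # single logit, paired with BCEWithLogitsLoss
)

# Optimize only the parameters that remain trainable.
optimizer = torch.optim.Adam(
    (p for p in vgg.parameters() if p.requires_grad), lr=1e-5)
criterion = nn.BCEWithLogitsLoss()

# One illustrative step on a dummy batch of lesion-centered crops.
x = torch.randn(2, 3, 224, 224)
y = torch.tensor([[1.0], [0.0]])  # 1 = invasive, 0 = noninvasive
loss = criterion(vgg(x), y)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```

Freezing the early blocks preserves the generic edge and texture filters learned on ImageNet while letting block5 adapt to contrast-enhanced breast MRI, which matches the small-data rationale the abstract gives for fine-tuning only the final block.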