We investigated the additive role of breast parenchymal stroma in the computer-aided diagnosis (CADx) of tumors on full-field digital mammograms (FFDM) by combining image information from the tumor and the contralateral normal parenchyma via deep learning. The study included 182 breast lesions, of which 106 were malignant and 76 were benign. All FFDM images were acquired using a GE 2000D Senographe system and retrospectively collected under an Institutional Review Board (IRB)-approved, Health Insurance Portability and Accountability Act (HIPAA)-compliant protocol. Convolutional neural networks (CNNs) with transfer learning were used to extract image-based characteristics of the lesions and of the parenchymal patterns (on the contralateral breast) directly from the FFDM images. Classification performance in the task of distinguishing between malignant and benign cases was evaluated and compared between analysis of the tumor alone and combined analysis of the tumor and parenchymal patterns, with the area under the receiver operating characteristic (ROC) curve (AUC) as the figure of merit. Using lesion image data alone, the transfer learning method yielded an AUC of 0.871 (SE = 0.025); combining information from both the lesion and parenchyma analyses yielded an AUC of 0.911 (SE = 0.021). This improvement was statistically significant (p = 0.0362). We therefore conclude that using CNNs with transfer learning to combine extracted image information from both the tumor and the parenchyma may improve breast cancer diagnosis.
Proc. SPIE 10575, Medical Imaging 2018: Computer-Aided Diagnosis
KEYWORDS: Breast, Computer aided diagnosis and therapy, Breast cancer, Convolutional neural networks, Databases, Feature extraction, Image classification, Digital mammography, Digital breast tomosynthesis
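The abstract above combines classifier outputs from the lesion analysis and the parenchyma analysis and compares AUC values. A minimal sketch of that idea, using synthetic scores in place of real CNN outputs (the score distributions, fusion rule, and case counts here are illustrative assumptions, not the study's actual method):

```python
import numpy as np

def auc_from_scores(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Fraction of (positive, negative) pairs ranked correctly; ties count 0.5.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical per-case outputs (posterior probability of malignancy) from a
# lesion-only model and a parenchyma-only model -- synthetic for illustration.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=100)
lesion_scores = np.clip(labels * 0.3 + rng.normal(0.4, 0.2, 100), 0, 1)
parenchyma_scores = np.clip(labels * 0.2 + rng.normal(0.45, 0.25, 100), 0, 1)

# One simple way to combine the two analyses: average the model outputs
# (score-level fusion), then evaluate AUC on the fused scores.
combined = 0.5 * (lesion_scores + parenchyma_scores)

print("lesion-only AUC:", round(auc_from_scores(lesion_scores, labels), 3))
print("combined AUC:  ", round(auc_from_scores(combined, labels), 3))
```

Score averaging is only one fusion option; feature-level concatenation before a single classifier is another common choice in CADx pipelines.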
With the growing adoption of digital breast tomosynthesis (DBT) in breast cancer screening protocols, it is important to compare the performance of computer-aided diagnosis (CAD) for breast lesions on DBT images with that on conventional full-field digital mammography (FFDM). In this study, we retrospectively collected FFDM and DBT images of 78 lesions from 76 patients; each lesion was biopsy-proven as either malignant or benign. A square region of interest (ROI) was placed to fully cover the lesion on each FFDM image, DBT synthesized 2D image, and DBT key slice image in the craniocaudal (CC) and mediolateral oblique (MLO) views. Features were extracted from each ROI using a pre-trained convolutional neural network (CNN). These features were then input to a support vector machine (SVM) classifier, and the area under the ROC curve (AUC) was used as the figure of merit. We found that in both the CC and MLO views, the synthesized 2D image performed best in the task of lesion characterization (AUC = 0.814 and AUC = 0.881, respectively). The small database size was a key limitation of this study and could lead to overfitting of the SVM classifier. In future work, we plan to expand this dataset and explore more robust deep learning methodology, such as fine-tuning.
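The pipeline this abstract describes (pre-trained CNN features fed to an SVM, evaluated by AUC) can be sketched as follows. The feature matrix here is synthetic, standing in for pooled activations of a pre-trained network; the kernel choice, cross-validation scheme, and feature dimensionality are assumptions for illustration, not the study's reported settings:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for CNN features: in practice each ROI would be passed through a
# pre-trained network and its pooled activations used as the feature vector.
rng = np.random.default_rng(42)
n_lesions, n_features = 78, 512
labels = rng.integers(0, 2, size=n_lesions)            # 1 = malignant, 0 = benign
features = rng.normal(size=(n_lesions, n_features)) + labels[:, None] * 0.2

# Cross-validated decision scores partially guard against the overfitting
# risk the abstract notes for a dataset of only 78 lesions.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_predict(clf, features, labels, cv=5,
                           method="decision_function")

print("cross-validated AUC:", round(roc_auc_score(labels, scores), 3))
```

With many more features (512) than cases (78), a linear kernel and feature standardization are conservative choices; the abstract's caution about overfitting is exactly this regime.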