Purpose: To compare the performance of four deep active learning (DAL) approaches to optimize label efficiency for training diabetic retinopathy (DR) classification deep learning models.
Approach: 88,702 color retinal fundus photographs from 44,351 patients with DR grades from the publicly available EyePACS dataset were used. Four DAL approaches [entropy sampling (ES), Bayesian active learning by disagreement (BALD), core set, and adversarial active learning (ADV)] were compared to conventional naive random sampling. Models were compared at various dataset sizes using Cohen's kappa (CK) and area under the receiver operating characteristic curve on an internal test set of 10,000 images. An independent test set of 3662 fundus photographs was used to assess generalizability.
Results: On the internal test set, 3 out of 4 DAL methods resulted in statistically significant performance improvements (p < 1 × 10⁻⁴) compared to random sampling for multiclass classification, with the largest observed differences in CK ranging from 0.051 for BALD to 0.053 for ES. Improvements in multiclass classification generalized to the independent test set, with the largest differences in CK ranging from 0.126 to 0.135. However, no statistically significant improvements were seen for binary classification. Performance was similar across DAL methods, except for ADV, which performed similarly to random sampling.
Conclusions: Uncertainty-based and feature descriptor-based deep active learning methods outperformed random sampling on both the internal and independent test sets at multiclass classification. However, binary classification performance remained similar across random sampling and active learning methods.
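For readers unfamiliar with the acquisition step in uncertainty-based active learning, the following is a minimal sketch of entropy sampling (ES), assuming a softmax classifier and an unlabeled image pool; the function name, the query budget, and all variable names are illustrative and are not taken from the paper's implementation.

```python
import numpy as np

def entropy_sampling(probs: np.ndarray, budget: int) -> np.ndarray:
    """Select the `budget` unlabeled samples with the highest predictive entropy.

    probs: (n_samples, n_classes) softmax outputs of the current model
           on the unlabeled pool.
    Returns indices of the samples to send for labeling in this round.
    """
    eps = 1e-12                                       # avoid log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(-entropy)[:budget]              # most uncertain first

# Hypothetical usage: query a fixed number of fundus images per round,
# label them, retrain, and repeat until the labeling budget is exhausted.
# query_idx = entropy_sampling(model_probs_on_pool, budget=1000)
```

The other DAL methods differ only in how they score the pool: BALD uses disagreement across stochastic forward passes, core set selects points that cover the feature space, and ADV scores samples by their proximity to the decision boundary.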
Automated segmentation of medical imaging is of broad interest to clinicians and machine learning researchers alike. The goal of segmentation is to increase the efficiency and simplicity of visualizing and quantifying regions of interest within a medical image. Image segmentation is a difficult task because of multiparametric heterogeneity within the images, an obstacle that has proven especially challenging in efforts to automate the segmentation of brain lesions from non-contrast head computed tomography (CT). In this research, we have experimented with multiple available deep learning architectures to segment different phenotypes of hemorrhagic lesions found after moderate to severe traumatic brain injury (TBI). These include intraparenchymal hemorrhage (IPH), subdural hematoma (SDH), epidural hematoma (EDH), and traumatic contusions. We were able to achieve an optimal Dice coefficient score of 0.94 using the UNet++ 2D architecture with the Focal Tversky loss function in intraparenchymal hemorrhage (IPH) cases, an increase from 0.85 using UNet 2D with a binary cross-entropy loss function. Furthermore, using the same setting, we achieved Dice coefficient scores of 0.90 and 0.86 in cases of extra-axial bleeds and traumatic contusions, respectively.
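As a point of reference for the loss function named above, here is a minimal PyTorch-style sketch of a Focal Tversky loss for binary segmentation masks; the hyperparameter values and function signature are common defaults chosen for illustration and are not necessarily those used in this work.

```python
import torch

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-6):
    """Focal Tversky loss for binary segmentation.

    pred:   (N, H, W) predicted foreground probabilities in [0, 1].
    target: (N, H, W) binary ground-truth masks.
    alpha and beta weight false positives and false negatives, and gamma
    focuses training on hard examples (gamma = 1, alpha = beta = 0.5
    reduces to the standard Dice/soft-Dice loss).
    """
    pred = pred.reshape(pred.shape[0], -1)
    target = target.reshape(target.shape[0], -1)
    tp = (pred * target).sum(dim=1)                   # soft true positives
    fp = (pred * (1 - target)).sum(dim=1)             # soft false positives
    fn = ((1 - pred) * target).sum(dim=1)             # soft false negatives
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return torch.pow(1.0 - tversky, gamma).mean()
```

The reported Dice coefficient metric is the special case 2·TP / (2·TP + FP + FN), computed on thresholded predictions.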
KEYWORDS: Magnetic resonance imaging, Breast, 3D scanning, Scanners, 3D image processing, Convolutional neural networks, 3D modeling, Computed tomography, Machine learning, Medical imaging
Categorization of radiological images according to characteristics such as modality, scanner parameters, body part, etc., is important for quality control, clinical efficiency, and research. The metadata associated with images stored in the DICOM format reliably captures scanner settings such as tube current in CT or echo time (TE) in MRI. Other parameters, such as image orientation, body part examined, and presence of intravenous contrast, are not inherent to the scanner settings and therefore require user input, which is prone to human error. There is a general need for automated approaches that will appropriately categorize images, even with parameters that are not inherent to the scanner settings. These approaches should be able to process both planar 2D images and full 3D scans. In this work, we present a deep learning based approach for automatically detecting one such parameter: the presence or absence of intravenous contrast in 3D MRI scans. Contrast is manually injected by radiology staff during the imaging examination, and its presence cannot be automatically recorded in the DICOM header by the scanner. Our classifier is a convolutional neural network (CNN) based on the ResNet architecture. Our data consisted of 1000 breast MRI scans (500 scans with and 500 scans without intravenous contrast), split 80%/20% for training and testing the CNN, respectively. The labels for the scans were obtained from the series descriptions created by certified radiological technologists. Preliminary results of our classifier are very promising, with an area under the ROC curve (AUC) of 0.98 and sensitivity and specificity of 1.0 and 0.9, respectively (at the optimal ROC cut-off point), demonstrating potential usefulness in both clinical and research settings.
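For illustration only, the sketch below sets up a ResNet-based binary classifier of the general kind described above, assuming 2D MRI slices with a single intensity channel and a contrast/no-contrast label; the ResNet depth, input handling (2D slices versus a 3D volume), and training hyperparameters are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet-18 backbone adapted for single-channel MRI input and a binary output head.
model = models.resnet18(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()                     # binary contrast / no-contrast
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(batch_images, batch_labels):
    """One optimization step.

    batch_images: (N, 1, H, W) float tensor of MRI slices.
    batch_labels: (N,) tensor of 0/1 labels (no contrast / contrast).
    """
    optimizer.zero_grad()
    logits = model(batch_images).squeeze(1)
    loss = criterion(logits, batch_labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```

Scan-level predictions from such a slice-wise model would then be obtained by aggregating per-slice scores (for example, by averaging), with the AUC computed on the held-out 20% test split.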