KEYWORDS: Magnetic resonance imaging, Breast, 3D scanning, Scanners, 3D image processing, Convolutional neural networks, 3D modeling, Computed tomography, Machine learning, Medical imaging
Categorization of radiological images according to characteristics such as modality, scanner parameters, body part, etc., is important for quality control, clinical efficiency, and research. The metadata associated with images stored in the DICOM format reliably captures scanner settings such as tube current in CT or echo time (TE) in MRI. Other parameters, such as image orientation, body part examined, and presence of intravenous contrast, however, are not inherent to the scanner settings and therefore require user input, which is prone to human error. There is a general need for automated approaches that can appropriately categorize images, even by parameters that are not inherent to the scanner settings. These approaches should be able to process both planar 2D images and full 3D scans. In this work, we present a deep learning based approach for automatically detecting one such parameter: the presence or absence of intravenous contrast in 3D MRI scans. Contrast is manually injected by radiology staff during the imaging examination, and its presence cannot be automatically recorded in the DICOM header by the scanner. Our classifier is a convolutional neural network (CNN) based on the ResNet architecture. Our data consisted of 1000 breast MRI scans (500 with and 500 without intravenous contrast), split 80%/20% for training and testing the CNN, respectively. The labels for the scans were obtained from the series descriptions created by certified radiological technologists. Preliminary results of our classifier are very promising, with an area under the ROC curve (AUC) of 0.98 and sensitivity and specificity of 1.0 and 0.9, respectively (at the optimal ROC cut-off point), demonstrating potential usefulness in both clinical and research settings.
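The abstract does not specify how the ResNet backbone is applied to 3D volumes. A minimal sketch of one plausible design follows, assuming a ResNet-18 variant operating on single-channel 2D slices whose logits are averaged into a scan-level prediction; the slice-stack strategy, input size, and ResNet depth are all assumptions for illustration, not the authors' published configuration.

```python
# Hypothetical sketch: ResNet-18 adapted for single-channel MRI slices,
# with slice-level logits averaged into one scan-level contrast score.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ContrastClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = resnet18(weights=None)
        # MRI slices are single-channel, unlike 3-channel natural images.
        self.backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                        padding=3, bias=False)
        # Two classes: contrast present / contrast absent.
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 2)

    def forward(self, volume):
        # volume: (num_slices, 1, H, W); score each slice, then average.
        slice_logits = self.backbone(volume)
        return slice_logits.mean(dim=0, keepdim=True)

model = ContrastClassifier()
dummy_scan = torch.randn(32, 1, 224, 224)  # toy scan of 32 slices
print(model(dummy_scan).shape)  # torch.Size([1, 2])
```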
Knee-related injuries, including meniscal tears, are common in young athletes and require accurate diagnosis and
appropriate surgical intervention. Although confidence in the detection of meniscal tears should be high given proper technique and skill, this task continues to challenge many inexperienced radiologists. The purpose of our study
was to automate detection of meniscal tears of the knee using a computer-aided detection (CAD) algorithm. Automated
segmentation of the sagittal T1-weighted MR imaging sequences of the knee in 28 patients with diagnoses of meniscal
tears was performed using morphologic image processing in a 3-step process including cropping, thresholding, and
application of morphological constraints. After meniscal segmentation, abnormal linear meniscal signal was extracted
through a second thresholding process. The results of this process were validated by comparison with the interpretations
of 2 board-certified musculoskeletal radiologists. The automated meniscal extraction algorithm successfully performed region-of-interest selection, thresholding, and object shape constraint tasks to produce a convex
image isolating the menisci in more than 69% of the 28 cases. A high correlation was also noted between the CAD
algorithm and human observer results in identification of complex meniscal tears. Our initial investigation indicates
considerable promise for automatic detection of simple and complex meniscal tears of the knee using the CAD
algorithm. This observation poses interesting possibilities for increasing radiologist productivity and confidence,
improving patient outcomes, and applying more sophisticated CAD algorithms to orthopedic imaging tasks.
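For illustration, a minimal sketch of the 3-step pipeline named above (crop, threshold, apply morphological constraints) plus the second thresholding step is given below. The crop coordinates, threshold choices, and structuring-element sizes are placeholders, not the study's actual parameters.

```python
# Illustrative sketch of crop -> threshold -> morphological constraints,
# followed by a second threshold for abnormal bright linear signal.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, disk, remove_small_objects

def segment_meniscus(slice_2d, crop=(slice(100, 200), slice(80, 220))):
    # Step 1: crop to a region of interest around the meniscus.
    roi = slice_2d[crop]
    # Step 2: threshold; menisci appear dark on T1-weighted images.
    mask = roi < threshold_otsu(roi)
    # Step 3: morphological constraints to suppress non-meniscal shapes.
    mask = binary_opening(mask, disk(2))
    mask = remove_small_objects(mask, min_size=50)
    # Second thresholding: tear signal appears as relatively bright
    # pixels inside the otherwise dark meniscal mask.
    tear_candidates = mask & (roi > np.percentile(roi, 90))
    return mask, tear_candidates
```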
Over the past decade, several computerized tools have been developed for detection of lung nodules and for providing
volumetric analysis. Incidentally detected lung nodules have traditionally been followed over time by measurements of
their axial dimensions on CT scans to ensure stability or document progression. A recently published article by the
Fleischner Society offers guidelines on the management of incidentally detected nodules based on size criteria. For this
reason, differences in measurements obtained by automated tools from various vendors may have significant
implications for management, yet the degree of variability in these measurements is not well understood. The goal of this
study is to quantify the differences in nodule maximum diameter and volume among different automated analysis
software. Using a dataset of lung scans obtained with both "ultra-low" and conventional doses, we identified a subset of
nodules in each of five size-based categories. Using automated analysis tools provided by three different vendors, we
obtained size and volumetric measurements on these nodules and compared these data using descriptive statistics as well as ANOVA and t-test analyses. Results showed significant differences in nodule maximum diameter measurements among
the various automated lung nodule analysis tools but no significant differences in nodule volume measurements. These
data suggest that when using automated commercial software, volume measurements may be a more reliable marker of
tumor progression than maximum diameter. The data also suggest that volumetric nodule measurements may be
relatively reproducible among various commercial workstations, in contrast to the variability documented when
performing human mark-ups, as seen in the LIDC (Lung Image Database Consortium) study.
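A hedged sketch of the statistical comparison described above follows: a one-way ANOVA across three vendors' tools plus pairwise paired t-tests. The measurement arrays are made-up illustrative values, not the study's data.

```python
# Sketch of the vendor-comparison statistics (illustrative values only).
import numpy as np
from scipy import stats

# Maximum-diameter measurements (mm) of the same five nodules as
# reported by three hypothetical automated tools.
vendor_a = np.array([6.1, 8.4, 10.2, 12.9, 15.5])
vendor_b = np.array([5.7, 8.9, 10.8, 13.6, 16.2])
vendor_c = np.array([6.5, 8.0, 9.9, 12.4, 15.0])

# One-way ANOVA: does mean diameter differ among the tools?
f_stat, p_anova = stats.f_oneway(vendor_a, vendor_b, vendor_c)
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.3f}")

# Paired t-tests for each vendor pair (same nodules measured twice).
for name, (x, y) in {"A vs B": (vendor_a, vendor_b),
                     "A vs C": (vendor_a, vendor_c),
                     "B vs C": (vendor_b, vendor_c)}.items():
    t_stat, p_val = stats.ttest_rel(x, y)
    print(f"{name}: t={t_stat:.2f}, p={p_val:.3f}")
```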
KEYWORDS: Digital photography, Photography, 3D image processing, 3D image reconstruction, Computed tomography, Diagnostics, Software, Statistical analysis, Radiology, Facial recognition systems
3D and multi-planar reconstruction of CT images have become indispensable in the routine practice of diagnostic
imaging. These tools not only enhance our ability to diagnose diseases, but can also assist in therapeutic planning. The technology used to create them can also render surface reconstructions, which may have the undesired
potential of providing sufficient detail to allow recognition of facial features and consequently patient identity, leading
to violation of patient privacy rights as described in the HIPAA (Health Insurance Portability and Accountability Act)
legislation. The purpose of this study is to evaluate whether 3D reconstructed images of a patient's facial features can
indeed be used to reliably or confidently identify that specific patient. Surface-reconstructed images of the study participants were created and used as candidates for matching with digital photographs of the participants. Data analysis was
performed to determine the ability of observers to successfully match 3D surface reconstructed images of the face with
facial photographs. The amount of time required to perform the match was recorded as well. We also plan to
investigate the ability of digital masks or physical drapes to conceal patient identity. The recently expressed concerns
over the inability to truly "anonymize" CT (and MRI) studies of the head/face/brain are yet to be tested in a prospective
study. We believe that it is important to establish whether these reconstructed images are a "threat" to patient
privacy/security and if so, whether minimal interventions from a clinical perspective can substantially reduce this
possibility.
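As a rough illustration of one such minimal intervention, the sketch below applies a simple "digital mask" by replacing the anterior-most slab of a head CT volume with air so that surface reconstruction no longer reveals facial features. The axis convention and cut fraction are assumptions for illustration, not parameters taken from the study.

```python
# Hypothetical digital-mask (defacing) sketch for a head CT volume.
import numpy as np

def deface_volume(ct_volume, anterior_fraction=0.25, air_hu=-1000):
    """Set the anterior-most slab of each axial slice to air.

    ct_volume: 3D array in Hounsfield units, axes (z, y, x), with the
    patient's face assumed to lie toward low y indices.
    """
    masked = ct_volume.copy()
    cut = int(ct_volume.shape[1] * anterior_fraction)
    masked[:, :cut, :] = air_hu  # replace the facial surface with air
    return masked

# Toy usage: a random array stands in for a real head CT.
volume = np.random.randint(-1000, 1500, size=(64, 128, 128)).astype(np.int16)
defaced = deface_volume(volume)
```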