KEYWORDS: Digital breast tomosynthesis, Computer aided diagnosis and therapy, Reconstruction algorithms, 3D image processing, Computer-aided diagnosis, Detection and tracking algorithms, 3D image reconstruction, 3D image enhancement, Breast, Digital mammography, Deep learning, Convolutional neural networks, Mammography, 3D displays, Image enhancement
In a typical 2D mammography workflow, a computer-aided detection (CAD) algorithm serves as a second reader, producing marks for a radiologist to review. In 3D digital breast tomosynthesis (DBT), displaying CAD detections at multiple reconstruction heights would increase image browsing and interpretation time. We propose an alternative approach in which an algorithm automatically identifies suspicious regions of interest in reconstructed 3D DBT slices and merges the findings into the corresponding 2D synthetic projection image, which is then reviewed. The resulting enhanced synthetic 2D image combines the benefits of a familiar 2D breast view with the superior appearance of suspicious locations taken from the 3D slices. Moreover, clicking on a suspicious 2D location brings up the corresponding 3D region in the DBT volume, allowing navigation between the 2D and 3D images. We explored the use of these enhanced synthetic images in a concurrent-read paradigm in a study with 5 readers and 30 breast exams. The introduction of the enhanced synthetic view reduced the radiologists' average interpretation time by 5.4%, increased sensitivity by 6.7%, and increased specificity by 15.6%.
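As a concrete illustration of the merging step, the sketch below pastes each suspicious region detected in the 3D slices into the 2D synthetic image and records a 2D-to-3D link for click-through navigation. The array shapes, the detection dictionary, and the hard patch replacement are illustrative assumptions; the actual blending used in the described system is not specified in the abstract.

```python
import numpy as np

def enhance_synthetic_view(synthetic_2d, dbt_volume, detections, half_size=32):
    """Paste each suspicious 3D region onto the 2D synthetic image.

    synthetic_2d : (H, W) float array, the 2D synthetic projection
    dbt_volume   : (S, H, W) float array, reconstructed DBT slices
    detections   : list of dicts with keys 'row', 'col', 'slice'
                   produced by a CAD algorithm run on the 3D slices
    """
    enhanced = synthetic_2d.copy()
    links = []  # 2D location -> 3D slice index, enables click-through navigation
    H, W = synthetic_2d.shape
    for det in detections:
        r0, r1 = max(det['row'] - half_size, 0), min(det['row'] + half_size, H)
        c0, c1 = max(det['col'] - half_size, 0), min(det['col'] + half_size, W)
        # Replace the 2D patch with the in-focus patch from the best DBT slice,
        # so the suspicious region keeps its superior 3D appearance.
        enhanced[r0:r1, c0:c1] = dbt_volume[det['slice'], r0:r1, c0:c1]
        links.append(((det['row'], det['col']), det['slice']))
    return enhanced, links
```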
KEYWORDS: Digital breast tomosynthesis, Reconstruction algorithms, Computer aided diagnosis and therapy, Tissues, Computer-aided diagnosis, Neural networks, Mammography, Digital mammography, Detection and tracking algorithms, Evolutionary algorithms, Deep learning, Convolutional neural networks, Breast, Image segmentation, Medical imaging
Computer-aided detection (CAD) has been used in screening mammography for many years and is likely to be utilized for digital breast tomosynthesis (DBT). Higher detection performance is desirable, as it may affect radiologists' decisions and clinical outcomes. Recently, algorithms based on deep convolutional architectures have been shown to achieve state-of-the-art performance in object classification and detection. Following this approach, we trained a deep convolutional neural network directly on patches sampled from two-dimensional mammography and reconstructed DBT volumes and compared its performance to a conventional CAD algorithm based on the computation and classification of hand-engineered features. Detection performance was evaluated on an independent test set of 344 DBT reconstructions (GE SenoClaire 3D, iterative reconstruction algorithm) containing 328 suspicious and 115 malignant soft-tissue densities, including masses and architectural distortions. Detection sensitivity was measured on a region-of-interest (ROI) basis at a rate of five detection marks per volume. Moving from the conventional to the deep learning approach increased ROI sensitivity from 0.832 ± 0.040 to 0.893 ± 0.033 for suspicious ROIs, and from 0.852 ± 0.065 to 0.930 ± 0.046 for malignant ROIs. These results indicate the high utility of deep feature learning for the analysis of DBT data and the method's high potential for broader medical image analysis tasks.
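For illustration, a small patch classifier of the kind described could look like the following PyTorch sketch; the architecture, patch size, and hyperparameters here are our own assumptions, not those of the evaluated network.

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Small CNN for classifying 2D patches (e.g., 64x64) sampled from
    mammograms or DBT slices as suspicious vs. normal."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(128 * 8 * 8, 2)  # suspicious / normal

    def forward(self, x):          # x: (N, 1, 64, 64)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = PatchCNN()
logits = model(torch.randn(4, 1, 64, 64))  # four random patches
```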
One widely accepted anatomical division of the prostate is into the central gland (CG) and the peripheral zone (PZ). In some clinical applications, separating the CG and PZ within the whole prostate is useful. For instance, in prostate cancer detection, radiologists want to know in which zone the cancer occurs. Another application is multiparametric MR tissue characterization. In prostate T2 MR images, automated differentiation of the CG and PZ is difficult due to the high intensity variation between them. Previously, we developed an automated prostate boundary segmentation system that was tested on large datasets and showed good performance. In this paper, using the pre-segmented prostate boundary as input, we propose an automated CG segmentation algorithm based on the Layered Optimal Graph Image Segmentation of Multiple Objects and Surfaces (LOGISMOS) framework. The designed LOGISMOS model incorporates both shape and topology information during deformation. We generated graph costs by training classifiers and used a coarse-to-fine search. The LOGISMOS framework guarantees a solution that is optimal with respect to the cost function and shape constraints. A five-fold cross-validation approach was applied to a training dataset of 261 images to optimize system performance and compare against a voxel-classification-based reference approach. After the best parameter settings were found, the system was tested on a dataset of another 261 images. The mean DSC of 0.81 on the test set indicates that our approach is promising for automated CG segmentation. The system's running time is about 15 seconds.
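The optimal graph search underlying LOGISMOS can be illustrated in a minimal 2-D, single-surface form: node weights are derived from column costs, and the surface is recovered from a minimum closed set computed via a minimum s-t cut. This sketch omits the layered multi-object model, the trained-classifier costs, and the coarse-to-fine search; the function and parameter names are ours.

```python
import networkx as nx
import numpy as np

def optimal_surface(cost, smooth=1):
    """Detect the optimal surface for an (X, K) cost array via the
    minimum-closed-set / minimum s-t cut construction that LOGISMOS builds on.

    cost[x, k] is the cost of placing the surface at height k in column x;
    `smooth` bounds the height change between neighboring columns."""
    X, K = cost.shape
    w = cost.astype(float).copy()
    w[:, 1:] = cost[:, 1:] - cost[:, :-1]   # w(x, k) = c(x, k) - c(x, k-1)
    w[:, 0] -= np.abs(cost).sum() + 1.0     # force a non-empty closed set

    G = nx.DiGraph()
    for x in range(X):
        for k in range(K):
            if w[x, k] < 0:
                G.add_edge('s', (x, k), capacity=-w[x, k])
            else:
                G.add_edge((x, k), 't', capacity=w[x, k])
            if k > 0:                        # intra-column arc, infinite capacity
                G.add_edge((x, k), (x, k - 1))
            for nb in (x - 1, x + 1):        # smoothness arcs, infinite capacity
                if 0 <= nb < X:
                    G.add_edge((x, k), (nb, max(k - smooth, 0)))
    _, (source_side, _) = nx.minimum_cut(G, 's', 't')
    closed = source_side - {'s'}
    # The surface height in each column is the top of the closed set
    return [max(k for k in range(K) if (x, k) in closed) for x in range(X)]

surface = optimal_surface(np.random.rand(8, 10))  # 8 columns, 10 heights
```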
Manual delineation of the prostate is a challenging task for a clinician due to the organ's complex and irregular shape. Furthermore, the need for precise delineation of the prostate boundary continues to grow. Planning for radiation therapy, MR-ultrasound fusion for image-guided biopsy, multi-parametric MRI tissue characterization, and context-based organ retrieval are examples where accurate prostate delineation can play a critical role in a successful patient outcome. A robust, fully automated prostate segmentation system is therefore desirable. In this paper, we present an automated prostate segmentation system for 3D MR images. The system segments the prostate in two steps: the prostate's position and size are first detected, and the boundary is then refined by a shape model. The detection approach is based on normalized gradient fields cross-correlation; it is fast, robust to intensity variation, and accurate enough to initialize a prostate mean-shape model. The refinement model is based on a graph-search framework that incorporates both shape and topology information during deformation. We generated the graph costs using trained classifiers and employed a coarse-to-fine search with region-specific classifier training. The proposed algorithm was developed using 261 training images and tested on another 290 cases. A mean DSC ranging from 0.89 to 0.91, depending on the evaluation subset, demonstrates state-of-the-art performance. The system's running time is about 20 to 40 seconds, depending on image size and resolution.
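A minimal sketch of the initialization step, assuming the detection stage returns a centroid and per-axis extent and that the mean shape is stored centered at the origin and unit-sized; all names and conventions here are hypothetical.

```python
import numpy as np

def initialize_mean_shape(mean_shape, center, size):
    """Place a prostate mean-shape mesh using the detected position and size
    (the first of the two segmentation steps).

    mean_shape : (N, 3) vertices of the mean shape, centered at the origin
                 and scaled to a unit bounding box
    center     : (3,) detected prostate centroid (mm)
    size       : (3,) detected per-axis extent (mm)
    """
    return mean_shape * np.asarray(size) + np.asarray(center)

# Example: a 1-unit cube mean shape moved to a detected location
verts = np.array([[x, y, z] for x in (-.5, .5) for y in (-.5, .5) for z in (-.5, .5)])
init = initialize_mean_shape(verts, center=(35.0, 42.0, 30.0), size=(48.0, 38.0, 40.0))
```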
Fully automated prostate segmentation helps to address several problems in prostate cancer diagnosis and treatment: it can assist in the objective evaluation of multiparametric MR imagery, provide a prostate contour for MR-ultrasound (or CT) image fusion in computer-assisted image-guided biopsy or therapy planning, facilitate reporting, and enable direct prostate volume calculation. Among the challenges in automated analysis of MR images of the prostate are variations of overall image intensity across scanners, a nonuniform multiplicative bias field within scans, and differences in acquisition setup. Furthermore, images acquired with an endorectal coil suffer from localized high-intensity artifacts at the posterior part of the prostate. In this work, a three-dimensional method for fast automated prostate detection based on normalized gradient fields cross-correlation, insensitive to intensity variations and coil-induced artifacts, is presented and evaluated. The components of the method, offline template learning and the localization algorithm, are described in detail.
The method was validated on a dataset of 522 T2-weighted MR images acquired at the National Cancer Institute, USA, which was split in two halves for development and testing. In addition, a second dataset of 29 MR exams from the Centre d'Imagerie Médicale Tourville, France, was used to test the algorithm. The 95% confidence intervals for the mean Euclidean distance between automatically and manually identified prostate centroids were 4.06 ± 0.33 mm and 3.10 ± 0.43 mm for the first and second test datasets, respectively. Moreover, the algorithm placed the centroid within the true prostate volume in 100% of the images from both datasets. The obtained results demonstrate the high utility of the detection method for fully automated prostate segmentation.
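A 2-D sketch of detection by normalized gradient fields cross-correlation is shown below: gradients are normalized to (near) unit length, making the measure insensitive to intensity scale and additive bias, and the template's field is correlated with the image's field. The plain component-wise correlation and all parameter choices are simplifications of the published method, not its exact formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

def ngf(image, eps=1e-2, sigma=1.0):
    """Normalized gradient field: gradient vectors scaled to (near) unit
    length, insensitive to intensity scale and additive bias."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    norm = np.sqrt(gx**2 + gy**2 + eps**2)
    return gx / norm, gy / norm

def detect_by_ngf_correlation(image, template):
    """Return the position of the best template match: correlate the NGF
    components of image and template and take the peak of the summed map."""
    ix, iy = ngf(image)
    tx, ty = ngf(template)
    # Reverse both template axes so the convolution acts as a correlation
    score = (fftconvolve(ix, tx[::-1, ::-1], mode='same')
             + fftconvolve(iy, ty[::-1, ::-1], mode='same'))
    return np.unravel_index(np.argmax(score), score.shape)
```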
KEYWORDS: Image segmentation, Cartilage, Bone, 3D image processing, Magnetic resonance imaging, Optical spheres, 3D modeling, Associative arrays, Computer engineering, Image processing algorithms and systems
A novel method is presented for the definition of search lines in a variety of surface segmentation approaches. The method is inspired by the properties of electric field direction lines and is applicable to general-purpose n-D shape-based image segmentation tasks. Its utility is demonstrated in graph construction and optimal segmentation of multiple mutually interacting objects. The properties of the electric field-based graph construction guarantee that inter-object graph connecting lines are non-intersecting and inherently cover the entire object-interaction space. When applied to inter-object cross-surface mapping, our approach generates one-to-one and all-to-all vertex correspondence pairs between the regions of mutual interaction. We demonstrate the benefits of the electric field approach in several examples ranging from relatively simple single-surface segmentation to complex multi-object, multi-surface segmentation of femur-tibia cartilage. The performance of our approach was evaluated on 60 MR images from the Osteoarthritis Initiative (OAI), in which it achieved very good performance as judged by surface positioning errors (averages of 0.29 and 0.59 mm for signed and unsigned cartilage positioning errors, respectively).
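The underlying idea can be sketched as follows: unit point charges placed on a presegmented surface define a field whose direction lines never intersect, so following the field from a vertex yields a valid, non-crossing search line. The Euler integration scheme, step size, and toy charge placement below are illustrative assumptions.

```python
import numpy as np

def electric_field(point, charges):
    """Field at `point` from unit point charges at the rows of `charges`
    (Coulomb superposition, constants dropped): E(x) = sum_i (x - p_i) / |x - p_i|^3."""
    d = point - charges                          # (M, 3) displacement vectors
    r = np.linalg.norm(d, axis=1, keepdims=True)
    return (d / r**3).sum(axis=0)

def trace_search_line(start, charges, step=0.1, n_steps=200):
    """Trace a search line by following the normalized field direction from
    `start`; such lines are non-intersecting, the property exploited for
    graph-column construction."""
    line = [np.asarray(start, float)]
    for _ in range(n_steps):
        e = electric_field(line[-1], charges)
        line.append(line[-1] + step * e / np.linalg.norm(e))
    return np.array(line)

# Example: charges on a small sphere; lines fan outward without crossing
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))
charges = pts / np.linalg.norm(pts, axis=1, keepdims=True)
line = trace_search_line(start=(1.2, 0.0, 0.0), charges=charges)
```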
KEYWORDS: Cartilage, Bone, Image segmentation, 3D image processing, Natural surfaces, 3D modeling, Tissues, Data modeling, Medical imaging, Image processing
We present a novel framework for the simultaneous segmentation of multiple interacting surfaces belonging to multiple mutually interacting objects. The method is a non-trivial extension of our previously reported optimal multi-surface segmentation. Considering the example application of knee-cartilage segmentation, the framework consists of the following main steps: 1) Shape model construction: building a mean shape for each bone of the joint (femur, tibia, patella) from interactively segmented volumetric datasets, and using the resulting mean-shape model to identify cartilage, non-cartilage, and transition areas on the mean-shape bone surfaces. 2) Presegmentation: employing an iterative optimal surface detection method to achieve an approximate segmentation of the individual bone surfaces. 3) Cross-object surface mapping: detecting inter-bone equidistant separating sheets to identify corresponding vertex pairs for all interacting surfaces. 4) Multi-object, multi-surface graph construction and final segmentation: constructing a single multi-bone, multi-surface graph so that two surfaces (bone and cartilage) with zero or non-zero intervening distances can be detected for each bone of the joint, according to whether cartilage may be locally absent or present on the bone. To define inter-object relationships, corresponding vertex pairs identified using the separating sheets were interlinked in the graph. The graph optimization algorithm acted on the entire multi-object, multi-surface graph to yield a globally optimal solution.
The segmentation framework was tested on 16 MR-DESS knee-joint datasets from the Osteoarthritis Initiative
database. The average signed surface positioning error for the 6 detected surfaces ranged from 0.00 to 0.12 mm.
When independently initialized, the signed reproducibility error of bone and cartilage segmentation ranged from
0.00 to 0.26 mm. The results showed that this framework provides robust, accurate, and reproducible segmentation
of the knee joint bone and cartilage surfaces of the femur, tibia, and patella. As a general segmentation
tool, the developed framework can be applied to a broad range of multi-object segmentation problems.
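Step 3, the inter-bone equidistant separating sheet, can be sketched with distance transforms: the sheet is the set of voxels (approximately) equidistant from the two bones. The tolerance parameter and the toy spheres below are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def separating_sheet(mask_a, mask_b, tol=0.6):
    """Voxels approximately equidistant from two object masks, a sketch of the
    inter-bone equidistant separating sheet used for cross-object mapping.

    mask_a, mask_b : boolean volumes for the two presegmented bones
    tol            : half-width (in voxels) of the extracted sheet
    """
    # Distance from every voxel to the nearest voxel of each object
    da = distance_transform_edt(~mask_a)
    db = distance_transform_edt(~mask_b)
    between = ~mask_a & ~mask_b
    return between & (np.abs(da - db) <= tol)

# Toy example: two spheres standing in for femur and tibia
z, y, x = np.mgrid[0:40, 0:40, 0:40]
femur = (z - 12)**2 + (y - 20)**2 + (x - 20)**2 <= 64
tibia = (z - 28)**2 + (y - 20)**2 + (x - 20)**2 <= 64
sheet = separating_sheet(femur, tibia)
```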
KEYWORDS: Image segmentation, 3D modeling, Quantitative analysis, Computed tomography, Medical imaging, Statistical modeling, 3D image processing, Medicine, Visualization, Surgery
An abdominal aortic aneurysm (AAA) is a localized widening of the abdominal aorta, frequently accompanied by thrombus. A ruptured aneurysm can cause death due to severe internal bleeding. AAA thrombus segmentation and quantitative analysis are of paramount importance for diagnosis, risk assessment, and determination of treatment options. Until now, only a small number of methods for thrombus segmentation and analysis have been presented in the literature, either requiring substantial user interaction or exhibiting insufficient performance. We report a novel method offering minimal user interaction and high accuracy. Our thrombus segmentation method is composed of an initial automated luminal surface segmentation, followed by a cost-function-based optimal segmentation of the inner and outer surfaces of the aortic wall. The approach utilizes the power and flexibility of an optimal triangle-mesh-based 3-D graph search method, in which the cost functions for the thrombus inner and outer surfaces are based on gradient magnitudes. When local failures caused by image ambiguity occur, several control points are used to guide the computer segmentation without the need to trace borders manually. Our method was tested on 9 MDCT image datasets (951 image slices). With the exception of one case in which the thrombus was highly eccentric, visually acceptable aortic lumen and thrombus segmentation results were achieved. No user interaction was needed in 3 of the remaining 8 datasets; 7.80 ± 2.71 mouse clicks per case (0.083 ± 0.035 mouse clicks per image slice) were required in the other 5.
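A minimal sketch of a gradient-magnitude-based surface cost of the kind that drives the mesh-based 3-D graph search, sampled along mesh vertex normals; the sampling scheme, smoothing, and names are our assumptions rather than the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import map_coordinates, gaussian_gradient_magnitude

def surface_costs(volume, vertices, normals, n_samples=15, spacing=1.0):
    """Gradient-magnitude-based costs sampled along vertex normals.

    volume   : 3-D CT volume
    vertices : (N, 3) mesh vertex coordinates (voxel units)
    normals  : (N, 3) outward unit normals
    Returns an (N, n_samples) cost array; low cost marks a likely wall surface.
    """
    grad_mag = gaussian_gradient_magnitude(volume.astype(float), sigma=1.0)
    offsets = (np.arange(n_samples) - n_samples // 2) * spacing
    # Sample points along each vertex normal: shape (N, n_samples, 3)
    pts = vertices[:, None, :] + offsets[None, :, None] * normals[:, None, :]
    samples = map_coordinates(grad_mag, pts.reshape(-1, 3).T, order=1)
    return -samples.reshape(len(vertices), n_samples)   # strong edge = low cost
```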