Breast cancer risk assessment relies on accurate classification of breast density, a key component of the ACR breast cancer screening recommendations. The 5th edition of the BI-RADS standard divides breast density into four categories, ranging from almost entirely fatty to extremely dense. High breast density (classes C and D) reduces the sensitivity of mammography, since dense fibroglandular tissue can hide lesions, masses, and other findings. Therefore, although the benefit of supplementary imaging in such cases has not been conclusively demonstrated, the ACR guidelines suggest additional screening for patients with high breast density. This creates an important treatment decision boundary between class B (scattered areas of fibroglandular density) and class C (heterogeneously dense). Unfortunately, the qualitative, somewhat abstract nature of the class descriptions leads to significant inter- and intra-rater variation in breast density assessment. This is exacerbated by updates to the BI-RADS standard that can render recent breast density assessments incompatible with prior assessments for the same patient. Additionally, images from similar patients can vary significantly when taken with different devices or at sites with different acquisition protocols. To address these issues, we present a new deep learning algorithm combining three models that achieves accurate and objective breast density classification. The first model performs the standard four-class breast density classification, the second performs a two-class low (A or B) vs. high (C or D) classification, and the third, patch-based model focuses on improving the accuracy of the B and C categories. We present initial results from 9989 studies drawn from a three-site dataset with BI-RADS 4th and 5th edition ground truth.
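As an illustration of how the three model outputs could be combined, the sketch below fuses their softmax probabilities, letting the binary low/high model reweight the four-class output and the patch-based model arbitrate the clinically critical B/C boundary. The fusion rule, function names, and weights are illustrative assumptions, not the combination method used in the paper.

```python
import numpy as np

def fuse_density_predictions(p_four, p_binary, p_bc_patch):
    """Combine three classifier outputs into a final BI-RADS density class.

    p_four     -- length-4 probabilities over classes A, B, C, D
    p_binary   -- length-2 probabilities over low (A/B) vs. high (C/D)
    p_bc_patch -- length-2 probabilities over B vs. C from the patch model
    """
    p = np.array(p_four, dtype=float)
    # Reweight the four-class output by the low/high model.
    p[:2] *= p_binary[0]
    p[2:] *= p_binary[1]
    # Let the patch-based model arbitrate the B/C boundary.
    p[1] *= p_bc_patch[0]
    p[2] *= p_bc_patch[1]
    p /= p.sum()
    return "ABCD"[int(np.argmax(p))]

# Example: the four-class model is torn between B and C; the other
# two models tip the decision toward C.
print(fuse_density_predictions([0.1, 0.45, 0.40, 0.05], [0.55, 0.45], [0.4, 0.6]))
```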
Tumor tracking and progression analysis using medical images is crucial for physicians to develop accurate and efficient treatment plans and to monitor treatment response. In current practice, tumor progression is tracked through manual measurements of tumor growth performed by radiologists. Several methods have been proposed to automate these measurements with
segmentation, but many current algorithms are confounded by attached organs and vessels. To address this problem, we
present a new generalized tumor propagation model considering time-series prior images and local anatomical features
using a hierarchical hidden Markov model (HMM) for tumor tracking. First, we apply the multi-atlas segmentation
technique to identify organs/sub-organs using pre-labeled atlases. Second, we apply a semi-automatic direct 3D
segmentation method to label the initial boundary between the lesion and neighboring structures. Third, we detect
vessels in the ROI surrounding the lesion. Finally, we apply the propagation model with the labeled organs and vessels
to accurately segment and measure the target lesion. The algorithm has been designed in a general way to be applicable
to various body parts and modalities. In this paper, we evaluate the proposed algorithm on lung and lung nodule
segmentation and tracking. We report the algorithm’s performance by comparing the longest diameter and nodule
volumes using the FDA lung phantom dataset and a clinical dataset.
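The propagation step can be viewed as decoding a most-probable sequence of lesion hypotheses over time. The sketch below shows a plain first-order Viterbi decode over candidate segmentations per time point; it is a simplified stand-in for the hierarchical model described above, and both scoring functions are placeholder assumptions rather than the paper's actual components.

```python
import numpy as np

def viterbi_propagate(emission_scores, transition_scores):
    """Decode the most probable sequence of lesion candidates over time.

    emission_scores   -- (T, K) array: log-likelihood of each of K candidate
                         boundaries at each of T time points (image fit,
                         anatomical-label agreement, etc.)
    transition_scores -- (K, K) array: log-probability that candidate i at
                         time t evolves into candidate j at time t+1
    """
    T, K = emission_scores.shape
    score = emission_scores[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # total[i, j] = score of ending in j at time t via i at time t-1
        total = score[:, None] + transition_scores + emission_scores[t][None, :]
        back[t] = np.argmax(total, axis=0)
        score = np.max(total, axis=0)
    # Backtrack from the best final candidate.
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```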
We present a method for semi-automatic segmentation of lung nodules in chest CT that can be extended to general lesion
segmentation in multiple modalities. Most semi-automatic algorithms for lesion segmentation or similar tasks use
region-growing or edge-based contour-finding methods such as level sets. However, lung nodules and other lesions are
often connected to surrounding tissues, which makes these algorithms prone to growing the nodule boundary into the
surrounding tissue. To solve this problem, we apply a 3D extension of the 2D edge linking method with dynamic
programming to find a closed surface in a spherical representation of the nodule ROI. The algorithm requires a user to
draw a maximal diameter across the nodule in the slice in which the nodule cross-section is largest. We report the
lesion volume estimation accuracy of our algorithm on the FDA lung phantom dataset, and the RECIST diameter
estimation accuracy on the lung nodule dataset from the SPIE 2016 lung nodule classification challenge. The phantom
results in particular demonstrate that our algorithm has the potential to mitigate the disparity in measurements performed
by different radiologists on the same lesions, which could improve the accuracy of disease progression tracking.
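To make the dynamic programming idea concrete, the sketch below shows a 2D polar simplification: the ROI is resampled so the contour becomes a function of angle, and a minimum-cost closed path is found subject to a smoothness constraint. The 3D method in the paper links edges on a spherical representation instead; the cost array and step bound here are assumptions for illustration.

```python
import numpy as np

def polar_dp_contour(cost, max_step=1):
    """Find a minimum-cost closed contour in a polar edge-cost image.

    cost -- (A, R) array; cost[a, r] is low where an edge is likely at
            angle index a and radius index r.
    Returns one radius index per angle, with |r(a+1) - r(a)| <= max_step.
    Closure is enforced by trying each starting radius in turn.
    """
    A, R = cost.shape
    best_path, best_cost = None, np.inf
    for r0 in range(R):
        dp = np.full((A, R), np.inf)
        back = np.zeros((A, R), dtype=int)
        dp[0, r0] = cost[0, r0]
        for a in range(1, A):
            for r in range(R):
                lo, hi = max(0, r - max_step), min(R, r + max_step + 1)
                prev = int(np.argmin(dp[a - 1, lo:hi])) + lo
                dp[a, r] = dp[a - 1, prev] + cost[a, r]
                back[a, r] = prev
        # The final radius must return near the starting radius.
        lo, hi = max(0, r0 - max_step), min(R, r0 + max_step + 1)
        r_end = int(np.argmin(dp[A - 1, lo:hi])) + lo
        if dp[A - 1, r_end] < best_cost:
            best_cost = dp[A - 1, r_end]
            path = [r_end]
            for a in range(A - 1, 0, -1):
                path.append(int(back[a, path[-1]]))
            best_path = path[::-1]
    return best_path
```

Trying every starting radius guarantees a closed contour; in practice the search could be restricted to radii near the user-drawn maximal diameter.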
Computer aided diagnosis (CAD) systems for medical image analysis rely on accurate and efficient feature extraction methods. Regardless of which type of classifier is used, the results will be limited if the input features are not diagnostically relevant and do not properly discriminate between the different classes of images. Thus, a large amount of research has been dedicated to creating feature sets that capture the salient features that physicians are able to observe in the images. Successful feature extraction reduces the semantic gap between the physician’s interpretation and the computer representation of images, and helps to reduce the variability in diagnosis between physicians. Due to the complexity of many medical image classification tasks, feature extraction for each problem often requires domain-specific knowledge and a carefully constructed feature set for the specific type of images being classified. In this paper, we describe a method for automatic diagnostic feature extraction from colonoscopy images that may have general application and requires a lower level of domain-specific knowledge. The work in this paper expands on our previous CAD algorithm for detecting polyps in colonoscopy video. In that work, we applied an eigenimage model to extract features representing polyps, normal tissue, diverticula, etc. from colonoscopy videos taken from various viewing angles and imaging conditions. Classification was performed using a conditional random field (CRF) model that accounted for the spatial and temporal adjacency relationships present in colonoscopy video. In this paper, we replace the eigenimage feature descriptor with features extracted from a convolutional neural network (CNN) trained to recognize the same image types in colonoscopy video. The CNN-derived features show greater invariance to viewing angles and image quality factors than the eigenimage model. The CNN features are used as input to the CRF classifier as before. We report testing results for the new algorithm using both human and mouse colonoscopy data.
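The feature-extraction step can be sketched as taking penultimate-layer activations of a CNN as a per-frame descriptor. The paper trains its own network on colonoscopy image types; the pretrained ResNet below is a generic stand-in, and the preprocessing constants are the usual ImageNet values rather than anything from the paper.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Backbone with the classification head removed, so the forward pass
# returns the penultimate-layer activations instead of class scores.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def frame_features(pil_frames):
    """Return one 512-dimensional descriptor per video frame; the CRF
    classifier downstream consumes one such vector per frame."""
    batch = torch.stack([preprocess(f) for f in pil_frames])
    return backbone(batch)  # shape: (num_frames, 512)
```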
Breast cancer is one of the most common forms of cancer in terms of new cases and deaths, both in the United States and worldwide. However, the survival rate for breast cancer is high if it is detected and treated before it spreads to other parts of the body. The most common screening methods for breast cancer are mammography and digital tomosynthesis, which involve acquiring X-ray images of the breasts that are interpreted by radiologists. The work described in this paper is aimed at optimizing the presentation of mammography and tomosynthesis images to the radiologist, thereby improving the early detection rate of breast cancer and the resulting patient outcomes. Breast cancer tissue has greater density than normal breast tissue, and appears as dense white image regions that are asymmetrical between the breasts. These irregularities are easily seen if the breast images are aligned and viewed side-by-side. However, since the breasts are imaged separately during mammography, the images may be poorly centered and aligned relative to each other, and may not properly focus on the tissue area. Similarly, although a full three-dimensional reconstruction can be created from digital tomosynthesis images, the same centering and alignment issues can occur. Thus, a preprocessing algorithm that aligns the breasts for easy side-by-side comparison has the potential to greatly increase the speed and accuracy of mammogram reading. Likewise, the same preprocessing can improve the results of automatic tissue classification algorithms for mammography. In this paper, we present an automated segmentation algorithm for mammogram and tomosynthesis images that aims to improve the speed and accuracy of breast cancer screening by mitigating the above-mentioned problems. Our algorithm uses information in the DICOM header to facilitate preprocessing, and incorporates anatomical region segmentation and contour analysis, along with a hidden Markov model (HMM) for processing the multi-frame tomosynthesis images. The output of the algorithm is a new set of images that have been processed to show only the diagnostically relevant region and to align the breasts so that they can be easily compared side-by-side. Our method has been tested on approximately 750 images, including various examples of mammogram, tomosynthesis, and scanned images, and has correctly segmented the diagnostically relevant image region in 97% of cases.
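A minimal sketch of the header-driven preprocessing step is shown below, assuming standard DICOM attributes (PhotometricInterpretation, ImageLaterality, ViewPosition) and a single 2D frame; the region segmentation, contour analysis, and HMM stages of the algorithm are not reproduced here.

```python
import numpy as np
import pydicom

def load_oriented_breast(path):
    """Load a mammogram and orient it so left and right views mirror each
    other for side-by-side comparison."""
    ds = pydicom.dcmread(path)
    img = ds.pixel_array.astype(np.float32)
    # MONOCHROME1 stores inverted intensities; normalize to "bright = dense".
    if ds.PhotometricInterpretation == "MONOCHROME1":
        img = img.max() - img
    # Flip right-breast images so the chest wall sits on the same side as in
    # left-breast images (a simple convention assumed for this sketch).
    if getattr(ds, "ImageLaterality", "L") == "R":
        img = np.fliplr(img)
    return img, getattr(ds, "ViewPosition", "")
```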
With colonoscopy becoming a common procedure for individuals aged 50 and older who are at risk of developing colorectal cancer (CRC), colon video data is being accumulated at an ever-increasing rate. However, the clinically
valuable information contained in these videos is not being maximally exploited to improve patient care and accelerate
the development of new screening methods. One of the well-known difficulties in colonoscopy video analysis is the
abundance of frames with no diagnostic information. Approximately 40-50% of the frames in a colonoscopy video are contaminated by noise, acquisition errors, glare, blur, and uneven illumination. Therefore, filtering out low-quality frames containing no diagnostic information can significantly improve the efficiency of colonoscopy video analysis. To address this challenge, we present a quality assessment algorithm to detect and remove low-quality, uninformative frames. The goal of our algorithm is to discard low-quality frames while retaining all diagnostically relevant information.
Our algorithm is based on a hidden Markov model (HMM) in combination with two measures of data quality to filter out
uninformative frames. Furthermore, we present a two-level framework based on an embedded hidden Markov model
(EHMM) to incorporate the proposed quality assessment algorithm into a complete, automated diagnostic image analysis
system for colonoscopy video.
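The sketch below shows two per-frame quality measures of the kind such a system could use: a focus measure based on the variance of the Laplacian, and a glare measure based on the fraction of near-saturated pixels. These are common stand-ins, not the two measures actually used in the paper, and the hard thresholds shown are exactly what the HMM's temporal smoothing is meant to replace.

```python
import cv2
import numpy as np

def frame_quality(frame_bgr, glare_thresh=250):
    """Return (focus, glare) for one video frame: variance of the Laplacian
    (low for blurry frames) and the fraction of near-saturated pixels."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    focus = cv2.Laplacian(gray, cv2.CV_64F).var()
    glare = float(np.mean(gray >= glare_thresh))
    return focus, glare

def is_informative(focus, glare, focus_min=100.0, glare_max=0.05):
    # Illustrative per-frame decision; the HMM replaces these cutoffs
    # with a model of how quality evolves across neighboring frames.
    return focus >= focus_min and glare <= glare_max
```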
Image-based camera motion estimation from video or still images is a difficult problem in the field of computer vision.
Many algorithms have been proposed for estimating intrinsic camera parameters, detecting and matching features
between images, calculating extrinsic camera parameters based on those features, and optimizing the recovered
parameters with nonlinear methods. These steps in the camera motion inference process all face challenges in practical
applications: locating distinctive features can be difficult in many types of scenes given the limited capabilities of current
feature detectors, camera motion inference can easily fail in the presence of noise and outliers in the matched features,
and the error surfaces in optimization typically contain many suboptimal local minima. The problems faced by these
techniques are compounded when they are applied to medical video captured by an endoscope, which presents further
challenges such as non-rigid scenery and severe barrel distortion of the images. In this paper, we study these problems
and propose the use of prior probabilities to stabilize camera motion estimation for the application of computing
endoscope motion sequences in colonoscopy.
Colonoscopy presents a special case for camera motion estimation in which it is possible to characterize typical motion
sequences of the endoscope. As the endoscope is restricted to move within a roughly tube-shaped structure,
forward/backward motion is expected, with only small amounts of rotation and horizontal movement. We formulate a
probabilistic model of endoscope motion by maneuvering an endoscope and attached magnetic tracker through a
synthetic colon model and fitting a distribution to the observed motion of the magnetic tracker. This model enables us to
estimate the probability of the current endoscope motion given previously observed motion in the sequence. We incorporate these prior probabilities into the camera motion calculation as an additional penalty term in RANSAC to help reject
improbable motion parameters caused by outliers and other problems with medical data. This paper presents the
theoretical basis of our method along with preliminary results on indoor scenes and synthetic colon images.
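A sketch of the penalized RANSAC loop is given below: each hypothesized motion is scored by its inlier support plus a weighted log-prior, so improbable endoscope motions are rejected even when outliers happen to support them. The functions estimate_motion, reprojection_error, and motion_log_prior are placeholders for the actual components, and the additive weighting is an assumption made for this sketch.

```python
import numpy as np

def ransac_with_prior(matches, estimate_motion, reprojection_error,
                      motion_log_prior, n_iters=500, inlier_thresh=2.0,
                      prior_weight=1.0, sample_size=5, rng=None):
    """RANSAC motion estimation with a motion-prior penalty term."""
    rng = rng or np.random.default_rng()
    best_motion, best_score = None, -np.inf
    for _ in range(n_iters):
        idx = rng.choice(len(matches), sample_size, replace=False)
        motion = estimate_motion([matches[i] for i in idx])
        if motion is None:  # degenerate sample
            continue
        errors = np.array([reprojection_error(motion, m) for m in matches])
        inliers = int(np.sum(errors < inlier_thresh))
        # Penalize motions that are improbable under the learned prior,
        # e.g. large rotations or horizontal movement inside the colon.
        score = inliers + prior_weight * motion_log_prior(motion)
        if score > best_score:
            best_score, best_motion = score, motion
    return best_motion
```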