Invasive cardiac angiography (catheterization) is still the clinical standard for diagnosing coronary artery disease (CAD), but it involves considerable risk and cost. New generations of CT scanners can acquire high-quality images of the coronary arteries, allowing accurate identification and delineation of stenoses. Recently, computational fluid dynamics (CFD) simulation has been applied to coronary blood flow using geometric lumen models extracted from CT angiography (CTA). The computed pressure drop at stenoses proved indicative of ischemia-causing lesions, leading to non-invasive fractional flow reserve (FFR) derived from CTA. Since the diagnostic value of non-invasive procedures for diagnosing CAD relies on an accurate extraction of the lumen, precise segmentation of the coronary arteries is crucial. As manual segmentation is tedious, time-consuming, and subjective, automatic procedures are desirable. We present a novel fully automatic method to accurately segment the lumen of coronary arteries in the presence of calcified and non-calcified plaque. Our segmentation framework consists of three main steps: boundary detection, calcium exclusion and surface optimization. A learning-based boundary detector enables robust lumen contour detection via dense ray-casting. A novel calcium exclusion technique keeps calcified plaque out of the lumen, allowing us to accurately capture stenoses of diseased arteries. The boundary detection results are incorporated into a closed set formulation whose minimization yields an optimized lumen surface. On standardized tests with clinical data, the method achieves a segmentation accuracy comparable to that of clinical experts and superior to current automatic methods.
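The dense ray-casting step can be sketched as follows. This is a minimal 2-D illustration, not the paper's learned boundary detector: `boundary_score` stands in for the classifier response, and a synthetic circular lumen is used so the recovered contour can be checked.

```python
import math

def ray_cast_contour(center, boundary_score, n_rays=32, max_r=10.0, step=0.25):
    """Cast rays outward from a centerline point and pick, per ray, the
    radius with the highest boundary score; one contour point per ray."""
    contour = []
    for k in range(n_rays):
        theta = 2.0 * math.pi * k / n_rays
        best_r, best_s = step, float("-inf")
        r = step
        while r <= max_r:
            x = center[0] + r * math.cos(theta)
            y = center[1] + r * math.sin(theta)
            s = boundary_score(x, y)
            if s > best_s:
                best_s, best_r = s, r
            r += step
        contour.append((center[0] + best_r * math.cos(theta),
                        center[1] + best_r * math.sin(theta)))
    return contour

# Toy detector: peaks where the distance from the origin equals 3,
# i.e. a synthetic circular lumen of radius 3.
score = lambda x, y: -abs(math.hypot(x, y) - 3.0)
pts = ray_cast_contour((0.0, 0.0), score, n_rays=8)
```

In the actual framework the score along each ray would come from the trained boundary classifier, and the per-ray maxima feed into the subsequent surface optimization.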
Automatic segmentation of the left atrium (LA), together with the left atrial appendage (LAA) and the pulmonary vein (PV) trunks, is important for intra-operative guidance in radio-frequency catheter ablation to treat atrial fibrillation (AF). Recently, we proposed a model-based method [1, 2] for LA segmentation from C-arm CT images using marginal space learning (MSL) [3]. However, on some data the mesh from the model-based segmentation cannot exactly fit the true boundary of the left atrium in the image, since the method does not make full use of local voxel-wise intensity information. Furthermore, due to the large variations of the PV drainage pattern, extra right middle pulmonary veins are not included in the LA model. In this paper, a graph-based method is proposed that exploits graph cuts to refine the results of the model-based segmentation and to extract right middle pulmonary veins. We first build regions of interest to constrain the segmentation. A region growing method is used to construct graphs within the regions of interest for the graph cuts optimization. The graph cuts optimization is then performed, and newly segmented foreground voxels are assigned to different parts of the left atrium. For the extraction of right middle pulmonary veins, occasional false positive PVs are removed by examining multiple criteria. Experiments demonstrate that the proposed graph-based method is effective and efficient in improving the LA segmentation accuracy and extracting right middle PVs.
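The graph cuts machinery can be sketched on a toy 1-D scanline. This is a generic min-cut segmentation with brightness-based terminal capacities, not the paper's region-grown graph construction; the max-flow solver is plain Edmonds-Karp.

```python
from collections import deque

def max_flow_min_cut(n, edges, s, t):
    """Edmonds-Karp max-flow; returns the set of nodes on the source
    side of the min cut. `edges` is a list of (u, v, capacity)."""
    cap = [dict() for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] = cap[u].get(v, 0) + c
        cap[v].setdefault(u, 0)          # residual arc
    while True:
        parent = {s: None}               # BFS for an augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        path, v = [], t                  # walk back to collect the path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)   # bottleneck capacity
        for u, v in path:
            cap[u][v] -= b
            cap[v][u] += b
    side, q = {s}, deque([s])            # residual-reachable = source side
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in side:
                side.add(v)
                q.append(v)
    return side

# Toy 1-D scanline: bright pixels should end up as foreground (atrium).
inten = [200, 190, 180, 40, 30, 20]
S, T = 6, 7
E = []
for i, g in enumerate(inten):
    E.append((S, i, g))          # foreground affinity ~ brightness
    E.append((i, T, 255 - g))    # background affinity
for i in range(5):
    E.append((i, i + 1, 60))     # smoothness between neighbours
    E.append((i + 1, i, 60))
fg = max_flow_min_cut(8, E, S, T) - {S}   # foreground pixel indices
```

The smoothness edges penalize label changes between neighbouring pixels; in the paper's 3-D setting the graph is built over region-grown voxels inside the regions of interest.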
KEYWORDS: Sensors, Fluoroscopy, Visualization, Surgery, 3D modeling, Detection and tracking algorithms, Visibility, Motion models, Visual process modeling, Video
The pigtail catheter is inserted into the human body during interventional surgeries such
as transcatheter aortic valve implantation (TAVI). The catheter is characterized by a tightly curled end in
order to remain attached to a valve pocket during the intervention, and it is used to inject contrast agent for the
visualization of the vessel in fluoroscopy. Image-based detection of this catheter is used during TAVI, in order to
overlay a model of the aorta and enhance visibility during the surgery. Due to the different possible projection
angles in fluoroscopy, the pigtail tip can appear in a variety of shapes, ranging from a full circle to an
ellipse or even a straight line. Furthermore, the appearance of the catheter tip changes radically when the contrast
agent is injected during the intervention or when it is occluded by other devices. All these factors make the
robust real-time detection and tracking of the pigtail catheter a challenging task. To address these challenges,
this paper proposes a new tree-structured, hierarchical detection scheme, based on a shape categorization of the
pigtail catheter tip, and a combination of novel Haar features. The proposed framework demonstrates improved
detection performance, through a validation on a data set consisting of 272 sequences with more than 20,000
images. The detection framework presented in this paper is not limited to pigtail catheter detection, but it can
also be applied successfully to any other shape-varying object with similar characteristics.
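Haar-like features of the kind used in such detectors are evaluated in constant time from an integral image. The sketch below shows a generic two-rectangle feature; it is an illustration of the mechanism, not the paper's novel Haar features, which are not specified here.

```python
def integral_image(img):
    """Summed-area table with a zero top row/left column, so any
    rectangle sum is four lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle [x, x+w) x [y, y+h)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect_vertical(ii, x, y, w, h):
    """Left-minus-right two-rectangle feature (w must be even);
    responds strongly to vertical intensity edges."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

# Toy image: dark left half (0), bright right half (10).
img = [[0] * 4 + [10] * 4 for _ in range(4)]
ii = integral_image(img)
resp = haar_two_rect_vertical(ii, 0, 0, 8, 4)
```

Because every feature is a handful of table lookups, a hierarchical detector can evaluate thousands of such features per candidate window in real time.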
Automatic coronary centerline extraction and lumen segmentation facilitate the diagnosis of coronary artery
disease (CAD), which is a leading cause of death in developed countries. Various coronary centerline extraction
methods have been proposed and most of them are based on shortest path computation given one or two end
points on the artery. The shortest-path approaches differ mainly in the vesselness
measurement used for the path cost. An empirically designed measurement (e.g., the widely used Hessian
vesselness) is by no means optimal in its use of image context information. In this paper, a machine learning
based vesselness is proposed by exploiting the rich domain specific knowledge embedded in an expert-annotated
dataset. For each voxel, we extract a set of geometric and image features. The probabilistic boosting tree
(PBT) is then used to train a classifier, which assigns a high score to voxels inside the artery and a low score
to those outside. The detection score can be treated as a vesselness measurement in the computation of the
shortest path. Since the detection score measures the probability that a voxel lies inside the vessel lumen, it
can also be used for the coronary lumen segmentation. To speed up the computation, we perform classification
only for voxels around the heart surface, which is achieved by automatically segmenting the whole heart from
the 3D volume in a preprocessing step. An efficient voxel-wise classification strategy is used to further improve
the speed. Experiments demonstrate that the proposed learning-based vesselness outperforms the conventional
Hessian vesselness in both speed and accuracy. On average, it only takes approximately 2.3 seconds to process
a large volume with a typical size of 512x512x200 voxels.
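The shortest-path computation with a probabilistic vesselness can be sketched as follows. This is a 2-D toy: the hand-made probability map stands in for the PBT detector score, and -log(p) is used as the per-voxel path cost (one plausible choice for turning probabilities into additive costs, not necessarily the paper's exact formula).

```python
import heapq
import math

def shortest_centerline(prob, start, end):
    """Dijkstra on a pixel grid; the cost of entering a pixel is
    -log(vesselness), so high-probability lumen pixels are cheap."""
    h, w = len(prob), len(prob[0])
    eps = 1e-6
    cost = lambda y, x: -math.log(max(prob[y][x], eps))
    dist = {start: cost(*start)}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (y, x) = heapq.heappop(pq)
        if (y, x) == end:
            break
        if d > dist[(y, x)]:
            continue                      # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost(ny, nx)
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(pq, (nd, (ny, nx)))
    path, p = [end], end                  # backtrack from end to start
    while p != start:
        p = prev[p]
        path.append(p)
    return path[::-1]

# Toy vesselness map: a high-probability "vessel" runs along row 1.
prob = [[0.05, 0.05, 0.05, 0.05],
        [0.90, 0.95, 0.95, 0.90],
        [0.05, 0.05, 0.05, 0.05]]
path = shortest_centerline(prob, (1, 0), (1, 3))
```

Swapping the learned score for a Hessian vesselness changes only the `cost` function, which is exactly the degree of freedom the abstract identifies among shortest-path methods.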
KEYWORDS: 3D modeling, Data modeling, Principal component analysis, Sensors, Visualization, Distance measurement, Visual process modeling, 3D metrology, Heart, Optical tracking
Aortic valve disorders are the most frequent form of valvular heart disorders (VHD) affecting nearly 3% of
the global population. A large fraction of these are aortic root diseases, such as aortic root aneurysm,
often requiring surgical (valve-sparing) procedures as a treatment. Non-invasive visual assessment techniques
could assist in selecting suitable patients, planning the procedure, and evaluating its outcome.
However, state-of-the-art approaches model only a rather short part of the aortic root, which is insufficient to assist
the physician during intervention planning. In this paper, we propose a novel approach for morphological and
functional quantification of both the aortic valve and the ascending aortic root. A novel physiological shape
model is introduced, consisting of the aortic valve root, leaflets and the ascending aortic root. The model
parameters are hierarchically estimated using robust and fast learning-based methods. Experiments performed
on 63 CT sequences (630 volumes) and 20 single-phase CT volumes demonstrated an accuracy of 1.45 mm and
a runtime of 30 seconds (3D+t) for this approach. To the best of our knowledge, this is the first time a
complete model of the aortic valve (including leaflets) and the ascending aortic root, estimated from CT, has
been proposed.
We recently proposed a robust heart chamber segmentation approach based on marginal space learning. In this paper, we focus on improving the LV endocardium segmentation accuracy by searching for an optimal smooth mesh that tightly encloses the whole blood pool. The refinement procedure is formulated as an optimization problem: maximizing the surface smoothness under the tightness constraint. The formulation is a convex quadratic programming problem; it therefore has a unique global optimum and can be solved efficiently. Our approach has been validated on the largest cardiac CT dataset (457 volumes from 186 patients) ever reported. Compared to our previous work, it reduces the mean point-to-mesh error from 1.13 mm to 0.84 mm (22% improvement). Additionally, the system has been extensively tested on a dataset with 2000+ volumes without any major failure.
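The paper solves the refinement as an exact convex QP; as a toy 1-D analogue, projected gradient descent over ring radii illustrates the same smoothness-vs-tightness trade-off (the constraint set x >= r is convex, so the iteration converges to the optimum). The function names and the λ-weighted tightness term below are illustrative assumptions, not the paper's exact formulation.

```python
def refine(r, lam=0.05, step=0.1, iters=500):
    """Minimize  sum_i (x[i+1]-x[i])^2 + lam * sum_i (x[i] - r[i])
    subject to  x[i] >= r[i]  (the mesh must enclose the blood pool),
    by projected gradient descent (step < 1/L guarantees descent)."""
    n = len(r)
    x = list(r)
    for _ in range(iters):
        g = [lam] * n                        # gradient of the tightness term
        for i in range(n - 1):
            d = x[i + 1] - x[i]
            g[i] -= 2 * d                    # gradient of the smoothness term
            g[i + 1] += 2 * d
        x = [max(x[i] - step * g[i], r[i])   # project back onto x >= r
             for i in range(n)]
    return x

smooth = lambda v: sum((v[i + 1] - v[i]) ** 2 for i in range(len(v) - 1))

# Blood-pool radii with a spike; the refined surface must bridge over it
# smoothly while staying outside the pool everywhere.
r = [3.0, 3.0, 3.0, 5.0, 3.0, 3.0, 3.0]
x = refine(r)
```

After refinement the surface is strictly smoother than the raw blood-pool boundary while never dipping inside it, which is the behaviour the QP formulation encodes.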
Magnetic resonance imaging (MRI) is currently the gold standard for left ventricle (LV) quantification. Detection of the LV in an MRI image is a prerequisite for functional measurement. However, due to the large variations in orientation, size, shape, and image intensity of the LV, automatic detection of the LV is still a challenging problem. In this paper, we propose to use marginal space learning (MSL) to exploit the recent advances in learning discriminative classifiers. Instead of learning a monolithic classifier directly in the five-dimensional object pose space (two dimensions for position, one for orientation, and two for anisotropic scaling) as full space learning (FSL) does, we train three detectors, namely, the position detector, the position-orientation detector, and the position-orientation-scale detector. Comparative experiments show that MSL significantly outperforms FSL in both speed and accuracy. Additionally, we also detect several LV landmarks, such as the LV apex and two annulus points. If we combine the detected candidates from both the whole-object detector and the landmark detectors, we can further improve the system robustness. A novel voting-based strategy is devised to combine the candidates detected by all detectors. Experiments show that component-based voting reduces detection outliers.
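The marginal-space idea, ranking hypotheses in a low-dimensional marginal first and extending only the survivors, can be sketched as follows. The score function, parameter grids, and top-k pruning are toy assumptions; MSL itself uses trained discriminative classifiers at each stage.

```python
def marginal_space_search(score, positions, orients, scales, k=3):
    """Coarse-to-fine search: rank positions first, extend only the
    top-k hypotheses with orientation, then with scale."""
    top_p = sorted(positions, key=lambda p: score(p, None, None),
                   reverse=True)[:k]
    po = [(p, o) for p in top_p for o in orients]
    top_po = sorted(po, key=lambda h: score(h[0], h[1], None),
                    reverse=True)[:k]
    full = [(p, o, s) for p, o in top_po for s in scales]
    return max(full, key=lambda h: score(*h))

# Toy classifier response peaking at pose (5, 0.3, 2.0); None means a
# parameter is not yet part of the hypothesis (a marginal score).
def score(p, o, s):
    v = -abs(p - 5)
    if o is not None:
        v -= abs(o - 0.3)
    if s is not None:
        v -= abs(s - 2.0)
    return v

best = marginal_space_search(score, range(10), [0.0, 0.3, 0.6],
                             [1.0, 2.0, 3.0])
```

Instead of scoring every cell of the full position x orientation x scale grid (10 x 3 x 3 here), only the pruned subsets are evaluated, which is the source of MSL's speed advantage over full space learning.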
KEYWORDS: Heart, 3D modeling, Image segmentation, Statistical modeling, Data modeling, Process modeling, Systems modeling, Computed tomography, Databases, 3D image processing
Multi-chamber heart segmentation is a prerequisite for quantification of the cardiac function. In this paper, we propose an automatic heart chamber segmentation system. There are two closely related tasks to develop such a system: heart modeling and automatic model fitting to an unseen volume. The heart is a complicated non-rigid organ with four chambers and several major vessel trunks attached. A flexible and accurate model is necessary to capture the heart chamber shape at an appropriate level of detail. In our four-chamber surface mesh model, the following two factors are considered and traded off: 1) accuracy in anatomy and 2) ease of both annotation and automatic detection. Important landmarks such as valves and cusp points on the interventricular septum are explicitly represented in our model. These landmarks can be detected reliably to guide the automatic model fitting process. We also propose two mechanisms, the rotation-axis based and parallel-slice based resampling methods, to establish mesh point correspondence, which is necessary to build a statistical shape model to enforce prior shape constraints in the model fitting procedure. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3D computed tomography (CT) volumes. Our approach is based on recent advances in learning discriminative object models and we exploit a large database of annotated CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. A novel algorithm, Marginal Space Learning (MSL), is introduced to solve the 9-dimensional similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3D shape through learning-based boundary delineation. Extensive experiments demonstrate the efficiency and robustness of the proposed approach, comparing favorably to the state-of-the-art.
This is the first study reporting stable results on a large cardiac CT dataset with 323 volumes. In addition, automatic segmentation of all four chambers takes less than eight seconds.