Purpose: Diagnosis and surveillance of thoracic aortic aneurysm (TAA) involve measuring the aortic diameter at various locations along the length of the aorta, often using computed tomography angiography (CTA). Currently, measurements are performed by human raters using specialized software for three-dimensional analysis, a time-consuming process requiring 15 to 45 min of focused effort. Thus, we aimed to develop a convolutional neural network (CNN)-based algorithm for fully automated and accurate aortic measurements.
Approach: Using 212 CTA scans, we trained a CNN to perform segmentation and localization of key landmarks jointly. The segmentation mask and landmarks are subsequently used to obtain the centerline and cross-sectional diameters of the aorta. A cubic spline is then fit to the aortic boundary at the sinuses of Valsalva to avoid errors related to inclusion of the coronary artery origins. Performance was evaluated on a test set of 60 scans, with automated measurements compared against expert manual raters.
Results: Compared to training separate networks for each task, joint training yielded higher accuracy for segmentation, especially at the boundary (p < 0.001), but marginally worse (0.2 to 0.5 mm) accuracy for landmark localization (p < 0.001). Mean absolute error between human and automated measurements was ≤1 mm at six of nine standard clinical measurement locations. However, higher errors were noted in the aortic root and arch regions, ranging between 1.4 and 2.2 mm, although the agreement of manual raters was also lower in these regions.
Conclusion: Fully automated aortic diameter measurements in TAA are feasible using a CNN-based algorithm. Automated measurements demonstrated low errors, comparable in magnitude to those of manual raters; however, measurement error was highest in the aortic root and arch.
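The abstract does not give implementation details, so the following Python sketch only illustrates how a cross-sectional diameter might be derived from a segmentation mask at one centerline point: it samples the mask on a plane perpendicular to the local centerline tangent and reports an area-equivalent diameter. The function name, inputs (`mask` in voxel space, `spacing` in mm, `center` and `tangent` in voxel coordinates), and sampling scheme are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: estimate one cross-sectional aortic diameter from a binary
# segmentation mask and a centerline point/tangent (not the authors' implementation).
import numpy as np

def cross_section_diameter(mask, spacing, center, tangent, radius_mm=30.0, step_mm=0.5):
    """Sample `mask` on a plane perpendicular to `tangent` at `center` (voxel coords,
    same axis order as `spacing`) and return an area-equivalent diameter in mm."""
    tangent = tangent / np.linalg.norm(tangent)
    # Build two in-plane unit vectors orthogonal to the tangent.
    u = np.cross(tangent, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-6:
        u = np.cross(tangent, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(tangent, u)

    # Sample a square patch of the cutting plane and count in-lumen samples.
    r = np.arange(-radius_mm, radius_mm + step_mm, step_mm)
    uu, vv = np.meshgrid(r, r)
    pts_mm = uu[..., None] * u + vv[..., None] * v        # in-plane offsets in mm
    pts_vox = center + pts_mm / np.asarray(spacing)       # convert offsets to voxels
    idx = np.round(pts_vox).astype(int)
    valid = np.all((idx >= 0) & (idx < np.array(mask.shape)), axis=-1)
    inside = np.zeros(uu.shape, dtype=bool)
    inside[valid] = mask[tuple(idx[valid].T)] > 0

    area_mm2 = inside.sum() * step_mm * step_mm           # in-plane area of the lumen
    return 2.0 * np.sqrt(area_mm2 / np.pi)                # area-equivalent diameter
```

In practice a clinical tool would report a maximal or orthogonal-pair diameter rather than an area-equivalent one; the sketch only shows the plane-resampling idea.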
Diagnosis of thoracic aortic aneurysm typically involves measuring the diameters at various locations on the aorta from computed tomography angiograms (CTAs). Human measurement is time-consuming and suffers from inter- and intra-user variability, motivating the need for automated, repeatable measurement software. This work presents a convolutional neural network (CNN)-based algorithm for fully automated aortic measurements. We employ the CNN to perform aortic segmentation and localization of key landmarks jointly, which performs better than individual models for each task. The segmentation mask and landmarks are subsequently used to obtain the centerline and cross-sectional diameters of the aorta using a combination of image processing techniques. We gather a dataset of CTAs from patients under ongoing imaging surveillance for thoracic aortic aneurysm and demonstrate the performance of our algorithm by quantitative comparison against measurements from human raters. We observe that for most locations, the mean absolute error between human and computer-generated measurements is less than 1 mm, which is at or below the level of variability among human measurements. Furthermore, we showcase the behavior of our method through various visual examples, discuss its limitations, and propose possible improvements.
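The centerline step is not specified in detail here; one common way to obtain a centerline between two detected landmarks is a minimum-cost path through the inverse of the distance transform, which keeps the path near the middle of the lumen. The sketch below assumes SciPy and scikit-image and hypothetical inputs, and illustrates that general idea rather than the authors' pipeline.

```python
# Hypothetical sketch: trace an approximate centerline between two landmark voxels by
# following a minimum-cost path that prefers voxels deep inside the segmented lumen.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.graph import route_through_array

def extract_centerline(mask, start_vox, end_vox):
    """mask: 3D binary aortic segmentation; start_vox/end_vox: landmark voxel indices."""
    dist = distance_transform_edt(mask)        # distance to the aortic wall
    cost = 1.0 / (dist + 1e-3)                 # cheap in the middle, expensive near the wall
    cost[mask == 0] = 1e6                      # effectively forbid leaving the lumen
    path, _ = route_through_array(cost, start_vox, end_vox,
                                  fully_connected=True, geometric=True)
    return np.array(path)                      # (N, 3) voxel coordinates along the centerline
```

A smoothing spline would typically be fit to the raw path before computing tangents and cutting planes.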
Landmark detection is a critical component of the image processing pipeline for automated aortic size measurements. Given that the thoracic aorta has a relatively conserved topology across the population and that a human annotator with minimal training can estimate the location of unseen landmarks from limited examples, we proposed an auxiliary learning task to learn the implicit topology of aortic landmarks through a CNN-based network. Specifically, we created a network to predict the location of missing landmarks from the visible ones by minimizing the implicit topology loss in an end-to-end manner. The proposed learning task can be easily adapted and combined with U-Net-style backbones. To validate our method, we utilized a dataset consisting of 207 CTAs, labeling four landmarks on each aorta. Our method outperforms state-of-the-art U-Net-style architectures (ResUnet, UnetR) in terms of localization accuracy, with only a lightweight overhead (0.4M additional parameters). We also demonstrate our approach in two clinically meaningful applications: aortic sub-region division and automatic centerline generation.
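The abstract describes the auxiliary objective only at a high level. Below is a minimal sketch of one way such an "implicit topology" loss could be realized, assuming PyTorch, four 3D landmarks, and a small MLP head; the layer sizes, masking scheme, and loss form are assumptions, not the paper's configuration.

```python
# Hypothetical sketch: an auxiliary head tries to recover one hidden landmark from the
# remaining visible ones, encouraging predictions that respect aortic topology.
import torch
import torch.nn as nn

class TopologyHead(nn.Module):
    def __init__(self, n_landmarks=4, dim=3, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_landmarks * dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_landmarks * dim),
        )
        self.n, self.d = n_landmarks, dim

    def forward(self, landmarks, hidden_idx):
        """landmarks: (B, n, d) predicted coordinates; hidden_idx: landmark to mask out."""
        masked = landmarks.clone()
        masked[:, hidden_idx] = 0.0                           # hide one landmark
        pred = self.mlp(masked.flatten(1)).view(-1, self.n, self.d)
        # Penalize only the hidden landmark: can it be inferred from the visible ones?
        return nn.functional.mse_loss(pred[:, hidden_idx], landmarks[:, hidden_idx])
```

In training, the returned value would be added to the primary landmark loss with a small weight, cycling `hidden_idx` over the landmarks.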
Accurate segmentation of the aorta in computed tomography angiography (CTA) images is the first step for analysis of diseases such as aortic aneurysm, but manual segmentation can be prohibitively time-consuming and error-prone. Convolutional neural network (CNN) based models have been utilized for automated segmentation of anatomy in CTA scans, with the ubiquitous U-Net being one of the most popular architectures. Many downstream image analysis tasks (e.g., registration, diameter measurement) may require highly accurate segmentation. In this work, we developed and tested a U-Net model with attention gating for segmentation of the thoracic aorta in clinical CTA data of patients with thoracic aortic aneurysm. Attention gating helps the model focus automatically on difficult-to-segment target structures and has previously been shown to increase segmentation accuracy in other applications. We trained U-Nets both with and without attention gating on 145 CTAs. Performance of the models was evaluated by calculating the DCS and Average Hausdorff Distance (AHD) on a test set of 20 CTAs. We found that the U-Net with attention gating yields more accurate segmentation than the U-Net without attention gating (DCS 0.966±0.028 vs. 0.944±0.022, AHD 0.189±0.134 mm vs. 0.247±0.155 mm). Furthermore, we explored the segmentation accuracy of this U-Net for multi-class labeling of various anatomic segments of the thoracic aorta and found an average DCS of 0.86 across 7 different labels. We conclude that the U-Net with attention gating improves segmentation performance and may aid segmentation tasks that require high levels of accuracy.
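Attention gating here refers to the additive attention gates popularized by Attention U-Net (Oktay et al.); a minimal PyTorch sketch of such a gate is shown below. Channel counts and interpolation choices are illustrative and may differ from the trained model.

```python
# Hypothetical sketch of an additive attention gate applied to a U-Net skip connection.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv3d(skip_ch, inter_ch, kernel_size=1)  # skip-connection features
        self.phi = nn.Conv3d(gate_ch, inter_ch, kernel_size=1)    # coarser gating signal
        self.psi = nn.Conv3d(inter_ch, 1, kernel_size=1)

    def forward(self, x, g):
        # Bring the gating signal to the skip feature resolution, then compute a
        # per-voxel attention coefficient in [0, 1] that rescales the skip features.
        g_up = F.interpolate(self.phi(g), size=x.shape[2:], mode="trilinear",
                             align_corners=False)
        alpha = torch.sigmoid(self.psi(F.relu(self.theta(x) + g_up)))
        return x * alpha
```

The gated skip features then replace the plain skip connection before concatenation in the decoder.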
Thoracic aortic aneurysm (TAA) growth is currently assessed by changes in maximal aortic diameter (i.e., radial growth of the vessel). However, there is growing awareness that longitudinal aortic growth (i.e., elongation) is an important metric of disease status, albeit one that is difficult to measure using current clinical techniques. Previously we have proposed a method to assess 3D changes in aortic wall growth/deformation using deformable image registration with interpolation of the spatial Jacobian determinant to the aortic surface. Here we propose a method to re-orient the Jacobian into directional components relative to the aortic surface rather than the image space, allowing clinicians and researchers to isolate and study the pathologic effects of each directional component of growth separately. To this end, we first perform a deformable image registration between two aortic geometries. Second, we segment the aortic surface and centerline in the fixed image, and use the resulting geometry to construct anatomically-based local coordinate systems at each voxel of the aortic surface. Using the Jacobian matrix field resulting from the deformable registration, we can obtain the anatomically oriented Jacobian components by rotating the Jacobian matrix at each voxel so that it is aligned with the anatomically based local coordinate system. Through experiments on toy cylinders and real clinical cases, we show clear differences between the Jacobian determinant and its directional components, with the directional Jacobian component being able to remove one directional change (e.g., longitudinal) while maintaining the other (e.g., cross-sectional).
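A minimal sketch of the re-orientation step at a single surface voxel is given below, assuming NumPy, a precomputed 3x3 Jacobian `J` from the registration, a centerline tangent, and an outward surface normal. The basis construction and the components reported are illustrative rather than the paper's exact formulation.

```python
# Hypothetical sketch: express the registration Jacobian at one aortic surface voxel in
# an anatomically based local frame (longitudinal, circumferential, radial).
import numpy as np

def reorient_jacobian(J, longitudinal, radial):
    """J: 3x3 spatial Jacobian in image coordinates; longitudinal: centerline tangent;
    radial: outward surface normal (both roughly unit-length, image coordinates)."""
    e_long = longitudinal / np.linalg.norm(longitudinal)
    e_rad = radial - np.dot(radial, e_long) * e_long      # orthogonalize against the tangent
    e_rad /= np.linalg.norm(e_rad)
    e_circ = np.cross(e_long, e_rad)                      # circumferential direction
    R = np.column_stack([e_long, e_circ, e_rad])          # local -> image rotation
    J_local = R.T @ J @ R                                 # Jacobian in the local frame
    # Directional stretch along each anatomical axis (diagonal entries):
    return {"longitudinal": J_local[0, 0],
            "circumferential": J_local[1, 1],
            "radial": J_local[2, 2]}
```

Repeating this at every surface voxel yields directional growth maps that can be studied separately from the overall Jacobian determinant.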
Accurate and artifact-free reconstruction of tomographic images requires precise knowledge of the imaging system geometry. A projection matrix-based calibration method to enable C-arm inverse geometry CT (IGCT) is proposed. The method is evaluated for scanning-beam digital x-ray (SBDX), a C-arm mounted inverse geometry fluoroscopic technology. A helical configuration of fiducials is imaged at each gantry angle in a rotational acquisition. For each gantry angle, digital tomosynthesis is performed at multiple planes and a composite image analogous to a cone-beam projection is generated from the plane stack. The geometry of the C-arm, source array, and detector array is determined at each angle by constructing a parameterized three-dimensional-to-two-dimensional projection matrix that minimizes the sum-of-squared deviations between measured and projected fiducial coordinates. Simulations were used to evaluate calibration performance with translations and rotations of the source and detector. The relative root-mean-square error in a reconstruction of a numerical thorax phantom was 0.4% using the calibration method versus 7.7% without calibration. In phantom studies, reconstruction of SBDX projections using the proposed method eliminated artifacts present in noncalibrated reconstructions. The proposed IGCT calibration method reduces image artifacts when uncertainties exist in system geometry.
Accurate and artifact-free reconstruction of tomographic images requires precise knowledge of the imaging system geometry. This work proposes a novel projection matrix (P-matrix) based calibration method to enable C-arm inverse geometry CT (IGCT). The method is evaluated for scanning-beam digital x-ray (SBDX), a C-arm mounted inverse geometry fluoroscopic technology. A helical configuration of fiducials is imaged at each gantry angle in a rotational acquisition. For each gantry angle, digital tomosynthesis is performed at multiple planes and a composite image analogous to a cone-beam projection is generated from the plane stack. The geometry of the C-arm, source array, and detector array is determined at each angle by constructing a parameterized 3D-to-2D projection matrix that minimizes the sum-of-squared deviations between measured and projected fiducial coordinates. Simulations were used to evaluate calibration performance with translations and rotations of the source and detector. In a geometry with 1 mm translation of the central ray relative to the axis-of-rotation and 1 degree yaw of the detector and source arrays, the maximum error in the recovered translational parameters was 0.4 mm and the maximum error in the rotation parameter was 0.02 degrees. The relative root-mean-square error in a reconstruction of a numerical thorax phantom was 0.4% using the calibration method, versus 7.7% without calibration. Changes in source-detector distance were the most challenging to estimate. Reconstruction of experimental SBDX data using the proposed method eliminated double contour artifacts present in a non-calibrated reconstruction. The proposed IGCT geometric calibration method reduces image artifacts when uncertainties exist in system geometry.
KEYWORDS: 3D modeling, X-rays, Visualization, 3D image processing, Image registration, Instrument modeling, Motion models, 3D image reconstruction, Computer simulations, Detection and tracking algorithms, 3D acquisition
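A minimal sketch of the per-gantry-angle geometric fit described above is shown below, assuming SciPy and a generic pinhole-style parameterization (three rotations, three translations, and a source-to-detector distance). The true SBDX source-array/detector-array model is more involved, so the parameterization and function names here are illustrative only.

```python
# Hypothetical sketch: fit geometric parameters for one gantry angle by minimizing the
# sum-of-squared deviations between measured and projected fiducial coordinates.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, fiducials_3d, measured_2d):
    rx, ry, rz, tx, ty, tz, sdd = params                   # rotations, translations, source-detector distance
    R = Rotation.from_euler("xyz", [rx, ry, rz]).as_matrix()
    pts = fiducials_3d @ R.T + np.array([tx, ty, tz])      # world -> gantry frame
    proj = sdd * pts[:, :2] / pts[:, 2:3]                  # perspective projection onto the detector
    return (proj - measured_2d).ravel()

def calibrate_view(fiducials_3d, measured_2d, x0):
    """Least-squares fit of the 7 geometric parameters for one gantry angle,
    starting from an initial guess x0 (e.g., the nominal system geometry)."""
    fit = least_squares(reprojection_residuals, x0, args=(fiducials_3d, measured_2d))
    return fit.x
```

The fitted parameters (or the equivalent 3x4 projection matrix) would then be passed to the reconstruction for that view.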
Transcatheter aortic valve replacement (TAVR) requires navigation and deployment of a prosthetic valve within the aortic annulus under fluoroscopic guidance. To support improved device visualization in this procedure, this study investigates the feasibility of frame-by-frame 3D reconstruction of a moving and expanding prosthetic valve structure from simultaneous bi-plane x-ray views. In the proposed method, a dynamic 3D model of the valve is used in a 2D/3D registration framework to obtain a reconstruction of the valve. For each frame, valve model parameters describing position, orientation, expansion state, and deformation are iteratively adjusted until forward projections of the model match both bi-plane views. Simulated bi-plane imaging of a valve at different signal-difference-to-noise ratio (SDNR) levels was performed to test the approach. Twenty image sequences, each with 50 frames of valve deployment, were simulated at each SDNR. The simulation achieved a target registration error (TRE) of the estimated valve model of 0.93 ± 2.6 mm (mean ± S.D.) for the lowest SDNR of 2. For higher SDNRs (5 to 50), a TRE of 0.04 ± 0.23 mm was achieved. A tabletop phantom study was then conducted using a TAVR valve. The dynamic 3D model was constructed from high resolution CT scans and a simple expansion model. TRE was 1.22 ± 0.35 mm for expansion states varying from undeployed to fully deployed, and for moderate amounts of inter-frame motion. Results indicate that it is feasible to use bi-plane imaging to recover the 3D structure of deformable catheter devices.
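A minimal sketch of the frame-wise 2D/3D fit is given below, assuming NumPy/SciPy, a toy stent-like point model of the valve, and an intensity-based score summed over both views. The valve model, cost function, and optimizer are simplified stand-ins for the method described above.

```python
# Hypothetical sketch: adjust valve pose + expansion so that the projected model agrees
# with both bi-plane images, using a simple summed-intensity score.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def valve_points(expansion, n_struts=8, n_rows=5, height=20.0):
    """Toy stent-like frame: rings of points whose radius grows with `expansion` (0..1)."""
    z = np.linspace(0.0, height, n_rows)
    ang = np.linspace(0.0, 2 * np.pi, n_struts, endpoint=False)
    zz, aa = np.meshgrid(z, ang)
    r = 4.0 + 8.0 * expansion                              # radius (mm) as the valve deploys
    return np.stack([r * np.cos(aa), r * np.sin(aa), zz], axis=-1).reshape(-1, 3)

def image_score(params, images, proj_mats):
    """Negative summed image intensity at projected model points over both views."""
    rx, ry, rz, tx, ty, tz, expansion = params
    pts = valve_points(np.clip(expansion, 0.0, 1.0))
    pts = pts @ Rotation.from_euler("xyz", [rx, ry, rz]).as_matrix().T + np.array([tx, ty, tz])
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    score = 0.0
    for img, P in zip(images, proj_mats):                  # one 3x4 matrix per bi-plane view
        uvw = pts_h @ P.T
        uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
        ok = ((uv[:, 0] >= 0) & (uv[:, 0] < img.shape[1]) &
              (uv[:, 1] >= 0) & (uv[:, 1] < img.shape[0]))
        score += img[uv[ok, 1], uv[ok, 0]].sum()
    return -score

# Per frame, warm-started from the previous frame's estimate:
# result = minimize(image_score, x0, args=(biplane_images, projection_matrices),
#                   method="Nelder-Mead")
```

A derivative-free optimizer is used here because the rounded pixel lookup makes the score non-smooth; the actual method would use a richer valve model and similarity metric.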
We present a novel 2D/3D registration algorithm for fusion between transesophageal echocardiography (TEE) and X-ray fluoroscopy (XRF). The TEE probe is modeled as a subset of 3D gradient and intensity point features, which facilitates efficient 3D-to-2D perspective projection. A novel cost function, based on a combination of intensity and edge features, evaluates the registration cost value without the need for time-consuming generation of digitally reconstructed radiographs (DRRs). Validation experiments were performed with simulations and phantom data. For simulations, in silico XRF images of a TEE probe were generated in a number of different pose configurations using a previously acquired CT image. Random misregistrations were applied and our method was used to recover the TEE probe pose and compare the result to the ground truth. Phantom experiments were performed by attaching fiducial markers externally to a TEE probe, imaging the probe with an interventional cardiac angiographic x-ray system, and comparing the pose estimated from the external markers to that estimated from the TEE probe using our algorithm. Simulations found a 3D target registration error of 1.08 (1.92) mm for biplane (monoplane) geometries, while the phantom experiment found a 2D target registration error of 0.69 mm. For phantom experiments, we demonstrated a monoplane tracking frame rate of 1.38 fps. The proposed feature-based registration method is computationally efficient, resulting in near real-time, accurate image-based registration between TEE and XRF.
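A minimal sketch of such a feature-based cost is given below, assuming NumPy/SciPy, a 3x4 projection matrix for the current pose estimate, and per-point intensity and edge weights stored with the probe model. The weighting and normalization are assumptions and do not reproduce the authors' cost function.

```python
# Hypothetical sketch: score a candidate probe pose by projecting its 3D point features
# into the x-ray image and sampling the image and its edge map, avoiding DRR generation.
import numpy as np
from scipy.ndimage import sobel

def feature_cost(pts_3d, w_intensity, w_edge, P, xray):
    """pts_3d: (N,3) probe model points; w_intensity/w_edge: per-point feature weights;
    P: 3x4 projection matrix for the current pose; xray: 2D fluoroscopy frame."""
    edge = np.hypot(sobel(xray, axis=0), sobel(xray, axis=1))   # gradient-magnitude image
    pts_h = np.hstack([pts_3d, np.ones((len(pts_3d), 1))])
    uvw = pts_h @ P.T
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    ok = ((uv[:, 0] >= 0) & (uv[:, 0] < xray.shape[1]) &
          (uv[:, 1] >= 0) & (uv[:, 1] < xray.shape[0]))
    # The radiopaque probe appears dark: good poses place intensity-weighted points on
    # low-intensity pixels and edge-weighted points on strong gradients, so both terms
    # drive the cost down.
    return ((w_intensity[ok] * xray[uv[ok, 1], uv[ok, 0]]).sum()
            - (w_edge[ok] * edge[uv[ok, 1], uv[ok, 0]]).sum())
```

The pose parameters that generate `P` would be optimized to minimize this cost, with the monoplane case using one view and the biplane case summing over two.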