KEYWORDS: Digital breast tomosynthesis, Breast cancer, Data fusion, Image fusion, Image processing, Magnetic resonance imaging, Mammography, Image registration, Machine learning, Breast imaging
Machine learning (ML) has made great advances in imaging for breast cancer detection, including reducing radiologists' read times, yet its performance is still reported to be at best similar to that of expert radiologists. This leaves a performance gap between what radiologists desire and what can actually be achieved in terms of early detection, reduction of excessive false positives, and minimization of unnecessary biopsies. We have seen a similar situation in military intelligence, expressed by operators as “drowning in data and starving for information”. We invented Upstream Data Fusion (UDF) to help fill the gap. ML is used to produce candidate detections for individual sensing modalities with high detection rates and high false positive rates. Data fusion is used to combine modalities and dramatically reduce false positives. Upstream data, which is closer to raw data, is hard for operators to visualize; yet it is used for fusion to recover information that would otherwise be lost by the processing applied to make it visually acceptable to humans. Our research on breast cancer detection involving the fusion of Digital Breast Tomosynthesis (DBT) with Magnetic Resonance Imaging (MRI), and also the fusion of DBT with ultrasound (US) data, has yielded preliminary results which lead us to conclude that UDF can help both fill the performance gap and reduce radiologist read time. Our findings suggest that UDF, combined with ML techniques, can result in paradigm changes in the achievable accuracy and efficiency of early breast cancer detection.
The accelerating complexity and variety of medical imaging devices and methods have outpaced the ability to evaluate and optimize their design and clinical use. This is a significant and increasing challenge for both scientific investigations and clinical applications. Evaluations would ideally be done using clinical imaging trials. These experiments, however, are often not practical due to ethical limitations, expense, time requirements, or lack of ground truth. Virtual clinical trials (VCTs), also known as in silico imaging trials or virtual imaging trials, offer an alternative means to evaluate medical imaging technologies efficiently. They do so by simulating the patients, imaging systems, and interpreters. The field of VCTs has advanced steadily over the past decades in multiple areas. We summarize the major developments and current status of the field of VCTs in medical imaging. We review the core components of a VCT: computational phantoms, simulators of different imaging modalities, and interpretation models. We also highlight some of the applications of VCTs across various imaging modalities.
This research aims to develop a new feature-guided motion estimation method for the left ventricular wall in gated cardiac imaging. The guiding feature is the “footprint” of one of the papillary muscles, which is the attachment of the papillary muscle on the endocardium. Myocardial perfusion (MP) PET images simulated from the 4-D XCAT phantom, which features papillary muscles and realistic cardiac motion with a known motion vector field (MVF), were employed in the study. The 4-D MVF of the heart model of the XCAT phantom was used as a reference. For each MP PET image, the 3-D “footprint” surface of one of the papillary muscles was extracted and its centroid was calculated. The motion of the centroid of the “footprint” throughout a cardiac cycle was tracked and analyzed in 4-D. This motion was extrapolated throughout the entire heart to build a papillary-muscle-guided initial estimate of the 4-D cardiac MVF. A previous motion estimation algorithm was applied to the simulated gated myocardial PET images to estimate the MVF. Three different initial MVF estimates were used in the estimation: zero initial (0-initial), the papillary-muscle-guided initial (P-initial), and the true MVF from the phantom (T-initial). Qualitative and quantitative comparison between the estimated MVFs and the true MVF showed that the P-initial provided more accurate motion estimation in longitudinal motion than the 0-initial, with over 70% improvement, and accuracy comparable to that of the T-initial. We conclude that when the footprint can be tracked accurately, this feature-guided approach can significantly improve the accuracy and robustness of traditional optical-flow-based motion estimation methods.
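The sketch below illustrates, under simplifying assumptions, how a footprint-centroid track could be turned into an initial MVF estimate: the centroid displacement relative to the reference gate is applied uniformly to all voxels as a crude starting guess. The array names and the segmentation of the footprint mask are placeholders, not the authors' implementation.

```python
# Illustrative sketch: papillary-muscle-guided initial MVF from centroid tracking.
import numpy as np

def footprint_centroid(mask):
    """Centroid (z, y, x) of a binary 3-D footprint mask."""
    idx = np.argwhere(mask)
    return idx.mean(axis=0)

def papillary_guided_initial_mvf(footprint_masks, image_shape):
    """footprint_masks: list of binary masks, one per cardiac gate.
    Returns a (gates, 3, *image_shape) array of initial motion vectors:
    the centroid displacement relative to gate 0, broadcast to every voxel."""
    c0 = footprint_centroid(footprint_masks[0])
    gates = len(footprint_masks)
    mvf = np.zeros((gates, 3) + tuple(image_shape))
    for g, mask in enumerate(footprint_masks):
        disp = footprint_centroid(mask) - c0        # 3-vector per gate
        mvf[g] = disp[:, None, None, None]          # broadcast to all voxels
    return mvf
```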
The x-ray spectrum recorded by a photon-counting x-ray detector (PCXD) is distorted by the following physical effects, which are independent of the count rate: finite energy resolution, Compton scattering, charge sharing, and K-escape. If left uncompensated, the spectral response (SR) of a PCXD due to the above effects will result in image artifacts and inaccurate material decomposition. We propose a new SR compensation (SRC) algorithm using the sinogram restoration approach. The two main contributions of our proposed algorithm are: (1) it uses an efficient conjugate gradient method in which the first and second derivatives of the cost functions are calculated analytically, whereas a slower optimization method that requires numerous function evaluations was used in other work; (2) it guarantees convergence by combining the non-linear conjugate gradient method with line searches that satisfy the Wolfe conditions, whereas the algorithm in other work is not backed by theorems from optimization theory that guarantee convergence. In this study, we validate the performance of the proposed algorithm using computer simulations. The bias was reduced from 11% to zero, and image artifacts were removed from the reconstructed images. Quantitative K-edge imaging is possible only when SR compensation is performed.
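The following is a minimal sketch of the optimization machinery the abstract relies on: a non-linear conjugate gradient iteration combined with a Wolfe-condition line search, applied here to a generic smooth cost function with an analytic gradient. The SRC cost itself is not reproduced; the quadratic toy cost and variable names are assumptions for illustration.

```python
# Non-linear conjugate gradient with a Wolfe-condition line search (sketch).
import numpy as np
from scipy.optimize import line_search

def nonlinear_cg(f, grad, x0, n_iter=50, tol=1e-8):
    x = x0.copy()
    g = grad(x)
    d = -g                                           # initial search direction
    for _ in range(n_iter):
        alpha = line_search(f, grad, x, d)[0]        # step satisfying Wolfe conditions
        if alpha is None:
            break
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere+
        d = -g_new + beta * d
        x, g = x_new, g_new
        if np.linalg.norm(g) < tol:
            break
    return x

# Toy usage on a quadratic cost with analytic gradient:
A = np.array([[3.0, 1.0], [1.0, 2.0]]); b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
print(nonlinear_cg(f, grad, np.zeros(2)))   # approaches the minimizer A^{-1} b
```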
In clinical computed tomography (CT), images from patient examinations taken with conventional scanners exhibit noise characteristics governed by electronics noise when scanning strongly attenuating obese patients or when scanning with an ultra-low X-ray dose. Unlike CT systems based on energy-integrating detectors, a system with a quantum counting detector does not suffer from this drawback. Instead, the noise from the electronics mainly affects the spectral resolution of these detectors and therefore does not contribute to the image noise in spectrally non-resolved CT images. This promises improved image quality due to image noise reduction in clinical CT examinations with the lowest X-ray tube currents or with obese patients. To quantify the benefits of quantum counting detectors in clinical CT, we have carried out an extensive simulation study of the complete scanning and reconstruction process for both kinds of detectors. The simulation chain encompasses modeling of the X-ray source, beam attenuation in the patient, and calculation of the detector response; in each case the subsequent image preprocessing and reconstruction is modeled as well. The simulation-based, theoretical evaluation is validated by experiments with a novel prototype quantum counting system and a Siemens Definition Flash scanner with a conventional energy-integrating CT detector. We demonstrate and quantify the improvement from image noise reduction achievable with quantum counting techniques in CT examinations with ultra-low X-ray dose and strong attenuation.
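The toy comparison below illustrates the underlying noise argument, not the study's simulation chain: an energy-integrating signal carries additive electronic noise, while an idealized photon-counting signal rejects it by thresholding and stays Poisson-limited at low counts. All numbers are arbitrary placeholders.

```python
# Illustrative noise comparison: energy-integrating vs. photon-counting detector.
import numpy as np

rng = np.random.default_rng(0)
n_mean = 20            # mean detected photons per reading (photon-starved regime)
energy_keV = 60.0      # single effective photon energy for simplicity
sigma_e = 300.0        # assumed electronic noise std dev in EID signal units

counts = rng.poisson(n_mean, size=100_000)

eid_signal = counts * energy_keV + rng.normal(0.0, sigma_e, counts.shape)
pcd_signal = counts                       # thresholded: no electronic noise term

print("EID relative noise:", eid_signal.std() / eid_signal.mean())
print("PCD relative noise:", pcd_signal.std() / pcd_signal.mean())
# At low counts the additive sigma_e dominates the EID noise, while the PCD
# noise remains governed by Poisson counting statistics alone.
```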
The aim of this research is to develop a complete CT/human-model simulation package by integrating the 4D eXtended CArdiac-Torso (XCAT) phantom, a computer-generated NURBS-surface-based phantom that provides a realistic model of human anatomy and respiratory and cardiac motions, and the DRASIM (Siemens Healthcare) CT-data simulation program. Unlike other CT simulation tools, which are based on simple mathematical primitives or voxelized phantoms, this new simulation package has the advantages of utilizing a realistic model of human anatomy and physiological motions without voxelization and with accurate modeling of the characteristics of clinical Siemens CT systems. First, we incorporated the 4D XCAT anatomy and motion models into DRASIM by implementing a new library which consists of functions to read in the NURBS surfaces of anatomical objects and their overlapping order and material properties in the XCAT phantom. Second, we incorporated an efficient ray-tracing algorithm for line integral calculation in DRASIM by computing the intersection points of the rays cast from the x-ray source to the detector elements through the NURBS surfaces of the multiple XCAT anatomical objects along the ray paths. Third, we evaluated the integrated simulation package by performing a number of sample simulations of multiple x-ray projections from different views followed by image reconstruction. The initial simulation results were found to be promising by qualitative evaluation. In conclusion, we have developed a unique CT/human-model simulation package which has great potential as a tool in the design and optimization of CT scanners, and in the development of scanning protocols and image reconstruction methods for improving CT image quality and reducing radiation dose.
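A simplified sketch of the line-integral step is given below: given the sorted entry/exit distances where a ray crosses each object (here supplied by a hypothetical `intersect` callable standing in for the NURBS ray tracing), the line integral accumulates attenuation coefficient times path length, with the overlapping order deciding which material fills each segment. This is an assumption-laden illustration, not the DRASIM/XCAT implementation.

```python
# Sketch: line integral along one ray from surface-intersection distances.
import numpy as np

def ray_line_integral(objects, src, direction):
    """objects: list of dicts with keys 'mu' (1/cm), 'priority' (higher wins when
    objects overlap), and 'intersect' (callable returning sorted entry/exit
    distances along the ray, e.g. [t_in, t_out, ...])."""
    events = []                                       # (distance, +1 enter / -1 exit, object index)
    for k, obj in enumerate(objects):
        ts = obj['intersect'](src, direction)
        for t_in, t_out in zip(ts[0::2], ts[1::2]):
            events += [(t_in, +1, k), (t_out, -1, k)]
    events.sort()
    inside, integral, t_prev = set(), 0.0, None
    for t, flag, k in events:
        if inside and t_prev is not None:
            top = max(inside, key=lambda i: objects[i]['priority'])
            integral += objects[top]['mu'] * (t - t_prev)   # mu of topmost object times path length
        if flag > 0:
            inside.add(k)
        else:
            inside.discard(k)
        t_prev = t
    return integral
```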
We present an efficient scheme for the forward and backward projector implementation for helical cone-beam
x-ray CT reconstruction using a pre-calculated and stored system matrix approach. Because of the symmetry of
a helical source trajectory, it is sufficient to calculate and store the system matrix entries for one image slice only
and for all source positions illuminating it. The system matrix entries for other image slices are copies of those
stored values. In implementing an iterative image reconstruction method, the internal 3D image volume can be
based on a non-Cartesian grid so that no system matrix interpolation is needed for the repeated forward and
backward projection calculation. Using the proposed scheme, the memory requirement for the reconstruction of
a full field-of-view of clinical scanners is manageable on current computing platforms. The same storage principle
can be generalized and applied to iterative volume-of-interest image reconstruction for helical cone-beam CT.
We demonstrate by both computer simulations and clinical patient data the speed and image quality of VOI
image reconstruction using the proposed stored system matrix approach. We believe the proposed method may
contribute to bringing iterative reconstruction into clinical practice.
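The sketch below illustrates the storage principle under simplified assumptions: a sparse system matrix is pre-computed for a single reference slice and all views that illuminate it, and reused for every other slice with a helical view shift. The dimensions, the view increment per slice, and the random placeholder matrix are assumptions, not the paper's geometry.

```python
# Sketch: forward projection reusing a stored single-slice system matrix.
import numpy as np
from scipy.sparse import random as sparse_random

n_views_per_slice = 64          # source positions illuminating one slice (assumed)
n_dets = 128                    # detector bins per view (assumed)
n_vox_slice = 32 * 32           # voxels in one image slice (assumed)
views_per_slice_advance = 8     # view-index shift between adjacent slices (assumed)

# Pre-calculated, stored system matrix for the reference slice only:
A_ref = sparse_random(n_views_per_slice * n_dets, n_vox_slice,
                      density=0.01, random_state=0, format='csr')

def forward_project(volume, n_total_views):
    """volume: (n_slices, n_vox_slice). Returns a sinogram (n_total_views, n_dets)."""
    n_slices = volume.shape[0]
    sino = np.zeros((n_total_views, n_dets))
    for s in range(n_slices):
        first_view = s * views_per_slice_advance             # helical symmetry: shifted views
        contrib = (A_ref @ volume[s]).reshape(n_views_per_slice, n_dets)
        rows = (first_view + np.arange(n_views_per_slice)) % n_total_views
        sino[rows] += contrib                                 # copied entries, no re-computation
    return sino
```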
We describe the continuing design and development of MR-compatible SPECT systems for simultaneous SPECT-MR imaging of small animals. A first-generation prototype SPECT system was designed and constructed to fit inside an MRI system with a gradient bore inner diameter of 12 cm. It consists of 3 angularly offset rings of 8 detectors (1"x1", 16x16 pixels, MR-compatible solid-state CZT). A matching 24-pinhole collimator sleeve, made of a tungsten compound, provides projections from a common FOV of ~25 mm. A birdcage RF coil for MRI data acquisition surrounds the collimator. The SPECT system was tested inside a clinical 3T MRI system. Minimal interference was observed on the simultaneously acquired SPECT and MR images. We developed a sparse-view image reconstruction method based on accurate modeling of the point response function (PRF) of each of the 24 pinholes to provide artifact-free SPECT images. The stationary SPECT system provides relatively low resolution of 3-5 mm but high geometric efficiency of 0.5-1.2% for fast dynamic acquisition, demonstrated in a SPECT renal kinetics study using Tc-99m DTPA. Based on these results, a second-generation prototype MR-compatible SPECT system with an outer diameter of 20 cm that fits inside a mid-sized preclinical MRI system is being developed. It consists of 5 rings of 19 CZT detectors. The larger ring diameter allows the use of optimized multi-pinhole collimator designs offering, for example, system resolution up to ~1 mm, higher geometric efficiency, or lower system resolution without collimator rotation. The anticipated performance of the new system is supported by simulation data.
We propose the use of a time-of-flight (TOF) camera to obtain the patient's body contour in a 3D-guided image reconstruction scheme for CT and C-arm imaging systems with truncated projections. In addition to pixel intensity, a TOF camera provides the 3D coordinates of each point in the captured scene with respect to the camera coordinates. Information from the TOF camera was used to obtain a digitized surface of the patient's body. The digitization points are transformed to X-ray detector coordinates by registering the two coordinate systems. A set of points corresponding to the slice of interest is segmented to form a 2D contour of the body surface. The Radon transform is applied to the contour to generate the 'trust region' for the projection data. The generated 'trust region' is integrated as an input to augment the projection data and is used to estimate the truncated, unmeasured projections using linear interpolation. Finally, the image is reconstructed using the combination of the estimated and the measured projection data. The proposed method is evaluated using a physical phantom. Projection data for the phantom were obtained using a C-arm system. Significant improvement in the reconstructed image quality near the truncation edges was observed using the proposed method compared to that without truncation correction. This work shows that the proposed 3D-guided CT image reconstruction using a TOF camera represents a feasible solution to the projection data truncation problem.
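A hedged sketch of the trust-region idea follows: rasterize the body contour into a binary slice mask, take its Radon transform to find the sinogram support per view, and extrapolate a laterally truncated sinogram with simple linear ramps that fall to zero at the trust-region edge. The orientation conventions, function names, and ramp model are illustrative assumptions.

```python
# Sketch: sinogram 'trust region' from a body-contour mask and linear extrapolation.
import numpy as np
from skimage.transform import radon

def trust_region_support(body_mask, theta):
    """body_mask: 2D binary image of the slice interior; theta: view angles (deg).
    Returns, per view, the first/last detector bins where the object can contribute."""
    sino_mask = radon(body_mask.astype(float), theta=theta, circle=False) > 0
    return [(np.flatnonzero(col)[0], np.flatnonzero(col)[-1]) if col.any() else (0, -1)
            for col in sino_mask.T]

def extend_truncated_sinogram(sino, measured_lo, measured_hi, support):
    """sino: (n_views, n_dets) measured (truncated) sinogram; measured_lo/hi: indices
    of the first/last measured bins. Ramp linearly to zero at the trust-region edge."""
    out = sino.copy()
    for v, (lo, hi) in enumerate(support):
        if lo < measured_lo:                       # left extrapolation for view v
            out[v, lo:measured_lo] = np.linspace(0.0, sino[v, measured_lo],
                                                 measured_lo - lo, endpoint=False)
        if hi > measured_hi:                       # right extrapolation for view v
            out[v, measured_hi + 1:hi + 1] = np.linspace(sino[v, measured_hi], 0.0,
                                                         hi - measured_hi + 1)[1:]
    return out
```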
The overall aim of this work was to evaluate the potential for improving in vivo small animal microCT through the use of an energy-resolved photon-counting detector. To this end, we developed and evaluated a prototype microCT system based on a second-generation photon-counting x-ray detector which simultaneously counted photons with energies above six energy thresholds. First, we developed a threshold tuning procedure to improve detector uniformity and reduce ring artifacts. Next, we evaluated the system in terms of the contrast-to-noise ratio in different energy windows for different target materials. These differences provided the possibility to weight the data acquired in different windows in order to optimize the contrast-to-noise ratio. We also explored the ability of the system to use data from different energy windows to aid in distinguishing various materials. We found that the energy discrimination capability provided the possibility for improved contrast-to-noise ratios and allowed separation of more than two materials, e.g., bone, soft tissue, and one or more contrast materials having K-absorption edges in the energy ranges of interest.
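The sketch below shows one common way such a weighting could be done; it is an assumption rather than the authors' exact scheme. With statistically independent windows, weighting each window image by contrast divided by noise variance maximizes the CNR of the weighted sum.

```python
# Sketch: CNR-oriented weighting of per-energy-window images.
import numpy as np

def cnr_optimal_weights(contrasts, variances):
    """contrasts, variances: per-energy-window contrast and noise variance."""
    w = np.asarray(contrasts) / np.asarray(variances)
    return w / w.sum()

def weighted_image(window_images, weights):
    """window_images: array of shape (n_windows, H, W)."""
    return np.tensordot(weights, window_images, axes=1)

# Example with made-up per-window contrast and variance values:
w = cnr_optimal_weights([0.9, 0.6, 0.4, 0.2, 0.1], [1.0, 0.8, 0.7, 0.9, 1.5])
print(w)
```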
The goal of the study was to investigate data acquisition strategies and image reconstruction methods for a stationary SPECT insert that can operate inside an MRI scanner with a 12 cm bore diameter for simultaneous SPECT/MRI imaging of small animals. The SPECT insert consists of 3 octagonal rings of 8 MR-compatible CZT detectors per ring surrounding a multi-pinhole (MPH) collimator sleeve. Each pinhole is constructed to project the field-of-view (FOV) to one CZT detector. All 24 pinholes are focused to a cylindrical FOV of 25 mm in diameter and 34 mm in length. The data acquisition strategies we evaluated were optional collimator rotations to improve tomographic sampling; and the image reconstruction methods were iterative
ML-EM with and without compensation for the geometric response function (GRF) of the MPH collimator.
For this purpose, we developed an analytic simulator that calculates the system matrix with the GRF models
of the MPH collimator. The simulator was used to generate projection data of a digital rod phantom with
pinhole aperture sizes of 1 mm and 2 mm and with different collimator rotation patterns. Iterative ML-EM
reconstruction with and without GRF compensation were used to reconstruct the projection data from the
central ring of 8 detectors only, and from all 24 detectors. Our results indicated that without GRF compensation
and at the default design of 24 projection views, the reconstructed images had significant artifacts. Accurate
GRF compensation substantially improved the reconstructed image resolution and reduced image artifacts. With accurate GRF compensation, useful reconstructed images can be obtained using 24 projection views only. This last finding potentially enables dynamic SPECT (and/or MRI) studies in small animals, one of many possible application areas of the SPECT/MRI system. Further research efforts are warranted including experimentally measuring the system matrix for improved geometrical accuracy, incorporating the co-registered MRI image in SPECT reconstruction, and exploring potential applications of the simultaneous SPECT/MRI SA system including dynamic SPECT studies.
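For reference, the classic ML-EM update used in such reconstructions is sketched below with a random placeholder system matrix; in the study the matrix would be computed with (or without) the pinhole geometric response model.

```python
# Sketch: ML-EM multiplicative update for emission tomography.
import numpy as np

def ml_em(A, y, n_iter=50, eps=1e-12):
    """A: (n_bins, n_voxels) system matrix; y: measured counts (n_bins,)."""
    x = np.ones(A.shape[1])                  # uniform nonnegative start
    sens = A.sum(axis=0) + eps               # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x + eps                   # forward projection
        x *= (A.T @ (y / proj)) / sens       # back-project the ratio and normalize
    return x

# Toy usage with simulated Poisson data:
rng = np.random.default_rng(1)
A = rng.random((200, 64))
x_true = rng.random(64)
y = rng.poisson(A @ x_true * 10.0)
x_hat = ml_em(A, y)                          # estimates x_true up to the scale factor
```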
This work studies the dual formulation of a penalized maximum likelihood reconstruction problem in x-ray CT. The primal objective function is a Poisson log-likelihood combined with a weighted cross-entropy penalty term. The dual formulation of the primal optimization problem is then derived and the optimization procedure outlined. The dual formulation better exploits the structure of the problem, which translates to faster convergence of iterative reconstruction algorithms. A gradient descent algorithm is implemented for solving the dual problem and its performance is compared with the filtered back-projection algorithm and with the primal formulation optimized by using surrogate functions. The 3D XCAT phantom and an analytical x-ray CT simulator are used to generate noise-free and noisy CT projection data sets with monochromatic and polychromatic x-ray spectra. The reconstructed images from the dual formulation delineate the internal structures at early iterations better than the primal formulation using surrogate functions; however, the body contour is slower to converge in the dual than in the primal formulation. The dual formulation demonstrates a better noise-resolution tradeoff near the internal organs than the primal formulation. Since the surrogate functions in general can provide a diagonal approximation of the Hessian matrix of the objective function, further convergence speed-up may be achieved by deriving the surrogate function of the dual objective function.
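For concreteness, a sketch of the form of the primal objective being dualized is given below. The transmission-count model, the penalty weighting, and the reference image are assumptions for illustration; the dual derivation itself is not reproduced here.

```python
# Sketch: penalized negative Poisson log-likelihood with a weighted cross-entropy penalty.
import numpy as np

def primal_objective(x, A, y, b, beta, w, r, eps=1e-12):
    """x: attenuation image, A: (n_rays, n_vox) system matrix, y: measured counts,
    b: blank-scan counts, beta: penalty strength, w: per-voxel weights, r: reference image."""
    ybar = b * np.exp(-(A @ x))                          # expected transmission counts
    neg_loglik = np.sum(ybar - y * np.log(ybar + eps))   # Poisson, up to a constant
    penalty = beta * np.sum(w * x * np.log((x + eps) / (r + eps)))  # weighted cross-entropy
    return neg_loglik + penalty
```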
We report on a characterization study of a multi-row direct-conversion x-ray detector used to generate the first photon counting clinical x-ray computed tomography (CT) patient images. In order to provide the photon counting detector with adequate performance for low-dose CT applications, we have designed and fabricated a fast application specific integrated circuit (ASIC) for data readout from the pixellated CdTe detectors that comprise the photon counting detector. The cadmium telluride (CdTe) detector has 512 pixels with a 1 mm pitch and is vertically integrated with the ASIC readout so it can be tiled in two dimensions, similar to the detector arcs found in 32-row multi-slice CT systems. We have measured several important detector parameters including the maximum output count rate, energy resolution, and noise performance. Additionally, the relationship between the output and input rate has been found to fit a non-paralyzable detector model with a dead time of 160 ns. A maximum output rate of 6 × 10^6 counts per second per pixel has been obtained with a low-output x-ray tube for CT operated between 0.01 mA and 6 mA at 140 keV and different source-to-detector distances. All detector noise counts are less than 20 keV, which is sufficiently low for clinical CT. The energy resolution measured with the 60 keV photons from a 241Am source is ~12%. In conclusion, our results demonstrate the potential for the application of the CdTe based photon counting detector to clinical CT systems. Our future plans include further performance improvement by incorporating drift structures into each detector pixel.
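The non-paralyzable dead-time model mentioned above is simple enough to state directly: the observed rate m relates to the true incident rate n by m = n / (1 + n·τ), which inverts to n = m / (1 − m·τ) for dead-time correction. The sketch uses the reported 160 ns dead time; the example rate is arbitrary.

```python
# Sketch: non-paralyzable dead-time model and its inversion.
TAU = 160e-9            # dead time in seconds (value reported in the abstract)

def observed_rate(n):
    """Observed count rate for true rate n (counts/s), non-paralyzable model."""
    return n / (1.0 + n * TAU)

def dead_time_corrected(m):
    """Recover the true rate from the observed rate m (valid while m*TAU < 1)."""
    return m / (1.0 - m * TAU)

m = observed_rate(6e6)
print(m, dead_time_corrected(m))   # the correction recovers ~6e6
```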
KEYWORDS: 3D modeling, Computed tomography, Heart, Arteries, Image segmentation, Data modeling, Ischemia, Medical imaging, Instrument modeling, 3D image processing
A realistic 3D coronary arterial tree (CAT) has been developed for the heart model of the computer generated 3D
XCAT phantom. The CAT allows generation of a realistic model of the location, size and shape of the associated
regional ischemia or infarction for a given coronary arterial stenosis or occlusion. This in turn can be used in medical
imaging applications. An iterative rule-based generation method that systematically utilized anatomic, morphometric
and physiologic knowledge was used to construct a detailed realistic 3D model of the CAT in the XCAT phantom. The
anatomic details of the myocardial surfaces and large coronary arterial vessel segments were first extracted from cardiac
CT images of a normal patient with right coronary dominance. Morphometric information derived from porcine data
from the literature, after being adjusted by scaling laws, provided statistically nominal diameters, lengths, and
connectivity probabilities of the generated coronary arterial segments in modeling the CAT of an average human. The
largest six orders of the CAT were generated based on the physiologic constraints defined in the coronary generation
algorithms. When combined with the heart model of the XCAT phantom, the realistic CAT provides a unique
simulation tool for the generation of realistic regional myocardial ischemia and infarction. Together with the existing
heart model, the new CAT provides an important improvement over the current 3D XCAT phantom in providing a more
realistic model of the normal heart and the potential to simulate myocardial diseases in evaluation of medical imaging
instrumentation, image reconstruction, and data processing methods.
KEYWORDS: Computed tomography, 3D modeling, Bone, Image segmentation, Data modeling, Mathematical modeling, Natural surfaces, Medical imaging, Chest, Algorithm development
We create a series of detailed computerized phantoms to estimate patient organ and effective dose in pediatric CT and
investigate techniques for efficiently creating patient-specific phantoms based on imaging data. The initial anatomy of
each phantom was previously developed based on manual segmentation of pediatric CT data. Each phantom was
extended to include a more detailed anatomy based on morphing an existing adult phantom in our laboratory to match
the framework (based on segmentation) defined for the target pediatric model. By morphing a template anatomy to
match the patient data in the LDDMM framework, it was possible to create a patient-specific phantom with many
anatomical structures, some not visible in the CT data. The adult models contain thousands of defined structures that
were transformed to define them in each pediatric anatomy. The accuracy of this method, under different conditions, was
tested using a known voxelized phantom as the target. Errors were measured in terms of a distance map between the
predicted organ surfaces and the known ones. We also compared calculated dose measurements to see the effect of
different magnitudes of errors in morphing. Despite some variations in organ geometry, dose measurements from
morphing predictions were found to agree with those calculated from the voxelized phantom, thus demonstrating the
feasibility of our methods.
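A hedged sketch of the surface-error metric described above follows: the distance map between predicted and known organ surfaces is approximated here by symmetric nearest-neighbor distances between two surface point clouds using a k-d tree. The point arrays are placeholders.

```python
# Sketch: surface-to-surface distance statistics between predicted and known organs.
import numpy as np
from scipy.spatial import cKDTree

def surface_distance_stats(pred_pts, true_pts):
    """pred_pts, true_pts: (N, 3) arrays of surface points in mm."""
    d_pred_to_true = cKDTree(true_pts).query(pred_pts)[0]
    d_true_to_pred = cKDTree(pred_pts).query(true_pts)[0]
    d = np.concatenate([d_pred_to_true, d_true_to_pred])   # symmetric distances
    return d.mean(), d.max()    # mean surface distance and a Hausdorff-like maximum

rng = np.random.default_rng(0)
print(surface_distance_stats(rng.random((500, 3)), rng.random((500, 3))))
```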
Electronic noise compensation can be important for low-dose x-ray CT applications where severe photon starvation occurs. For clinical x-ray CT systems utilizing energy-integrating detectors, it has been shown that the detected x-ray intensity is compound Poisson distributed, instead of the Poisson distribution that is often studied in the literature. We model the electronic noise contaminated signal Z as the sum of a compound Poisson distributed random variable (r.v.) Y and a Gaussian distributed electronic noise N with known mean and variance. We formulate the iterative x-ray CT reconstruction problem with electronic noise compensation as a maximum-likelihood reconstruction problem. However, the likelihood function of Z is not analytically tractable; instead of working with it directly, we formulate the problem in the expectation-maximization (EM) framework, and iteratively maximize the conditional expectation of the complete log-likelihood of Y.
We further demonstrate that the conditional expectation of the surrogate function of the complete log-likelihood is a legitimate surrogate for the incomplete surrogate. Under certain linearity conditions on the surrogate function, a reconstruction algorithm with electronic noise compensation can be obtained by some modification of one previously derived without electronic noise considerations; the change incurred is simply replacing the unavailable, uncontaminated measurement Y by its conditional expectation E(Y|Z).
The calculation of E(Y|Z) depends on the model of Y, N, and Z. We propose two methods for calculating this conditional expectation when Y follows a special compound Poisson distribution - the exponential dispersion model (ED). Our methods can be regarded as an extension of similar approaches using the Poisson model to the compound Poisson model.
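The sketch below illustrates the key quantity E(Y|Z) by direct numerical summation for a measurement Z = Y + N with Gaussian electronic noise. For simplicity it uses a plain Poisson Y rather than the compound Poisson (exponential dispersion) model treated in the paper, so it should be read as an illustration of the idea, not the proposed method.

```python
# Sketch: E[Y | Z = z] for Y ~ Poisson, N ~ Gaussian, by numerical summation.
import numpy as np
from scipy.stats import poisson, norm

def cond_expectation_y_given_z(z, y_mean, noise_mean, noise_std, y_max=None):
    """Conditional expectation E[Y | Z=z] with Z = Y + N."""
    if y_max is None:
        y_max = int(y_mean + 10 * np.sqrt(y_mean) + 10)
    y = np.arange(0, y_max + 1)
    w = poisson.pmf(y, y_mean) * norm.pdf(z - y, loc=noise_mean, scale=noise_std)
    return np.sum(y * w) / np.sum(w)

# Example: a low-count measurement contaminated by electronic noise.
print(cond_expectation_y_given_z(z=3.2, y_mean=5.0, noise_mean=0.0, noise_std=2.0))
```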
Two major problems with the current electrocardiogram-gated cardiac computed tomography (CT) imaging technique are a large patient radiation dose (10-15 mSv) and insufficient temporal resolution (83-165 ms). Our long-term goal is to develop new time-resolved and low-dose cardiac CT imaging techniques that consist of image reconstruction algorithms and estimation methods for the time-dependent motion vector field (MVF) of the heart from the acquired CT data. Toward this goal, we developed a method that estimates the 2D components of the MVF from a sequence of cardiac CT images and used it to "reconstruct" cardiac images at rapidly moving phases. First, two sharp image frames per heart beat (cycle) obtained at slow-motion phases (i.e., mid-diastole and end-systole) were chosen. Nodes were coarsely placed across the images, and the temporal motion of each node was modeled by B-splines. Our cost function consisted of three terms: a block-matching mean-squared-error term and smoothness constraints in space and time. The time-dependent MVF was estimated by minimizing the cost function. We then warped images at slow-motion phases using the estimated vector fields to "reconstruct" images at rapidly moving phases. The warping algorithm was evaluated using true time-dependent motion vector fields and images, both provided by the NCAT phantom program. Preliminary results from ongoing quantitative and qualitative evaluation using the 4D NCAT phantom and patient data are encouraging: major motion artifacts are much reduced. We conclude that the new image-based motion estimation technique is an important step toward the development of the new cardiac CT imaging techniques.
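A hedged sketch of the three-term cost follows: a block-matching mean-squared-error data term plus spatial and temporal smoothness penalties on node displacements. The node grid, patch extraction, and weights are simplified placeholders; the B-spline temporal model is omitted.

```python
# Sketch: block-matching MSE plus spatial and temporal smoothness penalties.
import numpy as np

def motion_cost(disp, frames, nodes, patch=4, w_space=1.0, w_time=1.0):
    """disp: (T, N, 2) node displacements, frames: (T, H, W) images,
    nodes: (N, 2) integer node positions in frame 0 (row, col)."""
    T, N, _ = disp.shape
    data = 0.0
    for t in range(1, T):
        for n, (r, c) in enumerate(nodes):
            r2, c2 = int(r + disp[t, n, 0]), int(c + disp[t, n, 1])
            ref = frames[0, r:r + patch, c:c + patch]
            mov = frames[t, r2:r2 + patch, c2:c2 + patch]
            if ref.shape == mov.shape and ref.size:
                data += np.mean((ref - mov) ** 2)          # block-matching MSE
    smooth_space = np.sum(np.diff(disp, axis=1) ** 2)      # neighboring nodes move alike
    smooth_time = np.sum(np.diff(disp, axis=0) ** 2)       # motion is smooth over time
    return data + w_space * smooth_space + w_time * smooth_time
```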
Novel CdTe photon counting x-ray detectors (PCXDs) have been developed for very high count rates [1-4] suitable for x-ray micro computed tomography (μCT) scanners. These detectors count photons within each of J energy bins. In this study, we investigate the use of the data in these energy bins for material decomposition using an image domain approach. In this method, one image is reconstructed from the projection data of each energy bin; thus, we have J images from J energy bins that are associated with attenuation coefficients over a narrow energy width. We assume that the spread of energies in each bin is small and thus that the attenuation can be modeled using an effective energy for each bin. This approximation allows us to linearize the problem, thus simplifying the inversion procedure. We then fit the J attenuation coefficients at each location x by the energy-attenuation function [5] and obtain either (1) photoelectric and Compton scattering components or (2) 2 or 3 basis-material components. We used computer simulations to evaluate this approach, generating projection data with three types of acquisition schemes: (A) five monochromatic energies; (B) five energy bins with a PCXD and an 80 kVp polychromatic x-ray spectrum; and (C) two kVp settings with an intensity-integrating detector. Total attenuation coefficients of reconstructed images and calculated effective atomic numbers were compared with data published by the National Institute of Standards and Technology (NIST). We developed a new materially defined "SmileyZ" phantom to evaluate the accuracy of the material decomposition methods. Preliminary results showed that material-based decomposition with three basis functions (bone, water, and iodine) using the PCXD with 5 energy bins was the most promising approach for material decomposition.
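The image-domain fitting step can be sketched as a per-voxel linear least-squares problem: the J per-bin attenuation values are expressed as a linear combination of basis-material attenuation curves evaluated at the effective bin energies. The basis values below are made-up placeholders, not NIST data.

```python
# Sketch: per-voxel least-squares material decomposition in the image domain.
import numpy as np

# Rows: effective bin energies; columns: basis materials (e.g., water, bone, iodine).
basis = np.array([[0.25, 0.60, 2.0],     # made-up mu values at bin 1
                  [0.22, 0.45, 1.4],
                  [0.20, 0.35, 1.0],
                  [0.19, 0.30, 1.6],     # iodine jumps above its K-edge
                  [0.18, 0.27, 1.3]])    # shape (J, K)

def decompose(mu_images):
    """mu_images: (J, H, W) reconstructed attenuation images, one per energy bin.
    Returns (K, H, W) basis-material coefficient images."""
    J, H, W = mu_images.shape
    coeffs, *_ = np.linalg.lstsq(basis, mu_images.reshape(J, -1), rcond=None)
    return coeffs.reshape(-1, H, W)
```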
Emerging photon-counting detectors with energy discrimination ability for x-ray CT perform binning according to the energy of the incoming photons. Multiple output channels with different energy thresholds can be obtained in one irradiation. The energy dependency of attenuation coefficients can be described by a linear combination of basis functions, e.g., Compton scatter and the photoelectric effect; their individual contributions can be differentiated by using the multiple energy channels, hence material characterization is made possible. The conventional analytic approach is a two-step process: first, decompose in the projection domain to obtain the sinograms corresponding to the coefficients of the basis functions; then apply FBP to obtain the individual material components. This two-step process may yield poor image quality and quantitative accuracy due to the lower counts in the separate energy channels and approximation errors propagated from the projection-domain decomposition to the image domain. In this work we modeled the energy dependency of linear attenuation coefficients in our problem formulation and applied the optimality transfer principle to derive a Poisson-likelihood based algorithm for material decomposition from multiple energy channels. Our algorithm reconstructs the coefficients of the basis functions directly, therefore the separate non-linear estimation step in the projection domain used in conventional approaches is avoided. We performed simulations to study the accuracy and noise properties of our method. We synthesized the linear attenuation coefficients at a reference energy and compared them with standard attenuation values provided by NIST. We also synthesized the attenuation maps at different effective energy bin centers corresponding to the different energy channels and compared the synthesized images with reconstructions from standard fan-beam FBP methods. Preliminary simulations showed that our reconstructed images have much better noise properties.
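The forward model underlying such a direct approach can be sketched as follows: the expected counts in each energy channel are a sum over the spectrum of the channel sensitivity times the exponential of the attenuation line integral, with the attenuation written as a linear combination of basis functions of energy. Spectra, basis functions, and the system matrix below are placeholders; the optimization step is not shown.

```python
# Sketch: energy-binned forward model and Poisson negative log-likelihood.
import numpy as np

def expected_counts(coeff_imgs, A, spectra, basis_E):
    """coeff_imgs: (K, n_vox) basis-coefficient images,
    A: (n_rays, n_vox) system matrix,
    spectra: (B, n_E) incident counts times channel sensitivity per energy sample,
    basis_E: (K, n_E) basis functions evaluated at the energy samples.
    Returns (B, n_rays) expected counts per channel."""
    line_ints = A @ coeff_imgs.T                 # (n_rays, K) basis line integrals
    atten = line_ints @ basis_E                  # (n_rays, n_E) total attenuation
    return spectra @ np.exp(-atten).T            # (B, n_rays)

def neg_poisson_loglik(y, ybar, eps=1e-12):
    """y, ybar: (B, n_rays) measured and expected counts; value up to a constant."""
    return np.sum(ybar - y * np.log(ybar + eps))
```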
KEYWORDS: Sensors, X-rays, Polymethylmethacrylate, X-ray detectors, Calibration, Photon counting, Monte Carlo methods, Data modeling, Signal attenuation, Aluminum
Recently developed solid-state detectors combined with high-speed ASICs that allow individual pixel pulse processing may prove useful as detectors for small animal micro-computed tomography. One appealing feature of these photon-counting x-ray detectors (PCXDs) is their ability to discriminate between photons with different energies and count them in a small number (2-5) of energy windows. The data in these energy windows may be thought of as arising from multiple simultaneous x-ray beams with individual energy spectra, and could thus potentially be used to perform material composition analysis. The goal of this paper was to investigate the potential advantages of PCXDs with multiple energy window counting capability as compared to traditional integrating detectors combined with acquisition of images using x-ray beams with 2 different kVps. For the PCXDs, we investigated 3 potential sources of crosstalk: scatter in the object and detector, limited energy resolution, and pulse pileup. Using Monte Carlo simulations, we showed that scatter in the object and detector results in relatively little crosstalk between the data in the energy windows. To study the effects of energy resolution and pulse pileup, we performed simulations evaluating the accuracy and precision of basis decomposition using a detector with 2 or 5 energy windows and a single kVp compared to dual-kVp acquisitions with an integrating detector. We found that, for noisy data, the precision of estimating the thickness of two basis materials for a range of material compositions was better for the single-kVp multiple-energy-window acquisitions than for the dual-kVp acquisitions with an integrating detector. The advantage of the multi-window acquisition was somewhat reduced when the energy resolution was reduced to 10 keV and when pulse pileup was included, but the standard deviations of the estimated thicknesses remained better by more than a factor of 2.
In this work we used a novel CdTe photon counting x-ray detector capable of very high count rates to perform x-ray micro-computed tomography (microCT). The detector had 2 rows of 384 square pixels each 1 mm in size. Charge signals from individual photons were integrated with a shaping time of ~60 ns and processed by an ASIC located in close proximity to the pixels. The ASIC had 5 energy thresholds with associated independent counters for each pixel. Due to the thresholding, it is possible to eliminate dark-current contributions to image noise. By subtracting counter outputs from adjacent thresholds, it is possible to obtain the number of x-ray photon counts in 5 adjacent energy windows. The detector is capable of readout times faster than 5 ms. A prototype bench-top specimen μCT scanner was assembled having distances from the tube to the object and detector of 11 cm and 82 cm, respectively. We used a conventional x-ray source to produce 80 kVp x-ray beams with tube currents up to 400 μA resulting in count rates on the order of 600 kcps per pixel at the detector. Both phantoms and a dead mouse were imaged using acquisition times of 1.8 s per view at 1° steps around the object. The count rate loss (CRL) characteristics of the detector were measured by varying the tube current and corrected for using a paralyzable model. Images were reconstructed using analytical fan-beam reconstruction. The reconstructed images showed good contrast and noise characteristics and those obtained from different energy windows demonstrated energy-dependent contrast, thus potentially allowing for material decomposition.
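The count-rate-loss correction mentioned above can be sketched with the paralyzable model: the observed rate is m = n·exp(−n·τ), which can be inverted for the true rate n using the Lambert W function. The dead-time value below is a placeholder, not a measured parameter of this system.

```python
# Sketch: paralyzable count-rate-loss model and its inversion via Lambert W.
import numpy as np
from scipy.special import lambertw

TAU = 100e-9                      # assumed dead time (s), placeholder value

def observed_rate_paralyzable(n):
    """Observed rate for true incident rate n (counts/s)."""
    return n * np.exp(-n * TAU)

def corrected_rate(m):
    """Lower-rate solution of m = n*exp(-n*tau): n = -W0(-m*tau)/tau."""
    return -lambertw(-m * TAU).real / TAU

m = observed_rate_paralyzable(6e5)            # ~600 kcps per pixel regime
print(m, corrected_rate(m))                   # the correction recovers ~6e5
```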
The design of receiver-operating characteristic (ROC)-based paradigms and data analysis methods that capture and adequately address the complexity of clinical decision-making will facilitate the development and evaluation of new image acquisition, reconstruction, and processing techniques. We compare the JAFROC (jackknife free-response ROC) paradigm and analysis to the traditional ROC paradigm and analysis in two image evaluation studies. The first study is designed to address the advantages of the "free response" feature of the JAFROC paradigm in a breast lesion detection task. We developed tools allowing the acquisition of FROC-type rating data and used them on a set of simulated scintimammography images. Four observers participated in a small preliminary study. The rating data are then analyzed using traditional ROC and JAFROC techniques. The second study is aimed at comparing the diagnostic quality of myocardial perfusion SPECT (MPS) images obtained with different quantitative image reconstruction and compensation methods in an ongoing clinical trial. The observer assesses the status of each of the three main vascular territories for each patient. This experimental set-up uses standardized locations of vascular territories on myocardial polar plot images and a fixed number of three rating scores per patient. We compare results from the newly available JAFROC analysis versus the traditional ROC analysis technique previously applied in similar studies, using a set of data from an ongoing clinical trial. Comparison of the two analysis methodologies reveals generally consistent behavior, corresponding to theoretical predictions.
Coronary artery imaging with multi-slice helical computed tomography is a promising noninvasive imaging technique. The current major issues include insufficient temporal resolution and a large patient dose. We propose an image reconstruction method which provides a solution to both of these problems. The method uses an iterative approach repeating the following four steps until the difference between the two projection data sets compared in step 4 falls below a certain criterion: 1) estimating or updating the cardiac motion vectors, 2) reconstructing the time-resolved 4D dynamic volume images using the motion vectors, 3) calculating the projection data from the current 4D images, and 4) comparing them with the measured ones. In this study, we obtain the first estimate of the motion vectors. We use the 4D NCAT phantom, a realistic computer model of the human anatomy and cardiac motions, to generate the dynamic fan-beam projection data sets as well as to provide a known truth for the motion. Then, half-scan reconstruction with the sliding time-window technique is used to generate cine images f(t, r). Here, we use one heart beat for each position r so that the time information is retained. Next, the magnitude of the first derivative of f(t, r) with respect to time, i.e., |df/dt|, is calculated and summed over a region-of-interest (ROI), which is called the mean-absolute difference (MAD). The initial estimates of the vector field are obtained using the MAD for each ROI. Results of the preliminary study are presented.
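A minimal sketch of the MAD measure described above follows: the magnitude of the temporal derivative of the cine images, summed over a region of interest, highlights phases and regions with rapid motion. The array shapes are assumptions for illustration.

```python
# Sketch: mean-absolute difference (MAD) curve over a region of interest.
import numpy as np

def mean_absolute_difference(cine, roi_mask):
    """cine: (T, H, W) image series f(t, r); roi_mask: (H, W) boolean ROI.
    Returns a length T-1 curve of sum over the ROI of |df/dt| per time step."""
    dfdt = np.abs(np.diff(cine, axis=0))          # |f(t+1, r) - f(t, r)|
    return dfdt[:, roi_mask].sum(axis=1)
```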
A detailed four-dimensional model of the coronary artery tree has great potential in a wide variety of applications especially in biomedical imaging. We developed a computer generated three-dimensional model for the coronary arterial tree based on two datasets: (1) gated multi-slice computed tomography (MSCT) angiographic data obtained from a normal human subject and (2) statistical morphometric data obtained from porcine hearts. The main coronary arteries and heart structures were segmented from the MSCT data to define the initial segments of the vasculature and geometrical details of the boundaries. An iterative rule-based computer generation algorithm was then developed to extend the coronary artery tree beyond the initial segmented branches. The algorithm was governed by the following factors: (1) the statistical morphometric measurements of the connectivities, lengths, and diameters of the arterial segments, (2) repelling forces from other segments and boundaries, and (3) optimality principles to minimize the drag force at each bifurcation in the generated tree. Using this algorithm, the segmented coronary artery tree from the MSCT data was optimally extended to create a 3D computational model of the largest six orders of the coronary arterial tree. The new method for generating the 3D model is effective in imposing the constraints of anatomical and physiological characteristics of coronary vasculature. When combined with the 4D NCAT phantom, a computer model for the human anatomy and cardiac and respiratory motions, the new model will provide a unique tool to study cardiovascular characteristics and diseases through direct and medical imaging simulation studies.
We investigate the effect of heart rate on the quality and artifact generation in coronary artery images obtained using multi-slice computed tomography (MSCT) with the purpose of finding the optimal time resolution for data acquisition. To perform the study, we used the 4D NCAT phantom, a computer model of the normal human anatomy and cardiac and respiratory motions developed in our laboratory. Although capable of being far more realistic, the 4D NCAT cardiac model was originally designed for low-resolution imaging research, and lacked the anatomical detail to be applicable to high-resolution CT. In this work, we updated the cardiac model to include a more detailed anatomy and physiology based on high-resolution clinical gated MSCT data. To demonstrate its utility in high-resolution dynamic CT imaging research, the enhanced 4D NCAT was then used in a pilot simulation study to investigate the effect of heart rate on CT angiography. The 4D NCAT was used to simulate patients with different heart rates (60-120 beats/minute) and with various cardiac plaques of known size and location within the coronary arteries. For each simulated patient, MSCT projection data was generated with data acquisition windows ranging from 100 to 250 ms centered within the quiet phase (mid-diastole) of the heart using an analytical CT projection algorithm. CT images were reconstructed from the projection data, and the contrast of the plaques was then measured to assess the effect of heart rate and to determine the optimal time resolution required for each case. The 4D NCAT phantom with its realistic model for the cardiac motion was found to provide a valuable tool from which to optimize CT cardiac applications. Our results indicate the importance of optimizing the time resolution with regard to heart rate and plaque location for improved CT images at a reduced patient dose.
We validate the computer-based simulation tools developed in our laboratory for use in high-resolution CT research. The 4D NURBS-based cardiac-torso (NCAT) phantom was developed to provide a realistic and flexible model of the human anatomy and physiology. Unlike current phantoms in CT, the 4D NCAT has the advantage, due to its design, that its organ shapes can be changed to realistically model anatomical variations and patient motion. To efficiently simulate high-resolution CT images, we developed a unique analytic projection algorithm (including scatter and quantum noise) to accurately calculate projections directly from the surface definition of the phantom given parameters defining the CT scanner and geometry. The projection data are reconstructed into CT images using algorithms developed in our laboratory. The 4D NCAT phantom contains a level of detail that is close to impossible to produce in a physical test object. We, therefore, validate our CT simulation tools and methods through a series of direct comparisons with data obtained experimentally using existing, simple physical phantoms at different doses and using different x-ray energy spectra. In each case, the first-order simulations were found to produce results comparable to the measurements (differences of less than 12%). We reason that since the simulations produced equivalent results using simple test objects, they should be able to do the same in more anatomically realistic conditions. We conclude that, with the ability to provide realistic simulated CT image data close to that from actual patients, the simulation tools developed in this work will have applications in a broad range of CT imaging research.
Cardiac function is an important physiological parameter in preclinical studies. Nuclear cardiac scans are a standard of care for patients with suspected coronary artery occlusions and can assess perfusion and other physiological functions via the injection of radiotracers. In addition, correlated acquisition of nuclear images with electrocardiogram (ECG) signals can provide myocardial dynamics, which can be used to assess the wall motion of the heart. We have implemented this nuclear cardiology technique in a microSPECT/CT system, which provides sub-millimeter resolution in SPECT and co-registered high-resolution CT anatomical maps. Radionuclide detection is synchronized with the R-wave of the cardiac cycle and separated into 16 time bins using an ECG monitor and triggering device for gating. Images were acquired with a 12.5 × 12.5 cm² small field-of-view pixelated NaI(Tl) detector, using a pinhole collimator. In this pilot study, rats (N = 5) were injected with 99mTc-Sestamibi, a myocardial perfusion tracer, and anesthetized for imaging. Reconstructed 4-D images (3D plus time) were computed using an Ordered Subset Expectation Maximization (OSEM) algorithm. The measured perfusion, wall motion, and ejection fractions for the rats matched well with results reported by other researchers using alternative methods. This capability will provide a new and powerful tool to preclinical researchers for assessing cardiac function.
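The gating step can be sketched as follows, under simplifying assumptions: each detected event is assigned to one of 16 time bins according to its time since the most recent R-wave trigger, expressed as a fraction of the current R-R interval. The event and trigger arrays are placeholders, not the system's acquisition software.

```python
# Sketch: assigning list-mode events to 16 ECG gates.
import numpy as np

N_BINS = 16

def gate_events(event_times, r_wave_times):
    """event_times: array of event timestamps (s); r_wave_times: sorted R-wave trigger times (s).
    Returns the gate index (0..15) for each event, or -1 if it falls outside a complete R-R interval."""
    idx = np.searchsorted(r_wave_times, event_times, side='right') - 1
    valid = (idx >= 0) & (idx < len(r_wave_times) - 1)
    gates = np.full(event_times.shape, -1, dtype=int)
    rr = r_wave_times[idx[valid] + 1] - r_wave_times[idx[valid]]
    phase = (event_times[valid] - r_wave_times[idx[valid]]) / rr
    gates[valid] = np.minimum((phase * N_BINS).astype(int), N_BINS - 1)
    return gates
```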
The spline-based Mathematical Cardiac Torso (MCAT) phantom is a realistic software simulation designed to simulate single photon emission computed tomographic (SPECT) data. It incorporates a heart model of known size and shape; thus, it is invaluable for measuring the accuracy of acquisition, reconstruction, and post-processing routines. New functionality has been added by replacing the standard heart model with left ventricular (LV) epicardial and endocardial surface points detected from actual patient SPECT perfusion studies. LV surfaces detected from standard post-processing quantitation programs are converted through interpolation in space and time into new B-spline models. Perfusion abnormalities are added to the model based on the results of standard perfusion quantification. The new LV is translated and rotated to fit within the existing atria and right ventricular models, which are scaled based on the size of the LV. Simulations were created for five different patients with myocardial infarctions who had undergone SPECT perfusion imaging. The shape, size, and motion of the resulting activity map were compared visually to the original SPECT images. In all cases, the size, shape, and motion of the simulated LVs matched well with the original images. Thus, realistic simulations with known physiologic and functional parameters can be created for evaluating the efficacy of processing algorithms.
The quality and quantitative accuracy of transmission CT images are affected by artifacts due to truncation of the projection data. In this study, the effect of data sampling on the quantitative accuracy of transmission CT images reconstructed from truncated projections has been investigated. Parallel-beam projections with different sets of acquisition and data sampling parameters were simulated. In deciding whether a set of parameters provided sufficient data sampling, use was made of the condition number obtained from the singular value decomposition of the projection matrix. The results of the study indicate that for noise-free data the truncation artifacts which are present in images reconstructed using iterative algorithms can be reduced or completely eliminated provided that the data sampling is sufficient, and an adequate number of iterations is performed. However, when a null space is present in the singular value decomposition, the iterative reconstruction methods fail to recover the object. The convergence of the reconstructed attenuation maps depends on the sampling and is faster as the number of angles and/or the number of projection bins is increased. Furthermore, the higher the degree of truncation the larger is the number of iterations required in order to obtain accurate attenuation maps. In the presence of noise, the number of iterations required for the best compromise of noise and image detail is decreased with increased noise level and higher degree of truncation, resulting in inferior reconstructions. Finally, the use of the body contour as support in the reconstructions resulted in quantitatively superior reconstructed images.
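The sampling-sufficiency check described above can be sketched as follows: build a projection matrix for a given number of angles and bins, take its singular value decomposition, and inspect the condition number and the presence of a null space. The random matrix below is a stand-in for a real parallel-beam projection operator.

```python
# Sketch: condition number and null-space check from the SVD of a projection matrix.
import numpy as np

def sampling_diagnostics(A, tol=1e-10):
    """A: (n_angles * n_bins, n_voxels) projection matrix."""
    s = np.linalg.svd(A, compute_uv=False)
    cond = s[0] / s[s > tol][-1]             # largest over smallest non-zero singular value
    null_dim = A.shape[1] - np.sum(s > tol)  # null space => some object components unrecoverable
    return cond, null_dim

A = np.random.default_rng(0).random((10 * 16, 400))   # fewer rays than voxels: undersampled
print(sampling_diagnostics(A))                          # a sizeable null space is expected here
```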
KEYWORDS: Heart, Signal attenuation, Motion models, Single photon emission computed tomography, Mathematical modeling, Blood, Imaging systems, 3D modeling, Monte Carlo methods, Sensors
This manuscript documents the alteration of the heart model of the MCAT phantom to better represent cardiac motion. The objective of the inclusion of motion was to develop a digital simulation of the heart such that the impact of cardiac motion on single photon emission computed tomography (SPECT) imaging could be assessed and methods of quantitating cardiac function could be investigated. The motion of the dynamic MCAT's heart is modeled by a 128 time frame volume curve. Eight time frames are averaged together to obtain a gated perfusion acquisition of 16 time frames and ensure motion within every time frame. The position of the MCAT heart was changed during contraction to rotate back and forth around the long axis through the center of the left ventricle (LV), using the end-systolic time frame as the turning point. Simple respiratory motion was also introduced by changing the orientation of the heart model in a 2-dimensional (2D) plane with every time frame. The averaging effect of respiratory motion in a specific time frame was modeled by randomly selecting multiple heart locations between two extreme orientations. Non-gated perfusion phantoms were also generated by averaging over all time frames. Maximal chamber volumes were selected to fit the profile of a normal healthy person. These volumes were changed during contraction of the ventricles such that the increase in volume in the atria compensated for the decrease in volume in the ventricles. The myocardium was modeled to represent shortening of muscle fibers during contraction, with the base of the ventricles moving towards a static apex. The apical region was modeled with moderate wall thinning present while myocardial mass was conserved. To test the applicability of the dynamic heart model, myocardial wall thickening was measured using maximum counts and full width at half maximum (FWHM) measurements, and compared with published trends. An analytical 3D projector, with attenuation and detector response included, was used to generate radionuclide projection data sets. After reconstruction, a linear relationship was obtained between maximum myocardial counts and myocardium thickness, similar to published results. A numeric difference in values from different locations exists due to different amounts of attenuation present. Similar results were obtained for the FWHM measurements. Also, a hot apical region on the polar maps without attenuation compensation turns into an apical defect with attenuation compensation. The apical decrease was more prominent in ED than in ES due to the change in the partial volume effect. Both of these agree with clinical trends. It is concluded that the dynamic MCAT (dMCAT) phantom can be used to study the influence of various physical parameters on radionuclide perfusion imaging.