Since the introduction of model-based iterative reconstruction for computed tomography (CT) by Thibault et al. in 2007, statistical weights have played an important role in the problem formulation, with the objective of improving image quality. Statistical weights depend on the variance of the measurements. However, this variance is not known, and the weights must therefore be estimated. So far, the literature discusses neither how statistical weights should be estimated nor how accurate the estimation needs to be. Our submission aims to fill this gap. Specifically, we propose an estimation procedure for statistical weights and assess this procedure with real CT data. The estimated weights are compared against (clinically impractical) sample weights obtained from repeated scans. The results show that the estimation procedure delivers reliable results for the rays that pass through the scanned object. Four imaging scenarios are considered; in each case, the root mean square difference between the estimated and sample weights is below 5% of the maximum statistical weight value. When used for reconstruction, these differences are seen to have little impact: all voxel values within soft-tissue (low-contrast) regions differ by less than 1 HU. Our results demonstrate that the statistical weights can be estimated well enough to closely approach the result that would be obtained if the weights were known.
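As a rough illustration of how such weights relate to the measurements (a generic inverse-variance sketch under a Poisson-plus-electronic-noise model, not the specific procedure proposed above; the function name and the sigma_e parameter are ours):

    import numpy as np

    def estimate_statistical_weights(counts, sigma_e=0.0):
        """Estimate per-ray statistical weights for penalized weighted
        least-squares CT reconstruction.

        Assumes post-log data y_i = -log(I_i / I_0). With a Poisson model
        (variance I_i) plus optional Gaussian electronic noise (variance
        sigma_e**2) in the raw counts, error propagation gives
        var(y_i) ~ (I_i + sigma_e**2) / I_i**2, so the inverse-variance
        weight is w_i = I_i**2 / (I_i + sigma_e**2).
        """
        counts = np.asarray(counts, dtype=float)
        weights = counts**2 / np.maximum(counts + sigma_e**2, 1e-12)
        return weights / weights.max()  # normalize to [0, 1]

    # Example: rays through dense material (low counts) receive small weights.
    w = estimate_statistical_weights([1e5, 1e3, 10.0], sigma_e=5.0)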
Model-based iterative CT reconstruction suffers from its high computational cost. To increase reconstruction speed, algorithms using momentum acceleration can be combined with the ordered-subset approach. As shown in previous works, the incorporation of ordered subsets strongly reduces the reconstruction time but makes the outcome of the reconstruction difficult to predict due to its empirical nature. Specifically, when the number of subsets is too high, divergence or convergence to an unsatisfactory result can easily occur. In this work, we propose a new combination of ordered subsets with momentum that achieves fast reconstruction in a more robust manner. Our approach, referred to as EOS-MFISTA, is an Efficient Ordered Subset strategy based on the steps of MFISTA, a monotonic version of FISTA. In short, EOS-MFISTA uses objective function values to decide on the next update step, and does so with little increase in computational effort. The performance of EOS-MFISTA is evaluated on real CT data of a head phantom and a human chest. Starting from a zero image, 200 iterations of EOS-MFISTA are calculated and compared with other algorithms. Quantitative analyses based on the RMSE show that EOS-MFISTA is much more robust than OS-FISTA, especially when the number of subsets is increased to accelerate reconstruction. Visual inspection of results obtained with 50 iterations further supports these findings.
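The abstract does not spell out the EOS-MFISTA update rule; for orientation, a generic MFISTA step (the monotone FISTA of Beck and Teboulle, without the ordered-subset strategy) is sketched below, with grad_f, prox_g, and objective assumed to be user-supplied callables:

    import numpy as np

    def mfista(x0, grad_f, prox_g, objective, L, n_iter=200):
        """Monotone FISTA (MFISTA): the objective value decides whether the
        proximal-gradient candidate z or the previous iterate is kept, which
        prevents the divergence that plain momentum can exhibit.

        Generic MFISTA only; it does NOT include the ordered-subset strategy
        that is specific to EOS-MFISTA.
        """
        x_prev = x0.copy()
        y = x0.copy()
        t = 1.0
        for _ in range(n_iter):
            z = prox_g(y - grad_f(y) / L)  # proximal-gradient candidate
            x = z if objective(z) <= objective(x_prev) else x_prev  # monotonicity test
            t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x + (t / t_next) * (z - x) + ((t - 1.0) / t_next) * (x - x_prev)
            x_prev, t = x, t_next
        return x_prev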
Because total variation (TV) is non-differentiable, iterative reconstruction using a TV penalty comes with technical difficulties. To avoid these, it is popular to use a smooth approximation of TV instead, defined using a single parameter, herein called δ, with the convention that the approximation is better when δ is smaller. To our knowledge, it is not known how well image reconstruction with this approximation can approach a converged non-smooth TV-regularized result. In this work, we study this question in the context of X-ray computed tomography (CT). Experimental results are reported with real CT data of a head phantom and supplemented with a theoretical analysis. To address our question, we make use of a generalized iterative soft-thresholding algorithm that allows us to handle TV and its smooth approximation in the same framework. Our results support the following conclusions. First, images reconstructed using the smooth approximation of TV appear to converge smoothly towards the TV result as δ tends to zero. Second, the value of δ does not need to be overly small to obtain a result that is essentially equivalent to TV, implying that numerical instabilities can be avoided. Last, although the convergence in δ is smooth, it is not particularly fast, as the mean absolute pixel difference decreases only as √δ in our experiments. Altogether, we conclude that the smooth approximation is a theoretically valid way to approximate the non-smooth TV penalty for CT, opening the door to safe utilization of a wide variety of optimization algorithms.
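A common choice for the smooth approximation, assumed here for concreteness since the abstract does not define it, replaces the gradient magnitude by a hyperbolic surrogate:

    \mathrm{TV}_\delta(u) = \sum_j \sqrt{\|(\nabla u)_j\|_2^2 + \delta^2}, \qquad \mathrm{TV}_\delta(u) \to \mathrm{TV}(u) = \sum_j \|(\nabla u)_j\|_2 \ \text{ as } \delta \to 0,

which is differentiable everywhere for any δ > 0.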
An ultrahigh-resolution (UHR) data collection mode was enabled on a whole-body, research photon counting detector (PCD) computed tomography system. In this mode, 64 rows of 0.45 mm×0.45 mm detector pixels were used, which corresponded to a pixel size of 0.25 mm×0.25 mm at the isocenter. Spatial resolution and image noise were quantitatively assessed for the UHR PCD scan mode, as well as for a commercially available UHR scan mode that uses an energy-integrating detector (EID) and a set of comb filters to decrease the effective detector size. Images of an anthropomorphic lung phantom, cadaveric swine lung, swine heart specimen, and cadaveric human temporal bone were qualitatively assessed. Nearly equivalent spatial resolution was demonstrated by the modulation transfer function measurements: 15.3 and 20.3 lp/cm spatial frequencies were achieved at 10% and 2% modulation, respectively, for the PCD system and 14.2 and 18.6 lp/cm for the EID system. Noise was 29% lower in the PCD UHR images compared to the EID UHR images, representing a potential dose savings of 50% for equivalent image noise. PCD UHR images from the anthropomorphic phantom and cadaveric specimens showed clear delineation of small structures.
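Under the usual assumption that image noise scales with the inverse square root of dose, the reported 29% noise reduction at matched dose translates into roughly a 50% dose reduction at matched noise:

    D_{\mathrm{PCD}} / D_{\mathrm{EID}} \approx (\sigma_{\mathrm{PCD}} / \sigma_{\mathrm{EID}})^2 = (1 - 0.29)^2 \approx 0.50.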
Photon counting detector (PCD)-based computed tomography (CT) is an emerging imaging technique. Compared to conventional energy integrating detector (EID)-based CT, PCD-CT is able to exclude electronic noise that may severely impair image quality at low photon counts. This work focused on comparing the noise performance at low doses between the PCD and EID subsystems of a whole-body research PCD-CT scanner, both qualitatively and quantitatively. An anthropomorphic thorax phantom was scanned, and images of the shoulder portion were reconstructed. The images were visually and quantitatively compared between the two subsystems in terms of streak artifacts, an indicator of the impact of electronic noise. Furthermore, a torso-shaped water phantom was scanned using a range of tube currents. The product of the noise and the square root of the tube current was calculated, normalized, and compared between the EID and PCD subsystems. Visual assessment of the thorax phantom showed that electronic noise had a noticeably stronger degrading impact in the EID images than in the PCD images. The quantitative results indicated that in low-dose situations, electronic noise had a noticeable impact (up to a 5.8% increase in magnitude relative to quantum noise) on the EID images, but negligible impact on the PCD images.
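The rationale for this metric, under the standard assumption that quantum noise alone scales with the inverse square root of the tube current I, is that

    \sigma_{\mathrm{quantum}}(I) \propto 1/\sqrt{I} \;\Longrightarrow\; \sigma(I)\,\sqrt{I} = \text{constant in the absence of electronic noise},

so any rise of the normalized product at low tube current quantifies the additional electronic-noise contribution.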
Photon-counting CT (PCCT) is an emerging technique that may bring new possibilities to clinical practice. Compared to
conventional CT, PCCT is able to exclude electronic noise that may severely impair image quality at low photon counts.
This work focused on assessing the low-dose performance of a whole-body research PCCT scanner consisting of two
subsystems, one equipped with an energy-integrating detector, and the other with a photon-counting detector. Evaluation
of the low-dose performance of the research PCCT scanner was achieved by comparing the noise performance of the
two subsystems, with an emphasis on examining the impact of electronic noise on image quality in low-dose situations.
Image reconstruction based on iterative minimization of a penalized weighted least-squares criterion has become
an important topic of research in X-ray computed tomography. This topic is motivated by increasing evidence
that such a formalism may enable a significant reduction in dose imparted to the patient while maintaining or
improving image quality. One important issue associated with this iterative image reconstruction concept is
slow convergence and the associated computational effort. For this reason, there is interest in finding methods
that produce approximate versions of the targeted image with a small number of iterations and an acceptable
level of discrepancy. We introduce here a novel method to produce such approximations: ordered subsets in
combination with iterative coordinate descent. Preliminary results demonstrate that this method can produce,
within 10 iterations and using only a constant image as initial condition, satisfactory reconstructions that retain
the noise properties of the targeted image.
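For reference, the penalized weighted least-squares criterion referred to above has the standard form (our notation):

    \hat{x} = \arg\min_{x \ge 0} \tfrac{1}{2}\,(y - Ax)^{\mathsf T} W (y - Ax) + \beta\, R(x),

where y is the post-log projection data, A the forward projector, W the diagonal matrix of statistical weights, R a (typically edge-preserving) penalty, and β its strength.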
A high-resolution (HR) data collection mode has been introduced to a whole-body, research photon-counting-detector
CT system installed in our laboratory. In this mode, 64 rows of 0.45 mm x 0.45 mm detector pixels were used, which
corresponded to a pixel size of 0.25 mm x 0.25 mm at the iso-center. Spatial resolution of this HR mode was quantified
by measuring the MTF from a scan of a 50 micron wire phantom. An anthropomorphic lung phantom, cadaveric swine
lung, temporal bone and heart specimens were scanned using the HR mode, and image quality was subjectively assessed
by two experienced radiologists. High spatial resolution of the HR mode was evidenced by the MTF measurement, with
15 lp/cm and 20 lp/cm at 10% and 2% modulation. Images from anthropomorphic phantom and cadaveric specimens
showed clear delineation of small structures, such as lung vessels, lung nodules, temporal bone structures, and coronary
arteries. Temporal bone images showed critical anatomy (e.g., the stapes superstructure) that was clearly visible in the PCD
system. These results demonstrated the potential application of this imaging mode in lung, temporal bone, and vascular
imaging. Other clinical applications that require high spatial resolution, such as musculoskeletal imaging, may also
benefit from this high resolution mode.
The energy-resolving capabilities of photon counting detectors (PCD) in computed tomography (CT) facilitate energy-sensitive measurements. The resulting image information can be processed with dual-energy and multi-energy algorithms. A research PCD-CT system allows, for the first time, the acquisition of images with a close-to-clinical configuration of both the X-ray tube and the CT detector. In this study, two algorithms (material decomposition and virtual non-contrast imaging (VNC)) are applied to a data set acquired from an anesthetized rabbit scanned on the PCD-CT system. Two contrast agents (CA) are applied: a gadolinium (Gd)-based CA used to enhance contrast for vascular imaging, and a xenon (Xe)/air mixture used to evaluate local ventilation of the animal's lung. Four different images are generated: a) a VNC image, suppressing any traces of the injected Gd to imitate a native scan; b) a VNC image with a Gd image as an overlay, where contrast enhancements in the vascular system are highlighted using colored labels; c) another VNC image with a Xe image as an overlay; and d) a 3D-rendered image of the animal's lung, filled with Xe, indicating local ventilation characteristics. All images are generated from two images based on energy-bin information. It is shown that a modified version of a commercially available dual-energy software framework is capable of providing images with diagnostic value from the research PCD-CT system.
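The abstract notes that all images are derived from two energy-bin images; a minimal image-domain sketch of such a two-material decomposition (our illustration, not the modified vendor framework used in the study) solves a 2x2 linear system per pixel:

    import numpy as np

    def two_material_decomposition(bin_low, bin_high, basis):
        """Image-domain two-material decomposition from two energy-bin images.

        `basis` is a 2x2 matrix whose columns hold the expected enhancement of
        each basis material (e.g. water and Gd) in the (low, high) bin images;
        per-pixel material fractions follow from inverting this system.
        Illustrative only -- calibrating `basis` is the hard part in practice.
        """
        A_inv = np.linalg.inv(np.asarray(basis, dtype=float))
        stacked = np.stack([bin_low, bin_high], axis=-1)  # (..., 2)
        fractions = stacked @ A_inv.T                      # (..., 2)
        return fractions[..., 0], fractions[..., 1]

    # A VNC image keeps only the water-like component, while the Gd map
    # can be overlaid on it with colored labels.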
Photon counting detectors in computed tomography facilitate measurement of the spectral distribution of detected X-ray quanta in discrete energy bins. Combined with the dependence of the mass attenuation coefficient on wavelength and atomic number, this information allows for the reconstruction of CT images in different material bases. Decomposition into two materials is considered standard in today's dual-energy techniques; with photon-counting detectors, the decomposition of more than two materials becomes achievable. Efficient detection of CT-typical X-ray spectra is a hard requirement in a clinical environment and is fulfilled by only a few sensor materials, such as CdTe or CdZnTe. In contrast to energy-integrating CT detectors, the pixel dimensions must be reduced to avoid pulse pile-up at clinically relevant count rates. However, reducing the pixel size leads to increased K-escape and charge-sharing effects. As a consequence, the correlation between incident and detected X-ray energy is reduced. This degradation is quantified by the detector response function. The goal of this study is to improve the achievable material decomposition by adapting the incident X-ray spectrum to the properties (i.e., the detector response function) of a photon counting detector. A significant improvement of a material-decomposition-equivalent metric is achievable when specific materials are used as X-ray pre-filtration (K-edge filtering) while maintaining the applied patient dose and image quality.
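The role of the detector response function in the measurement model can be summarized as follows (standard notation, assumed here):

    \lambda_b = \int S(E)\, \exp\!\Big(-\textstyle\int_{\ell} \mu(E,\mathbf r)\, d\mathbf r\Big)\, R_b(E)\, dE,

where S(E) is the incident (pre-filtered) spectrum, R_b(E) the probability that a photon of incident energy E is counted in bin b, and λ_b the expected counts in that bin; K-edge pre-filtration shapes S(E) so that the energy bins remain better separated despite the blurring introduced by R_b.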
Iterative reconstruction methods have become an important research topic in X-ray computed tomography (CT), due to their ability to yield improvements in image quality in comparison with the classical filtered backprojection method. There are many ways to design an effective iterative reconstruction method. Moreover, for each design, there may be a large number of parameters that can be adjusted. Thus, early assessment of image quality, before clinical deployment, plays a large role in identifying and refining solutions. Currently, there are few publications reporting on early, task-based assessment of image quality achieved with iterative reconstruction methods. We report here on such an assessment, and we illustrate at the same time the importance of the grayscale used for image display when conducting this type of assessment. Our results further support observations made by others that the edge-preserving penalty term used in iterative reconstruction is a key ingredient for improving image quality in terms of detection-task performance. Our results also provide a clear demonstration of an implication made in one of our previous publications, namely that the grayscale window plays an important role in image quality comparisons involving iterative CT reconstruction methods.
Over the last few years, iterative reconstruction methods have become an important research topic in x-ray CT imaging. This effort is motivated by increasing evidence that such methods may enable significant savings in terms of dose imparted to the patient. Conceptually, iterative reconstruction methods involve two important ingredients: the statistical model, which includes the forward projector, and a priori information in the image domain, which is expressed using a regularizer. Most often, the image pixel size is chosen to be equal (or close) to the detector pixel size (at the field-of-view center). However, there are applications for which a smaller pixel size is desired. In this investigation, we focus on reconstruction with a pixel size that is half the detector pixel size. Using such a small pixel size implies a large increase in computational effort when using the distance-driven method for forward projection, which models the detector size. On the other hand, the more efficient method of Joseph creates imbalances in the reconstruction of each pixel, in the sense that there are large differences in the way each projection contributes to the pixels. The purpose of this work is to evaluate the impact of these imbalances on image quality in comparison with utilization of the distance-driven method. The evaluation involves computational effort, bias and noise metrics, and LROC analysis using human observers. The results show that Joseph's method largely remains attractive.
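For orientation, a minimal 2-D sketch of Joseph's method for a single x-driven ray (|slope| ≤ 1) is given below; it shows only the column-by-column linear interpolation and omits boundary handling, the y-driven branch, and the other details of a production implementation:

    import numpy as np

    def joseph_ray(image, y0, slope, pixel_size=1.0):
        """Forward-project one x-driven ray through a 2-D image with Joseph's
        method: for every column k, the ray height y = y0 + slope * k is
        computed and the two neighbouring rows are combined by linear
        interpolation; the sum is scaled by the intersection length per column.
        """
        n_rows, n_cols = image.shape
        total = 0.0
        for k in range(n_cols):
            y = y0 + slope * k
            j = int(np.floor(y))
            w = y - j
            if 0 <= j < n_rows - 1:
                total += (1.0 - w) * image[j, k] + w * image[j + 1, k]
        return total * pixel_size * np.sqrt(1.0 + slope * slope)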
Radiation dose associated with CT scans has become an important concern in medical imaging. Fortunately, there are many pathways to reducing dose. A complicated aspect, however, is to ensure that image quality is not affected while reducing the dose. A preferred method to assess image quality is ROC analysis, possibly with a search process. For early assessment of new imaging solutions, utilization of human observers and real patient data is rarely practical. Instead, studies involving phantoms and model observers are often preferred. We present here an experimental result that sheds light on how the grayscale window affects human observer performance in a typical phantom-based study; and we also present an analysis that clarifies how the grayscale window affects the statistics of the image. These studies provide a better understanding of possible consequences associated with not including a grayscale window in studies with model observers, as is typical.
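The grayscale (window/level) display operation at the center of this discussion is, in essence, clipping followed by linear rescaling; the one-function sketch below (window parameters are illustrative) makes clear why it alters the image statistics seen by the observer:

    import numpy as np

    def apply_window(image_hu, level=40.0, width=400.0):
        """Map HU values to display gray levels for a given window/level.

        Values below level - width/2 are clipped to black, values above
        level + width/2 to white; within the window the mapping is linear.
        The clipping truncates the noise distribution, which is why model-
        observer studies that omit the window can differ from human-observer
        studies that use one.
        """
        lo, hi = level - width / 2.0, level + width / 2.0
        return np.clip((image_hu - lo) / (hi - lo), 0.0, 1.0)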
Statistical iterative image reconstruction has become the subject of strong, active research in X-ray computed
tomography (CT), primarily because it may allow a significant reduction in dose imparted to the patient. However,
developing an algorithm that converges fast while allowing parallelization so as to obtain a product that
can be used routinely in the clinic is challenging. In this work, we present a novel algorithm that combines the
strength of two popular methods. A preliminary investigation of this algorithm was performed, and strongly
encouraging initial results are reported.
Task-based image quality assessment is a valuable methodology for the development, optimization, and evaluation of new image formation processes in x-ray computed tomography (CT), as well as in other imaging modalities. A simple way to perform such an assessment is through the use of two (or more) alternative forced choice (AFC) experiments. In this paper, we are interested in drawing statistical inference from the outcomes of multiple AFC experiments that are obtained using multiple readers as well as multiple cases. We present a non-parametric covariance estimator for this problem. Then, we illustrate its usefulness with a practical example involving x-ray CT simulations. The task for this example is classification between the presence and absence of a lesion at an unknown location within a given object. This task is used to compare three standard image reconstruction algorithms in x-ray CT using four human observers.
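The specific covariance estimator is not reproduced here; as context, the quantities it targets are percent-correct values that share readers and cases, and the sketch below (our illustration only, not the proposed estimator) shows how shared cases induce the covariance that must be accounted for:

    import numpy as np

    def percent_correct(outcomes):
        """outcomes: (readers, cases) array with 1 for a correct AFC decision."""
        return outcomes.mean(axis=1)

    def shared_case_covariance(outcomes_a, outcomes_b):
        """Covariance between the per-reader percent-correct estimates of two
        algorithms evaluated on the SAME cases, estimated from the per-case
        success indicators."""
        n_cases = outcomes_a.shape[1]
        da = outcomes_a - outcomes_a.mean(axis=1, keepdims=True)
        db = outcomes_b - outcomes_b.mean(axis=1, keepdims=True)
        return (da @ db.T) / (n_cases * (n_cases - 1))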
The forward projection operator is a key component of every iterative reconstruction method in X-ray computed
tomography (CT). Besides the choices being made in the definition of the objective function and associated
constraints, the forward projection model affects both bias and noise properties of the reconstruction.
In this work, we compare three important forward projection models that rely on linear interpolation: the
Joseph method, the distance-driven method, and the image representation using B-splines of order n = 1. The
comparison focuses on bias and noise in the image as a function of the resolution. X-ray CT data that are
simulated in fan-beam geometry with two different magnification factors are used.
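As noted above, all three models rely on linear interpolation; the common piecewise-linear kernel is the first-order B-spline

    \beta_1(t) = \max(0,\, 1 - |t|),

which Joseph's method uses to interpolate between neighbouring pixels along the driving direction and which the B-spline model uses as the image basis function itself; the distance-driven method instead overlaps pixel and detector-cell footprints, which amounts to convolving two rectangular profiles and therefore also yields a piecewise-linear (trapezoidal) weighting.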