Robotic CT is a novel imaging platform built on two robotic manipulators, offering great flexibility and convenience. However, it suffers from limited mechanical motion accuracy, which introduces artifacts into the reconstructed images. Acquiring the true geometry parameters is critical for accurate image reconstruction, yet it is impractical to monitor every geometry position in practice. Down-sampling the number of projections and applying sparse reconstruction offers a feasible way to solve this problem. The score-based generative model (SGM) is a powerful generative model able to produce samples guided by prior information; by combining prior data with generated data, image quality can be significantly improved. In this work, we trained a score-based generative model on images from two real CT scan datasets. During sampling of the score-based network, prior sparse-view projections were incorporated through cone-beam projection and image reconstruction. Full projections under non-standard geometry were simulated by adding deviations to a standard circular geometry. We compared the performance of several algorithms on sparse and full data, under both true and ideal geometry, evaluating images by standard indices and visible details. SART images under ideal geometry showed severe artifacts, with streak artifacts in soft tissue. Compared with images under ideal geometry, SGM-based sparse reconstruction appeared visually blurrier but achieved higher indices, improving PSNR and SSIM by 59.0% and 41.1%, respectively. Compared with sparse reconstruction under true geometry using SART, the SGM-based method produced clearer images and higher indices, with improvements of 11.4% in PSNR and 24.7% in SSIM. SGM-based sparse reconstruction thus shows great potential for sparse-view reconstruction under non-standard geometry.
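As a minimal illustration of the sampling scheme described above, the sketch below alternates annealed Langevin steps on a prior score with a data-consistency correction toward the sparse measurements. The Gaussian "score" and the random system matrix are toy stand-ins (assumptions for this sketch) for the trained score network and the cone-beam projector used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: ground truth x_true, a random "sparse-view" system A, data y.
n, m = 64, 24
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

def score(x, sigma):
    # Hypothetical score of a zero-mean Gaussian prior; a real SGM would
    # evaluate a trained score network here.
    return -x / (1.0 + sigma**2)

x = rng.standard_normal(n)
residual0 = np.linalg.norm(A @ x - y)
for sigma in np.geomspace(1.0, 0.01, 30):       # annealed noise levels
    step = 0.5 * sigma**2
    for _ in range(10):
        # Langevin step driven by the prior score ...
        x = x + step * score(x, sigma) + np.sqrt(2 * step) * rng.standard_normal(n)
        # ... then a small gradient step toward data consistency A x = y.
        x = x - 0.1 * A.T @ (A @ x - y)
residual = np.linalg.norm(A @ x - y)
print(residual0, residual)
```

The interleaving, not the toy prior, is the point: each denoising step is pulled back toward the measured sparse projections, which is how prior and measured information are combined.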
Spectral photon-counting CT (SPCCT) has become a focus for researchers because of its energy-resolving capability, and it has broad application prospects, especially in medical imaging. In medicine, iodized oil is often used as an antitumor drug, and its diffusion and retention time in tumors greatly affect the efficacy of tumor treatment. It is therefore necessary to quantify the distribution and content of iodized oil in the tumor. In this study, the distribution and content of iodized oil in tumors of living mice were quantitatively analyzed with SPCCT, and an effective and convenient SPCCT-based method was established. First, using iodized oil and water as the basis materials, material decomposition was performed to obtain the decomposed iodized-oil image. Then the iodized-oil content was calculated from the decomposed images, and its change over time was quantitatively analyzed. The experimental results show that the SPCCT-based quantification method established in this study can effectively track changes in the distribution and content of iodized oil over a long period, indicating the potential of SPCCT for quantitative drug analysis.
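The two-material decomposition step can be illustrated per pixel as a 2x2 linear inversion. The mass-attenuation values, bin count, and grid size below are invented for the sketch (real values depend on the SPCCT energy-bin thresholds):

```python
import numpy as np

# Illustrative mass-attenuation coefficients (cm^2/g) of the two basis
# materials in two energy bins; assumed values, not calibrated SPCCT data.
M = np.array([[30.0, 0.25],    # bin 1: [iodized oil, water]
              [ 8.0, 0.20]])   # bin 2: [iodized oil, water]

# Simulated ground-truth density maps (g/cm^3) on a tiny 4x4 grid.
rng = np.random.default_rng(1)
rho_oil = rng.uniform(0.0, 0.05, (4, 4))
rho_water = rng.uniform(0.9, 1.1, (4, 4))

# "Measured" attenuation images in the two bins: mu[b] = sum_e M[b,e]*rho[e].
mu = np.einsum('be,exy->bxy', M, np.stack([rho_oil, rho_water]))

# Material decomposition: invert the 2x2 system at every pixel.
rho = np.einsum('eb,bxy->exy', np.linalg.inv(M), mu)
oil_map, water_map = rho[0], rho[1]

# Total iodized-oil content = density summed over (unit-volume) voxels.
print(oil_map.sum(), rho_oil.sum())
```

Repeating the same inversion on scans acquired at different times gives the content-versus-time curve the study quantifies.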
The high utility and wide applicability of x-ray imaging has led to a rapidly increased number of CT scans over the past
years, and at the same time an elevated public concern over the potential risk of x-ray radiation to patients. Hence, a hot
topic is how to minimize x-ray dose while maintaining the image quality. The low-dose CT strategies include modulation
of x-ray flux and minimization of dataset size. However, these methods will produce noisy and insufficient projection
data, which represents a great challenge to image reconstruction. Our team has been working to combine statistical
iterative methods and advanced image processing techniques, especially dictionary learning, and have produced
excellent preliminary results. In this paper, we report recent progress in dictionary learning based low-dose CT
reconstruction, and discuss the selection of regularization parameters that are crucial for the algorithmic optimization.
The key idea is to use a “balancing principle” based on a model function to choose the regularization parameters during
the iterative process, and to determine a weight factor empirically to address the noise level in the projection domain.
Numerical and experimental results demonstrate the merits of our proposed reconstruction approach.
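The sparse-coding step at the heart of dictionary-learning regularization can be sketched with orthogonal matching pursuit. The random unit-norm dictionary below is an assumption standing in for one actually learned from CT patches, and the target is constructed to be exactly sparse:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms of D to fit y."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        corr = np.abs(D.T @ residual)
        corr[support] = 0.0                  # never re-pick a chosen atom
        support.append(int(np.argmax(corr)))
        # Re-fit all coefficients on the current support by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
D = rng.standard_normal((64, 256))           # random stand-in dictionary
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms

x_sparse = np.zeros(256)                     # a "patch" that is 3-sparse in D
x_sparse[[10, 40, 200]] = [1.5, -2.0, 1.0]
y = D @ x_sparse

x_hat = omp(D, y, k=5)                       # a couple of spare iterations
print(np.linalg.norm(D @ x_hat - y))
```

In the full reconstruction, this coding step runs on every patch inside each outer iteration, and the regularization parameter balances its fit against the data term.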
With the rapid growth of CT-based medical applications, low-dose CT reconstruction is becoming increasingly important to
human health. Compared with other methods, statistical iterative reconstruction (SIR) usually performs better in the
low-dose case. However, the reconstructed image quality of SIR depends heavily on the prior-based regularization, owing
to the insufficiency of low-dose data. The most frequently used regularization is developed from pixel-based priors, such
as smoothness between adjacent pixels. This kind of pixel-based constraint cannot distinguish noise from structures
effectively. Recently, patch-based methods such as dictionary learning and non-local means filtering have outperformed
the conventional pixel-based methods. A patch is a small image region that expresses the structural information of the
image. In this paper, we propose to use patch-based constraints to improve the image quality of low-dose CT
reconstruction. In the SIR framework, both patch-based sparsity and patch-based similarity are considered in the
regularization term: the sparsity is addressed by sparse representation and dictionary learning, while the similarity is
addressed by non-local means filtering. We conducted a real-data experiment to evaluate the proposed method. The
experimental results validate that this method leads to images with less noise and more detail than other methods in
low-count and few-view cases.
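The patch-based similarity idea behind non-local means can be sketched for a single pixel: weights come from whole-patch differences rather than single-pixel differences, which is what lets the filter separate noise from structure. Patch size, search window, and the smoothing parameter `h` below are illustrative choices:

```python
import numpy as np

def nlm_pixel(img, i, j, patch=1, search=3, h=0.3):
    """Non-local means estimate of img[i, j] from patch similarity."""
    p0 = img[i-patch:i+patch+1, j-patch:j+patch+1]
    num, den = 0.0, 0.0
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            ii, jj = i + di, j + dj
            p = img[ii-patch:ii+patch+1, jj-patch:jj+patch+1]
            # Similar patches get weight ~1, dissimilar patches ~0.
            w = np.exp(-np.sum((p0 - p) ** 2) / h ** 2)
            num += w * img[ii, jj]
            den += w
    return num / den

rng = np.random.default_rng(3)
clean = np.zeros((16, 16)); clean[:, 8:] = 1.0    # a vertical edge
noisy = clean + 0.1 * rng.standard_normal((16, 16))

est = nlm_pixel(noisy, 8, 4)     # a pixel on the flat side of the edge
print(est, noisy[8, 4])
```

Patches straddling the edge get near-zero weight, so the edge is preserved while the flat region is averaged, which is exactly the noise/structure separation the abstract argues pixel-based priors lack.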
Statistical CT reconstruction using the penalized weighted least-squares (PWLS) criterion can improve image quality in low-dose CT reconstruction, and a suitable design of the regularization term benefits it greatly. Recently, sparse representation based on dictionary learning has been used as the regularization term, resulting in high-quality reconstructions. In this paper, we incorporate a multiscale dictionary into statistical CT reconstruction, which preserves more detail than reconstruction based on a single-scale dictionary. Furthermore, we exploit reweighted l1-norm minimization for sparse coding, which performs better than plain l1-norm minimization in locating the sparse solution of underdetermined linear systems of equations. To mitigate the time-consuming computation of the gradient of the regularization term, we adopt the so-called double-surrogates method to accelerate ordered-subsets image reconstruction. Experiments showed that combining a multiscale dictionary with reweighted l1-norm minimization yields a reconstruction superior to that based on a single-scale dictionary and l1-norm minimization.
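Reweighted l1 minimization can be sketched with ISTA on a toy underdetermined system. The weights 1/(|x_i| + eps), re-estimated between rounds, penalize small coefficients harder and make the penalty a closer surrogate for l0 than plain l1; all sizes and parameters below are illustrative:

```python
import numpy as np

def soft(x, t):
    # Soft-thresholding, the proximal operator of the (weighted) l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(4)
m, n = 30, 80
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[[5, 20, 60]] = [2.0, -1.5, 1.0]
y = A @ x_true

lam, eps = 0.02, 0.1
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the data term
w = np.ones(n)                       # round 0: plain l1 weights
x = np.zeros(n)
for outer in range(4):               # reweighting rounds
    for _ in range(200):             # ISTA iterations for weighted l1
        x = soft(x - (A.T @ (A @ x - y)) / L, lam * w / L)
    w = 1.0 / (np.abs(x) + eps)      # small coefficients -> large penalty
print(np.linalg.norm(x - x_true))
```

Each round solves a weighted l1 problem; the updated weights then sharpen the next round, which is the behavior credited above for better locating sparse solutions.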
In medical x-ray computed tomography (CT) imaging devices, the x-ray tube usually emits a polychromatic spectrum of photons, resulting in beam-hardening artifacts in the reconstructed images. The bone-correction method has been widely adopted to compensate for beam-hardening artifacts. However, its correction performance is highly dependent on the empirical determination of a scaling factor, which is used to adjust the ratio of the reconstructed value in the bone region to the actual mass density of bone tissue. A significant problem with bone correction is that a large number of physical experiments are routinely required to accurately calibrate the scaling factor. In this article, an improved bone-correction method is proposed, based on the projection-data consistency condition, to automatically determine the scaling factor. Extensive numerical simulations have verified the existence of an optimal scaling factor, the sensitivity of bone correction to the scaling factor, and the efficiency of the proposed method for beam-hardening correction.
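The consistency-based selection of a scaling factor can be illustrated with a toy parallel-beam model: the zeroth-moment (Helgason-Ludwig) condition requires every view of an ideal sinogram to sum to the same total, so the factor can be chosen to minimize the variance of per-view sums. All numbers below are synthetic stand-ins for real bone-corrected projections, not the paper's actual consistency condition:

```python
import numpy as np

rng = np.random.default_rng(5)
n_views, n_det = 90, 50

# Synthetic "ideal" projections whose per-view sums are identical, as the
# zeroth-moment condition requires for parallel beams.
p_mono = rng.uniform(0.5, 1.5, (n_views, n_det))
p_mono *= (100.0 / p_mono.sum(axis=1))[:, None]

# Bone path-length projections (per-view sums deliberately unequal).
bone = rng.uniform(0.0, 1.0, (n_views, n_det)) \
       * np.linspace(0.5, 1.5, n_views)[:, None]

alpha_true = 0.8
p_poly = p_mono - alpha_true * bone   # hardening suppresses the bone signal

def inconsistency(alpha):
    sums = (p_poly + alpha * bone).sum(axis=1)
    return sums.var()                 # zero iff the moment condition holds

alphas = np.linspace(0.0, 2.0, 201)
alpha_hat = alphas[np.argmin([inconsistency(a) for a in alphas])]
print(alpha_hat)
```

The cost is quadratic in the scaling factor here, so a simple grid or line search finds the unique optimum, mirroring the existence and sensitivity findings reported above.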
This paper presents a statistical interior tomography approach that combines an optimization of the truncated Hilbert
transform (THT) data. With the introduction of the compressed sensing (CS) based interior tomography, a statistical
iteration reconstruction (SIR) regularized by the total variation (TV) has been proposed to reconstruct an interior region
of interest (ROI) with less noise from low-count local projections. After each update of the CS based SIR, a THT
constraint can be incorporated by an optimizing strategy. Since the noisy differentiated back-projection (DBP) and its
corresponding noise variance on each chord can be calculated from the Poisson projection data, an object function is
constructed to find an optimal THT of the ROI from the noisy DBP and the present reconstructed image. Then the
inversion of this optimized THT on each chord is performed, and the resulting ROI serves as the initial image of the next update
for the CS based SIR. In addition, a parameter in the optimization of THT step can be used to determine the stopping rule
of the iteration heuristically. Numerical simulations are performed to evaluate the proposed approach. Our results
indicate that this approach can reconstruct an ROI with high accuracy by reducing the noise effectively.
While classic CT theory targets exact reconstruction of a whole cross-section or of an entire object from complete
projections, practical applications often focus on much smaller internal ROIs. Traditional CT methods cannot exactly
reconstruct an internal ROI only from local truncated projections associated with x-rays through the ROI, because this
interior problem does not have a unique solution. When applying approximate local CT algorithms for interior
reconstruction from truncated projection data, features outside the ROI may create artifacts overlapping inside features,
rendering the images inaccurate or useless. Recently, novel solutions for the interior problem were published by our
group with numerical results demonstrating that the interior problem can be solved in a theoretically exact and
numerically stable fashion aided by some prior knowledge, such as a known small sub-region inside the ROI or the
assumption that the ROI can be modeled by a piecewise constant/polynomial function. In this invited paper, we review recent progress in local
reconstruction. The topics include lambda tomography and analytic and iterative interior reconstruction, with an emphasis
on total-variation-minimization-based soft-threshold methods and statistics-based interior reconstruction algorithms.
The long-standing interior problem has been recently revisited, leading to promising results on exact local reconstruction
also referred to as interior tomography. To date, there are two key computational ingredients of interior tomography. The
first ingredient is inversion of the truncated Hilbert transform with prior sub-region knowledge. The second is
compressed sensing (CS) assuming a piecewise constant or polynomial region of interest (ROI). Here we propose a
statistical approach for interior tomography incorporating the aforementioned two ingredients as well. In our approach,
projection data follows the Poisson model, and an image is reconstructed in the maximum a posteriori (MAP) framework
subject to other interior tomography constraints including known subregion and minimized total variation (TV). A
deterministic interior reconstruction based on the inversion of the truncated Hilbert transform is used as the initial image
for the statistical interior reconstruction. This algorithm has been extensively evaluated in numerical and animal studies
in terms of major image quality indices, radiation dose and machine time. In particular, our encouraging results from a
low-contrast Shepp-Logan phantom and a real sheep scan demonstrate the feasibility and merits of our proposed
statistical interior tomography approach.
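A minimal sketch of the MAP step under the Poisson model, with a smoothed TV penalty and gradient descent: the random nonnegative matrix stands in for the true projector, and the known-subregion constraint and truncated-Hilbert initialization used in the paper are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 32, 64
x_true = np.zeros(n); x_true[8:24] = 2.0     # piecewise-constant "ROI"
A = rng.uniform(0.0, 1.0, (m, n)) / n        # toy nonnegative projector
I0 = 1e4                                     # incident photon count

# Poisson measurements: y ~ Poisson(I0 * exp(-A x)).
y = rng.poisson(I0 * np.exp(-A @ x_true)).astype(float)

def neg_loglik_grad(x):
    expect = I0 * np.exp(-A @ x)
    # Gradient of sum(expect - y*log(expect)) under the Poisson model.
    return A.T @ (y - expect)

def tv_grad(x, eps=1e-3):
    d = np.diff(x)
    t = d / np.sqrt(d ** 2 + eps)            # smoothed sign of differences
    g = np.zeros_like(x)
    g[:-1] -= t
    g[1:] += t
    return g

x = np.zeros(n)
beta, step = 5.0, 2e-4
for _ in range(3000):
    x = np.clip(x - step * (neg_loglik_grad(x) + beta * tv_grad(x)), 0, None)
print(np.abs(x - x_true).mean())
```

The TV term suppresses noise in the flat regions while the Poisson likelihood keeps the reconstruction faithful to the counts, which is the trade-off the animal and phantom studies above evaluate.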
This paper presents a statistical reconstruction algorithm for dual-energy (DE) CT with a polychromatic x-ray source. Each
pixel in the imaged object is assumed to be composed of two basis materials (i.e., bone and soft tissue), and a
penalized-likelihood objective function is developed to determine the densities of the two basis materials. Two penalty
terms are used to penalize, respectively, the bone-density difference and the soft-tissue-density difference between
neighboring pixels. A gradient-ascent algorithm for the monochromatic objective function is modified to maximize the
polychromatic penalized-likelihood objective function using a convexity technique. To reduce the computational cost, the
denominator of the update step is pre-calculated with reasonable approximations. The ordered-subsets method is
applied to speed up the iteration. A computer simulation is implemented to evaluate the penalized-likelihood algorithm.
The results indicate that this statistical method yields the best image quality among the tested methods and retains good
noise properties even at lower photon counts.
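The ordered-subsets acceleration can be sketched on a toy linear system. Here a SART-style normalized update is applied subset by subset (an assumption for the sketch; the paper applies ordered subsets to its penalized-likelihood update), so one pass over the data delivers several corrective steps:

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, n_sub = 120, 40, 6
A = rng.uniform(0.0, 1.0, (m, n))
x_true = rng.uniform(0.0, 1.0, n)
y = A @ x_true

# Ordered subsets: split the views into n_sub interleaved groups.
subsets = [np.arange(s, m, n_sub) for s in range(n_sub)]

x = np.zeros(n)
for it in range(30):
    for idx in subsets:
        As, ys = A[idx], y[idx]
        row = As.sum(axis=1)          # per-ray normalization
        col = As.sum(axis=0)          # per-pixel normalization
        # One normalized correction using only this subset of the data.
        x = x + (As.T @ ((ys - As @ x) / row)) / col
rel_res = np.linalg.norm(A @ x - y) / np.linalg.norm(y)
print(rel_res)
```

Because each subset update is cheap but still corrective, the iterate moves roughly `n_sub` times per full data pass, which is the speed-up ordered subsets provide.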
KEYWORDS: Monte Carlo methods, Luminescence, Charge-coupled devices, Bioluminescence, Tomography, Tissue optics, Fluorescence tomography, Computer simulations, Scattering, In vivo imaging
Optical sensing of specific molecular targets using near-infrared light has been recognized as a crucial technology with
the potential to change the future of medicine. Fluorescence molecular tomography (FMT) is one of the newest
technologies in optical sensing: it uses near-infrared light (600-900 nm) for illumination and fluorochromes as probes to
perform non-contact three-dimensional imaging of live molecular targets and to reveal molecular processes in vivo. To
address the forward-simulation problem in FMT, this paper introduces a new simulation model. The model uses the
Monte Carlo method and is implemented in C++. Its accuracy has been verified by comparison with analytic solutions
and with MOSE from the University of Iowa and the Chinese Academy of Sciences. The main features of the model are
that it can simulate both bioluminescence imaging and FMT, perform analytic calculations, and support multiple sources
and CCD detectors simultaneously. It can thus generate sufficient, well-prepared data for the study of fluorescence
molecular tomography.
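The photon random walk at the heart of such a Monte Carlo forward model can be sketched in a few lines. This toy version tracks only the depth coordinate in a homogeneous slab with isotropic scattering; the optical coefficients are assumed values, and the paper's C++ model additionally handles full 3-D geometry, multiple sources, and CCD detectors:

```python
import numpy as np

rng = np.random.default_rng(8)

# Slab of thickness d with absorption/scattering coefficients mu_a, mu_s.
mu_a, mu_s, d = 0.1, 10.0, 1.0      # cm^-1, cm^-1, cm (assumed values)
mu_t = mu_a + mu_s
n_photons = 5000

absorbed = transmitted = reflected = 0
for _ in range(n_photons):
    z, uz = 0.0, 1.0                # launch at the surface, heading inward
    while True:
        # Free path length sampled from Exp(mu_t).
        step = -np.log(1.0 - rng.random()) / mu_t
        z += uz * step
        if z < 0.0:
            reflected += 1; break   # escaped back through the surface
        if z > d:
            transmitted += 1; break # escaped through the far side
        if rng.random() < mu_a / mu_t:
            absorbed += 1; break    # absorption event
        uz = rng.uniform(-1.0, 1.0) # isotropic scatter: cos(theta) uniform
print(reflected, absorbed, transmitted)
```

Every photon ends in exactly one bin, and in this strongly scattering slab far more photons re-emerge at the illuminated surface than are transmitted, which is the qualitative behavior a forward model must reproduce before inversion is attempted.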