Sparse-view computed tomography (CT) has great potential for reducing radiation dose and accelerating the scan process. Although deep learning (DL) methods have exhibited promising results in mitigating the streaking artifacts caused by very few projections, their generalization remains a challenge. In this work, we propose a DL-driven alternating Bayesian reconstruction method that efficiently integrates data-driven priors and data consistency constraints. The methodology involves two stages: universal embedding and consistency adaptation. In the embedding stage, we optimize the DL parameters to learn and eliminate general sparse-view artifacts on a large-scale paired dataset. In the subsequent consistency adaptation stage, an alternating Bayesian reconstruction further optimizes the DL parameters according to the individual projection data. Our proposed technique is validated within both image-domain and dual-domain DL frameworks using simulated sparse-view (90 views) projections. The results underscore the superior generalization and contextual structure recovery of our approach compared to networks trained solely with a supervised loss.
Computed tomography (CT) imaging is an essential diagnostic tool in clinical practice. Because it uses ionizing radiation, the dose delivered to the patient must be minimized. Currently, in clinical settings, the adjustment of radiation dose for CT imaging relies primarily on parameters such as tube voltage and tube current, which are often set based on experience, leading to potential unreliability and instability. In this work, we propose a reinforcement learning (RL) based approach for tuning the tube current and voltage according to the principle of As Low As Reasonably Achievable (ALARA). Our method involves the development of an automatic parameter adjustment network (APAN) to determine the optimal policy for parameter adjustment. In this preliminary study, APAN is trained in a simulation environment with images reconstructed by the Feldkamp-Davis-Kress (FDK) method, and the experiments demonstrate its ability to optimize the parameters to obtain a better dose distribution than a uniform or energy-absorption-based distribution.
Spectral inconsistency across detector pixels remains a major challenge that can directly lead to obvious artifacts in reconstructed CT images and severe inaccuracies in material discrimination. This work develops a novel approach to modeling the energy-threshold-induced spectral inconsistency of photon-counting CT (PCCT) in order to adaptively characterize and correct for spectral effects. The proposed method is based on the general model expression of photon counts, adds variables to the integration term to represent spectral distortion, and furthermore accounts for a threshold-related nonuniformity bias factor at the integration bounds. We assume that the variables associated with spectral distortion are mainly related to the system geometry and the detector material, so that only the threshold-related nonuniformity bias factor changes with the energy threshold, regardless of non-ideal effects such as pileup and charge sharing. After simplification, we solve the linear equations to obtain the threshold-related nonuniformity bias at different thresholds, and the corrected photon counts can then be calculated once the spectral distortion variables have been determined at the first reference threshold. Experimental results showed that the ring and band artifacts caused by spectral inconsistency were significantly reduced and that the quality of the reconstructed images visibly improved. The universality and robustness of our method are also preliminarily demonstrated.
Regularization is an essential term for suppressing noise and artifacts in iterative reconstruction algorithms. Total variation (TV) is one of the most successful regularizers: it can eliminate noise and streak artifacts while preserving edges. However, strong TV regularization usually produces staircase artifacts if the image contains non-constant regions. Recently, mean curvature (MC) regularization based on the geometry-driven diffusion model has been proposed for image processing problems. Geometrically, minimizing the mean curvature toward zero drives the image toward a linear surface. In this paper, we develop a linear-convolution approximation of mean curvature based on the local geometric properties of surfaces embedded in 3D space. We adopt the half-window kernel technique and formulate a novel edge-preserving mean curvature (EPMC) regularization. Compared with the traditional curvature diffusion model that employs second-order partial derivatives, the proposed method is efficient and concise. The optimization problem is solved with the ADMM algorithm. We conducted a simulation study comparing the proposed EPMC regularization with TV. The results demonstrate that the proposed method is superior in noise suppression and edge preservation in linear regions while avoiding staircase artifacts.
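For reference, the traditional second-order formulation treats the image as a surface $z=f(x,y)$ whose mean curvature is

$$H=\frac{1}{2}\,\nabla\cdot\left(\frac{\nabla f}{\sqrt{1+|\nabla f|^{2}}}\right),$$

which requires second-order partial derivatives at every pixel; the linear-convolution approximation above is designed to avoid evaluating this expression directly.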
The image mean and covariance required for a model observer are usually calculated statistically from image samples, which are hard to acquire in reality. Although some analytical methods have been proposed to estimate image covariance from a single projection, they carry a high computational cost for large images (e.g., 512×512), and large images are commonly required. Since the covariance used by a model observer is the covariance of the channel response, whose dimension is much smaller than that of the image covariance, we aim to obtain the small-dimensional channel-response covariance directly from the projection. Channel filters are applied to the analytical projection-to-image (Prj2Img) covariance estimation method to derive an analytical projection-to-channel-response (Prj2CR) covariance estimation method, which greatly reduces the computational cost and connects the covariance of the projection with that of the channel response. In addition, a transition matrix is introduced into the Prj2CR method to stabilize this connection. The transition matrix depends mainly on the channel filters, not on the system, phantom, or reconstruction algorithm, which means it can be calibrated with small-dimensional reconstructions and then applied to any situation using the same channel filters. We validate the feasibility and utility of the proposed Prj2CR method by simulations. 128×128 reconstructions from qGGMRF-WLS are adopted for calibration, while 512×512 reconstructions are used for validation. The SNR of a channelized Hotelling observer (CHO) is chosen as the figure of merit, and the covariance estimated from 290 image samples is used as the reference. Results show that the SNR obtained by the Prj2CR method lies within the 95% confidence interval of the SNR obtained from the 290 image samples, indicating that the proposed method agrees with the statistical method. The Prj2CR method may be beneficial for objective image quality assessment, since it needs only a single sample of projection data and has low computational cost.
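To make the role of the channel-response covariance concrete, the following is a minimal numpy sketch of the CHO figure of merit; the class means and covariance are assumed inputs here, and the Prj2CR estimator itself is not reproduced.

```python
import numpy as np

def cho_snr(delta_vbar, K_v):
    """CHO detectability from channel-response statistics.
    delta_vbar: mean channel-response difference between signal-present
    and signal-absent classes, shape (n_channels,).
    K_v: channel-response covariance, shape (n_channels, n_channels)."""
    w = np.linalg.solve(K_v, delta_vbar)   # Hotelling template in channel space
    return float(np.sqrt(delta_vbar @ w))  # SNR^2 = dv' K_v^{-1} dv

# Channel responses are v = T.T @ g for a channel-filter matrix T and image g;
# the Prj2CR method estimates K_v directly from a single projection instead
# of from hundreds of image samples.
```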
The radiation risk of X-ray CT has gained increasing concern in the past decades. Lowering the CT scan dose leads to noisy raw data as well as streak artifacts after reconstruction. Extensive studies have been conducted on reducing noise and artifacts in low-dose CT (LDCT). As deep learning has achieved great success in computer vision tasks, it has also become a powerful tool in LDCT denoising. Commonly used deep learning approaches such as supervised learning and generative adversarial learning depend strongly on large normal-dose CT (NDCT) datasets. In real cases, however, NDCT data are often expensive or inaccessible, which limits the implementation of deep learning. In recent studies, multiple deep learning methods have been proposed for LDCT denoising without NDCT data. Among them, a popular type of method is noisy-label training (NLT), which uses LDCT data as labels for supervised network training. Noise2Void is an easily implementable and representative NLT method and has achieved great results in denoising pixel-independent noise. Another type is distribution learning, which reduces the LDCT noise level by learning the NDCT distribution. Deep distribution learning from noisy samples (DDLN) learns the NDCT distribution from LDCT data only and adopts MAP estimation for LDCT denoising with the learned NDCT distribution prior; it is effective for LDCT projection data denoising. In this work, the two representative methods are compared for LDCT projection data denoising under different noise levels to identify their suitable application scenarios.
In conventional computed tomography (CT) systems, the X-ray source generally moves along a circular or spiral trajectory to achieve volume coverage. However, gantry rotation increases manufacturing complexity and is the dominant limit on temporal resolution. Recently, a new concept of symmetric-geometry computed tomography (SGCT) was explored, in which the sources and detectors are linearly distributed in a stationary configuration. No source or detector movement is needed during SGCT data acquisition, which has the advantages of increasing scanning speed and simplifying system construction. In this work, we investigate three-dimensional (3D) image reconstruction for SGCT, where the special scanning trajectory, a tilting straight-line scan, is of interest. Based on an analysis of the imaging geometry and the projection data representation, a tilting straight-line analytic reconstruction (TSLA) method is proposed for 3D tomography. Preliminary results on 3D simulated phantoms show that the TSLA algorithm for SGCT can reach a reconstruction accuracy comparable to that of helical multidetector CT using the PI-original method. Moreover, with no rotation involved, SGCT offers fast CT scanning and has potential in many 3D tomography applications where scanning speed is critical.
Helical CT has been widely used in clinical diagnosis. A sparsely spaced multidetector in the z direction can increase detector coverage given a limited number of detector rows. This speeds up volumetric CT scans, lowers the radiation dose, and reduces motion artifacts. However, it leads to insufficient data for reconstruction, meaning that reconstructions from standard analytical methods exhibit severe artifacts. Iterative reconstruction methods may be able to handle this situation, but at the cost of a huge computational load. In this work, we propose a cascaded dual-domain deep learning method that performs both data transformation in the projection domain and error reduction in the image domain. First, a convolutional neural network (CNN) in the projection domain is constructed to estimate the missing helical projection data and convert the helical projection data to 2D fan-beam projection data. This step suppresses helical artifacts and reduces the subsequent computational cost. Then, an analytical linear operator transfers the data from the projection domain to the image domain. Finally, an image-domain CNN is added to further improve image quality. These three steps work as a whole and can be trained end to end. The overall network is trained on a simulated lung CT dataset with Poisson noise from 25 patients. We evaluate the trained network on another three patients and obtain very encouraging results in both visual examination and quantitative comparison: the resulting RRMSE is 6.56% and the SSIM is 99.60%. In addition, we test the trained network on the lung CT dataset at different noise levels and on a new dental CT dataset to demonstrate the generalization and robustness of our method.
Substantial research has shown that widely used statistical iterative reconstruction (SIR) methods without strong constraints, for example maximum likelihood estimation, can introduce excessive noise into reconstructions, significantly degrading image quality. In this case, the traditional practice of iterating until convergence is no longer feasible. In this work, we propose a structural similarity index (SSIM) based stopping criterion for SIR. We define an indicator, referred to as mSSIM, of the turning point of noise amplification based on the SSIM map of reconstructed images from two adjacent iterations. The mSSIM is computed as the average of the SSIM map within regions of interest (ROIs). A threshold on the mSSIM is set as the stopping criterion of the iterative reconstruction. We applied this strategy to two different data noise models and iterative step sizes, with experimental tests on two practical datasets. Results show that we can successfully and stably obtain images of similar quality by applying this SSIM-based stopping criterion in different cases.
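A minimal sketch of such a stopping rule, assuming scikit-image is available; the threshold value and ROI handling are illustrative, not the paper's settings.

```python
import numpy as np
from skimage.metrics import structural_similarity

def should_stop(prev_img, curr_img, roi_mask, threshold=0.995):
    """Stop iterating once the mean SSIM (mSSIM) between two adjacent
    iterations, averaged inside the ROI, crosses the preset threshold."""
    _, ssim_map = structural_similarity(
        prev_img, curr_img,
        data_range=float(curr_img.max() - curr_img.min()),
        full=True)
    mssim = ssim_map[roi_mask].mean()
    return mssim >= threshold
```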
Wireless capsule endoscopy (WCE) enables physicians to examine the gastrointestinal (GI) tract without surgery. It has become a widely used diagnostic technique, but the huge volume of image data places a heavy burden on doctors. As a result, computer-aided diagnosis systems that can assist doctors as a second observer have gained great research interest. In this paper, we aim to demonstrate the feasibility of deep learning for lesion recognition. We propose a Second Glance framework for ulcer detection and verify its effectiveness and robustness on a large ulcer WCE dataset (the largest one, to our knowledge, for this problem) consisting of 1,504 independent WCE videos. The performance of our method is compared with off-the-shelf detection frameworks: our framework achieves the best ROC-AUC of 0.9235, outperforming RetinaNet (0.8901), Faster R-CNN (0.9038), and SSD-300 (0.8355).
In this study, we present a novel contrast-medium anisotropy-aware TTV (Cute-TTV) model to reflect the intrinsic sparsity configurations of a cerebral perfusion computed tomography (PCT) object. We also propose a PCT reconstruction scheme via the Cute-TTV model to improve the performance of PCT reconstruction in weak-radiation tasks (referred to as CuteTTV-RECON), and develop an efficient optimization algorithm for it. Preliminary simulation studies demonstrate that CuteTTV-RECON achieves significant improvements over existing state-of-the-art methods in terms of artifact suppression, structure preservation, and parametric-map accuracy under weak radiation.
Inverse-geometry computed tomography (IGCT) has potential in security inspection and medical applications. In this work, we explore a new concept of IGCT in a stationary configuration with linearly distributed sources and detectors (L-IGCT). To develop an exact analytical reconstruction for L-IGCT, we derive a direct filtered-backprojection (FBP) type algorithm. We validate our method by simulation with a Shepp-Logan head phantom, in which CT images are exactly reconstructed from two L-IGCT scans whose detector arrays are perpendicular to each other so as to provide sufficient projection data.
Spectral computed tomography (CT) with photon counting detectors (PCDs) can collect photons in different energy bins. It is well acknowledged that PCD-based spectral CT has great potential for lowering radiation dose and improving material discrimination. One critical processing step in spectral CT is energy spectrum modeling or spectral information decomposition. In this work, we propose a dual-domain deep learning (DDDL) method to calibrate a spectral CT system with a neural network. Without an explicit energy spectrum or detector response model, we train a neural network to implicitly define the nonlinear relationships in spectral CT, and virtual monochromatic attenuation maps are synthesized directly from polychromatic projections. Simulation and real experimental results verify the feasibility and accuracy of the proposed method.
X-ray imaging with grating interferometry (GI) can obtain additional phase and dark-field contrasts simultaneously with the traditional absorption contrast. Owing to the higher sensitivity of phase contrast and the subpixel spatial resolution probed by dark-field contrast, this technique has been established as promising for imaging low-density materials. The information retrieval algorithm for the three contrasts plays a key role in applications of the technique. Existing algorithms divide into two major types: the cosine-model analysis (CMA) method and the small-angle X-ray scattering (SAXS) method. However, the CMA method rests on an approximate cosine-model assumption, and the SAXS method requires a relatively complicated and time-consuming iterative deconvolution. To overcome these limitations, we introduce the convolutional neural network (CNN) technique for the first time. With collected detector data as the input and information retrieved via the SAXS method as the label, we design two CNN architectures. We train each network on 2,160 exposure images of six breast specimens and test on another 720 images of two breast specimens. With the structural similarity (SSIM) index as the quantitative standard, the results indicate that images retrieved by the much faster CNN algorithms are consistent with the SAXS method (best SSIM values of 0.9852, 0.9760, and 0.9006 for the absorption, phase, and dark-field contrasts, respectively).
Absorbed dose distributions in dental and maxillofacial cone-beam computed tomography (dental CBCT) are essential to dental CBCT dose indices, but direct measurements with thermoluminescence detectors are laborious. We establish a validated GEANT4-based absorbed dose simulation program with a mean deviation of 7.25% from experimental measurements. Dental CBCT absorbed dose distributions simulated by this program indicate that: the distributions are not always symmetric; asymmetric cases arise when the phantom center departs from the isocenter, for half-fan beams with a 360° scan angle range, and in some cases for full-fan beams with less than a 360° scan angle range; and dose index weights for circularly symmetric absorbed dose distributions differ considerably depending on whether the field-of-view diameter is larger or smaller than the phantom diameter.
Spectral computed tomography (SCT) has advantages in multi-energy material decomposition for material discrimination and quantitative image reconstruction. However, due to the non-ideal physical effects of photon counting detectors, including charge sharing, pulse pileup, and K-escape, it is difficult to obtain precise system models for practical SCT systems. Serious spectral distortion is unavoidable, which introduces error into the decomposition model and affects material decomposition accuracy. Recently, neural networks have demonstrated great potential in image segmentation, object detection, natural language processing, and other areas: by adjusting the interconnections among internal nodes, they provide a way to mine information from data. Considering the difficulty of modeling SCT system spectra and the data-driven nature of neural networks, we propose a spectral information extraction method that produces virtual monochromatic attenuation maps using a simple fully connected neural network, without knowledge of the spectrum. In our method, virtual monochromatic linear attenuation coefficients are obtained directly from the network output, which can contribute to further material recognition; the method also performs well in denoising and artifact suppression. It can be applied to SCT systems with different settings of energy bins or thresholds, and various available substances can be used for training. The trained neural network generalizes well according to our results, with testing mean square errors of about 1×10⁻⁵ cm⁻².
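As an illustration of the idea, here is a PyTorch sketch of such a per-ray fully connected mapping from multi-bin photon counts to a virtual monochromatic attenuation value; the layer sizes, log preprocessing, and training setup are assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn

class SpectralMLP(nn.Module):
    """Maps photon counts in n_bins energy bins (one detector reading)
    to a virtual monochromatic attenuation line integral."""
    def __init__(self, n_bins=5, n_hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bins, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, 1))

    def forward(self, counts):
        # Log transform tames the dynamic range of raw counts.
        return self.net(torch.log(counts.clamp(min=1.0)))

model = SpectralMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # against reference monochromatic values
```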
Penalized weighted least-squares (PWLS) image reconstruction with the widely used edge-preserving nonlocal means (NLM) penalty has shown the potential to significantly improve image quality in low-dose CT (LDCT). Since the nonlocal weights strongly affect the smoothness and resolution of the reconstruction, much effort has been devoted to improving their accuracy. A high-quality normal-dose image with less noise and fewer artifacts is sometimes used for the weight calculation as a further improvement. However, registration must be employed first when misalignment between the low-dose and normal-dose scans cannot be ignored; this brings extra work, and the effect of registration error on such methods is uncertain. This paper proposes a new NLM prior model based on normal-dose CT (NDCT) without registration, predicting the nonlocal weights by selecting the most similar patch samples from an NDCT database. The patch samples are determined by evaluating the similarity between patches from the NDCT and the target patch of the LDCT. After building the normal-dose-based NLM penalty, the PWLS objective function is iteratively minimized for reconstruction. Preliminary reconstruction with LDCT data has shown the method's potential for preserving structural detail.
Sparse-view CT imaging has been a hot topic in the medical imaging field. By decreasing the number of views, the dose delivered to patients can be significantly reduced. However, sparse-view CT reconstruction is an ill-posed problem: serious streaking artifacts occur if images are reconstructed with analytical methods. To solve this problem, much research has been carried out on optimization in the Bayesian framework based on compressed sensing, such as applying a total variation (TV) constraint. However, TV and other regularized iterative reconstruction methods are time-consuming due to the iterative process involved. In this work, we propose a method for angular resolution recovery in the projection domain based on a deep residual convolutional neural network (CNN), so that projections at unmeasured views can be estimated accurately. We validated the method on a disjoint data set unseen by the trained network. With the recovered projections, reconstructed images show few streaking artifacts, and details corrupted by sparse sampling are recovered. This deep-learning-based sinogram recovery can be generalized to other data-insufficient situations.
Markov random field (MRF) model-based penalties are widely used in statistical iterative reconstruction (SIR) of low-dose CT (LDCT) for noise suppression and edge preservation. In this strategy, normal-dose CT (NDCT) scans are usually used as a priori information to further improve LDCT quality. However, repeated CT scans are needed, and registration or segmentation must usually be applied first when misalignment exists between the low-dose and normal-dose scans. This study proposes a new MRF prior model for SIR based on an NDCT database without registration. In the proposed model, MRF weights are predicted using optimal similar patch samples from the NDCT database, determined by evaluating the Euclidean-distance similarity between patches from the NDCT and the target patch of the LDCT. The proposed prior term is incorporated into the SIR cost function, which is minimized for LDCT reconstruction. The proposed method is tested on artificial LDCT data derived from a high-dose patient scan. Preliminary results demonstrate its potential for preserving edges and structural details.
Spectral computed tomography (CT) has the advantage of providing energy spectrum information, which is valuable for multi-energy material decomposition, material discrimination, and accurate image reconstruction. However, due to the non-ideal physical effects of photon counting detectors (PCDs), such as charge sharing, pulse pileup, and K-escape, serious spectral distortion is unavoidable in practical systems. The degraded spectrum introduces error into the decomposition model and affects the accuracy of material decomposition. Recently, artificial neural networks have demonstrated great potential in image segmentation, object detection, natural language processing, and other tasks. By adjusting the interconnections among a large number of internal nodes, a neural network provides a way to mine information from large amounts of data, depending on the complexity of the network. Considering the difficulty of modeling the spectral CT system spectrum, including the response function of a PCD, and the data-driven nature of neural networks, we propose a novel multi-energy material decomposition method using a neural network that requires no knowledge of the spectrum. On one hand, specific linear attenuation coefficients can be obtained directly through our method, aiding further material recognition and spectral CT reconstruction; on the other hand, the network outputs show outstanding performance in image denoising and artifact suppression. Our method accommodates different selections of training materials and different imaging system settings, such as the number of energy bins and the energy bin thresholds. According to our test results, the trained neural network generalizes well.
Tremendous research effort has been devoted to lowering the X-ray radiation exposure to the patient in order to expand the utility of computed tomography (CT), particularly in pediatric imaging and population-based screening. As the exposure dose goes down, both the X-ray quanta fluctuation and the system electronic background noise become significant factors affecting image quality. Conventional edge-preserving noise smoothing sacrifices tissue textures and compromises clinical tasks. To address these challenges, this work models the noise by pre-log shifted-Poisson statistics and extracts tissue textures from previous normal-dose CT scans as prior knowledge for texture-preserving Bayesian reconstruction of current ultralow-dose CT images. The pre-log shifted-Poisson model accurately accounts for both the X-ray quanta fluctuation and the system electronic noise, while the prior knowledge of tissue textures removes the limitation of conventional edge-preserving noise smoothing. The Bayesian reconstruction was tested in experimental studies. One patient chest scan was selected from a database of 133 patients' scans at the 100 mAs/120 kVp normal-dose level, and ultralow-dose data were simulated from it at the 5 mAs/120 kVp level. The other 132 normal-dose scans were grouped according to how close their lung tissue texture patterns are to those of the selected patient scan, and the tissue textures of each group were used to reconstruct the ultralow-dose scan with the Bayesian algorithm. The group closest to the selected patient produced results almost identical to the reconstruction using the tissue textures of the selected patient's own normal-dose scan, indicating the feasibility of extracting tissue textures from a previous normal-dose database to reconstruct any current ultralow-dose CT image. Since Bayesian reconstruction can be time-consuming, this work further investigates a strategy of storing the projection matrix rather than computing the line integrals on the fly, which accelerated the computation by more than 18 times.
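For concreteness, here is a sketch of the pre-log shifted-Poisson data term in its standard formulation (the paper's exact prior and weighting are not reproduced). The model treats the shifted measurement $y+\sigma_e^2$ as Poisson with mean $\bar y+\sigma_e^2$.

```python
import numpy as np

def shifted_poisson_nll(y, ybar, sigma_e2):
    """Negative log-likelihood (up to a constant) of the shifted-Poisson
    model: y + sigma_e2 ~ Poisson(ybar + sigma_e2).
    y: measured pre-log counts; ybar: expected counts from the current
    image estimate; sigma_e2: electronic noise variance."""
    ys, ms = y + sigma_e2, ybar + sigma_e2
    return float(np.sum(ms - ys * np.log(ms)))
```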
Purpose: Our preliminary study showed the capability of a deep learning neural network (DLNN) based method to eliminate a specific type of artifact in CT images. This work comprehensively studies the applicability of a U-net CNN architecture for improving the image quality of CT reconstructions by testing its performance on various artifact removal tasks. Methods: A U-net architecture is trained on a large dataset of contaminated and expected image pairs. The expected images, known as reference images, are acquired from ground truths or with a superior imaging system. Proper initialization of the network parameters, careful normalization of the original data, and a residual learning objective are incorporated into the framework to speed training convergence. Both numerical and real data studies are conducted to validate the method. Results: In the numerical studies, we found that DLNN-based artifact reduction is powerful and works well in reducing nearly all types of artifacts, recovering detailed structural information in low-quality images (e.g., plain FBP reconstructions) when the network is trained with ground truths provided. In real situations where ground truth is not available, the proposed method can characterize the discrepancy between contaminated data and higher-quality reference labels produced by other techniques, mimicking their capability to reduce artifacts. Generalization is also examined using disjoint testing data. All results show that the DLNN framework can be applied to various artifact reduction tasks and outperforms conventional methods with shorter runtime. Conclusion: This work obtained promising results, with the U-net architecture successfully characterizing both global and local artifact patterns. By forward-propagating contaminated images through the trained network, undesired artifacts can be greatly reduced while structural information is maintained in the input CT image. Note that the proposed deep network must be trained independently for each specific case.
KEYWORDS: Digital signal processing, Sensors, Electronics, X-ray imaging, Signal processing, X-ray detectors, Inspection, Analog electronics, Semiconductors, X-rays
Multi-purpose readout electronics based on the DPLMS digital filter have been developed for CdTe and CZT detectors in X-ray imaging applications. Different filter coefficients can be synthesized, optimized either for high energy resolution at relatively low counting rates or for high-rate photon counting with reduced energy resolution. The effects of signal width constraints, sampling rate, and sampling length were studied numerically by Monte Carlo simulation with simple CR-RC shaper input signals. The signal width constraint had a minor effect: the ENC increased by only 6.5% when the signal width was shortened to 2τc. The required sampling rate and length depend on the characteristic time constants of both the input and output signals. For simple CR-RC input signals, the minimum number of filter coefficients was 12, with a 10% increase in ENC, when the output time constant was close to the input shaping time. A prototype readout system was developed for demonstration, using a previously designed analog front-end ASIC and a commercial ADC card. Two different DPLMS filters were successfully synthesized and applied for high-resolution and high-counting-rate applications, respectively. The readout electronics were also tested with a linear-array CdTe detector. The energy resolution at the Am-241 59.5 keV peak was measured to be 6.41% FWHM with the high-resolution filter and 13.58% FWHM with the high-counting-rate filter under a 160 ns signal width constraint.
Spectral CT is attracting increasing attention in medicine, industrial nondestructive testing, and security inspection. Material decomposition is central to discriminating materials with a spectral CT. Because of the spectral overlap of energy channels, as well as the correlation of the basis functions, it is well acknowledged that the decomposition step in spectral CT imaging amplifies noise and causes artifacts in the component coefficient images. In this work, we propose material decomposition via an optimization method to improve the quality of the decomposed coefficient images. Starting from the general optimization problem, total variation (TV) minimization is imposed on the coefficient images in the overall objective function with adjustable weights, and we solve this constrained optimization problem in the ADMM framework. Validation is performed on both a numerical dental phantom in simulation and a real pig-leg phantom on a practical CT system using dual-energy imaging. Both the numerical and physical experiments give visibly better reconstructions than a direct inversion method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method reduces noise without losing spatial resolution, thereby improving image quality. The method can easily be incorporated into different types of spectral imaging modalities, as well as cases with more than two energy channels.
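In hedged form, writing $c_k$ for the component coefficient images, $p$ for the measured multi-energy data, and $\mathcal{D}$ for the forward decomposition model, the objective described above can be stated as

$$\min_{\{c_k\}}\ \big\|\mathcal{D}(\{c_k\})-p\big\|_2^2+\sum_k \lambda_k\,\mathrm{TV}(c_k),$$

where the $\lambda_k$ are the adjustable weights; ADMM then splits the TV terms from the data-fidelity term. The exact data-fidelity form is an assumption, as the abstract does not state it.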
Dual-energy CT (DECT) imaging has gained a lot of attention because of its capability to discriminate materials. We propose a flexible DECT scan strategy that can be realized on a system with general X-ray sources and detectors. To lower the dose and scanning time, our DECT acquires two projection data sets on two arcs of limited angular coverage (one for each energy), while a certain number of rays from the two data sets form conjugate sampling pairs. Our reconstruction method for such a DECT scan mainly tackles the resulting limited-angle problem. Using an artificial neural network, we exploit the connection between projections at the two energies by constructing a relationship between the linear attenuation coefficients at the high and low energies. We use this relationship to cross-estimate the missing projections and, for each energy, reconstruct attenuation images from an augmented data set comprising the projections at views covered by that energy (collected during scanning) and by the other energy (estimated). Validated by a numerical experiment on a dental phantom with rather complex structures, our DECT is effective in recovering small structures in severe limited-angle situations. This scanning strategy can greatly broaden practical DECT designs.
In this article, we present an easy-to-implement multi-energy CT scanning strategy and a corresponding reconstruction method, which facilitate spectral CT imaging by improving the data efficiency by a factor of the number of energy channels without introducing the visible limited-angle artifacts that normally result from reducing projection views. Leveraging the structural coherence across energies, we first pre-reconstruct a prior structure image using the projection data from all energy channels. Then, we perform k-means clustering on the prior image to generate a sparse dictionary representation for the image, which serves as a structure-information constraint. We combine this constraint with a conventional compressed sensing method and propose a new model which we refer to as Joint Clustering Prior and Sparsity Regularization (CPSR). CPSR is a convex problem, and we solve it by the Alternating Direction Method of Multipliers (ADMM).
We verify our CPSR reconstruction method with a numerical simulation experiment. A dental phantom with complicated structures of teeth and soft tissue is used. X-ray beams from three spectra of different peak energies (120 kVp, 90 kVp, 60 kVp) irradiate the phantom to form tri-energy projections, and projection data covering only 75° from each energy spectrum are collected for reconstruction. Independent reconstruction for each energy would cause severe limited-angle artifacts even with the help of compressed sensing approaches; CPSR provides images free of limited-angle artifacts, with all edge details well preserved in our experimental study.
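A minimal sketch of the clustering step, assuming scikit-learn; the number of clusters and the gray-level-only features are illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_prior(prior_img, n_clusters=6):
    """k-means on the pre-reconstructed prior image: groups pixels into a
    few representative classes, yielding a sparse (label, center) coding
    that serves as the structure-information constraint."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(prior_img.reshape(-1, 1))
    labels = km.labels_.reshape(prior_img.shape)
    centers = km.cluster_centers_.ravel()
    return labels, centers
```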
This work gives a new compressed sensing (CS) based computed tomography (CT) reconstruction method for the limited-angle problem. Current CS-based reconstruction methods minimize the total variation (TV) of the CT image under a data consistency constraint. In the limited-angle problem, because of the missing range of projection views, the strength of the data consistency constraint becomes direction dependent. We therefore propose a new anisotropic total variation (ATV) minimization method: instead of using the image TV as the minimization objective, we design an ATV objective that combines multiple 1D directional TVs with different weights chosen according to the actual scanned angular range. Experiments with simulated data demonstrate the advantages of our approach over standard CS-based reconstruction methods.
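A numpy sketch of the directional-TV objective, assuming a finite-difference approximation of the directional derivative; the choice of angles and weights according to the scanned range is left to the caller.

```python
import numpy as np

def directional_tv(img, angle):
    """1D TV of `img` along direction `angle` (radians), using the
    directional derivative cos(a)*df/dx + sin(a)*df/dy."""
    dx = np.diff(img, axis=1)[:-1, :]   # d/dx, cropped to common shape
    dy = np.diff(img, axis=0)[:, :-1]   # d/dy, cropped to common shape
    return np.abs(np.cos(angle) * dx + np.sin(angle) * dy).sum()

def atv(img, angles, weights):
    """Weighted combination of 1D directional TVs (the ATV objective)."""
    return sum(w * directional_tv(img, a) for a, w in zip(angles, weights))
```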
Multi-segment straight-line trajectory computed tomography (CT) requires accurate movement between segment trajectories. In industrial applications involving large objects, it is difficult and very costly for the mechanical and control system to guarantee accurate movement during the rotation between two segment trajectories. Hence, precisely measuring the movement is an alternative route to an exact reconstruction. In this work, we propose a new method to measure the movement using invariant moments of the attenuation distribution images; its accuracy improves as the amount of projection data increases. The method is validated by both numerical and practical experiments, and reasonable CT reconstructions are obtained for a two-segment linear trajectory CT with the coarse mechanical movement control modeled in our experiments.
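One concrete reading of the moment-based measurement, sketched with low-order image moments (centroid plus principal-axis orientation); whether the paper uses exactly these invariants is an assumption.

```python
import numpy as np

def pose_from_moments(img):
    """Estimate an in-plane pose (centroid and orientation) of an
    attenuation image from its low-order moments."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00
    mu20 = ((x - cx) ** 2 * img).sum() / m00
    mu02 = ((y - cy) ** 2 * img).sum() / m00
    mu11 = ((x - cx) * (y - cy) * img).sum() / m00
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # principal axis
    return cx, cy, theta

# The movement between two segment trajectories can then be read off as
# the difference between the poses estimated from the two reconstructions.
```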
The Poisson-like noise model has been widely used for noise suppression and image reconstruction in low-dose computed tomography, and various noise estimation and suppression approaches have been developed to enhance image quality. Among them, the recently proposed generalized Anscombe transform (GAT) has been utilized to stabilize the variance of Poisson-Gaussian noise. In this paper, we present a variance estimation approach using the GAT. After the transform, the projection data are denoised conventionally under the assumption that the noise variance is uniformly equal to 1. The difference between the original and denoised projections is treated as pure noise, and the global variance σ² is estimated from this residual; a final denoising step is then performed with the estimated σ². The proposed approach is verified on a cone-beam CT system and shown to obtain a more accurate estimate of the actual parameter. We also examine the FBP algorithm with the two-step noise suppression applied in the projection domain using the estimated noise variance. Reconstruction results with simulated and practical projection data suggest that the presented approach can be effective in practical imaging applications.
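A sketch of the two-step pipeline under the simplest Poisson-Gaussian model $z = p + n$, $n\sim\mathcal N(0,\sigma^2)$; the Gaussian smoother stands in for the conventional denoiser, and the exact estimator relating the residual statistics to the noise parameter is abstracted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gat(z, sigma):
    """Generalized Anscombe transform for Poisson-Gaussian data
    z = p + n, n ~ N(0, sigma^2): stabilizes the variance to ~1."""
    return 2.0 * np.sqrt(np.maximum(z + 3.0 / 8.0 + sigma ** 2, 0.0))

def two_step_denoise(proj, denoise=lambda t: gaussian_filter(t, sigma=2)):
    t = gat(proj, sigma=0.0)        # step 1: transform with an initial guess
    residual = t - denoise(t)       # treated as pure noise (variance ~ 1
                                    # if the model were exact)
    sigma2 = residual.var()         # global variance estimate; mapping it
                                    # back to the Gaussian noise parameter
                                    # follows the paper (abstracted here)
    return denoise(gat(proj, np.sqrt(sigma2))), sigma2
```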
Helical CT scanning has been acknowledged as a very useful scanning mode. Normally, the speed of bed movement per rotation (pitch) of a helical CT is fixed to meet the required scanning speed. To reduce system cost, single-slice helical CT (SSHCT) is often chosen in many applications. It is therefore interesting and practically useful to ask how to design the detector to obtain optimal performance of an SSHCT in a detection task. In this work, we applied an ROC study to optimize the detector thickness along the rotation-axis direction of our SSHCT. Numerical simulations followed by human observer studies were conducted. Compound Gaussian noise is modeled in our numerical simulations for objects both with and without lesions, and an analytical FBP reconstruction method with rebinning is used for reconstructing the noisy data. The reconstructions show that thin detectors lead to artifacts, while thick detectors lead to lesion blurring and lower contrast; both affect lesion detection in practical imaging applications. According to our ROC tests on images from five choices of detector thickness, optimal performance is obtained when the detector thickness is around 1-1.25 times the helical pitch. Moreover, we find that the optimal point is about the same under different noise levels.

Some other figures of merit, including SNR and HTC, are also calculated and examined in this work. The results correlate well with the AUC results, suggesting that they could serve well as indicators for system optimization when few nonlinear physical effects and little reconstruction processing are involved.
Cosmic-ray muon radiography, which has good penetrability and sensitivity to high-Z materials, is an effective way of detecting shielded nuclear materials, and the reconstruction algorithm is the key point of this technique. Currently, there are two main families of algorithms. One is the Point of Closest Approach (POCA) reconstruction algorithm, which reconstructs from track information; the other is maximum likelihood estimation, such as the Maximum Likelihood Scattering (MLS) and Maximum Likelihood Scattering and Displacement (MLSD) reconstruction algorithms proposed by Los Alamos National Laboratory (LANL). MLSD performs better than MLS, since it uses both scattering and displacement information while MLS uses only scattering information. To obtain these maximum likelihood estimates, we propose in this paper to use the EM method (MLS-EM and MLSD-EM). Then, to save reconstruction time, we use the ordered-subsets (OS) technique to accelerate the MLS and MLSD algorithms, with the initial value set to the result of the POCA reconstruction; this yields the Maximum Likelihood Scattering OSEM (MLS-OSEM) and Maximum Likelihood Scattering and Displacement OSEM (MLSD-OSEM) algorithms. Numerical simulations show that MLSD-OSEM is effective and performs better than MLS-OSEM.
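A generic ordered-subsets EM skeleton showing the acceleration structure described above; the MLS/MLSD-specific likelihood update is abstracted into a callback, and the POCA result supplies the initial image.

```python
import numpy as np

def os_em(lam_poca, tracks, subsets, em_update, n_iter=10):
    """lam_poca: initial scattering-density image from POCA;
    subsets: list of index arrays partitioning the muon tracks;
    em_update(lam, tracks, idx): one EM update using only tracks[idx]."""
    lam = lam_poca.copy()
    for _ in range(n_iter):
        for idx in subsets:          # one sub-iteration per ordered subset
            lam = em_update(lam, tracks, idx)
    return lam
```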
Helical cone-beam CT is widely used nowadays because of its rapid scan speed and efficient utilization of X-ray dose. HCT-FDK is an effective reconstruction algorithm for helical CT. However, like other 3D reconstruction algorithms, HCT-FDK is time-consuming because of the large amount of data processing involved, including convolution and 3D-3D backprojection. Recently, GPUs have been widely used to parallelize reconstruction algorithms. The latest GPUs offer attractive features such as large memory, many processors, fast 3D texture mapping, and flexible frame buffer objects, all of which benefit reconstruction. In this paper, we present a GPU solution to this problem. First, we introduce a lookup table into HCT-FDK; then both the convolution and the backprojection are implemented on the GPU; finally, the reconstruction result is directly smoothed and visualized by the GPU. Experimental results compare a CPU with two generations of GPUs, the GeForce 6800 GT and GeForce 8800 GTX, on both simulated and real data. We show that GPU-accelerated HCT-FDK produces results with similar levels of noise and clarity while running roughly 10-100 times faster than the CPU-only implementation. With its newer features, the GeForce 8800 GTX achieves quality similar to the GeForce 6800 GT while being about 20 times faster.
In today's tomographic imaging, incomplete-data systems such as few-view systems are increasingly common. The advantages of few-view tomography are lower X-ray dose and reduced scanning time. In this work, we study the projection distribution in few-view fan-beam imaging, one of the fundamental problems in few-view imaging because of the severe lack of projection data. The aim is to reduce data redundancy and improve the quality of reconstructed images through research on projection distribution schemes. The reconstruction algorithm for few-view imaging is based on algebraic reconstruction techniques (ART) with a total variation (TV) constraint, as proposed by E. Sidky et al. in 2006. The study of few-view fan-beam projection distributions is performed mainly by comparing several distribution types in projection space and in the reconstructed images. Results show that, among five typical distributions, the short-scan type yields the best image.
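A compact numpy sketch in the spirit of the ART+TV scheme of Sidky et al. (alternating an ART sweep with TV gradient descent); the dense system matrix, step sizes, and iteration counts are illustrative assumptions.

```python
import numpy as np

def tv_grad(u, eps=1e-8):
    """Gradient of a smoothed TV: -div(grad u / |grad u|)."""
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
    px, py = ux / mag, uy / mag
    return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

def art_tv(A, b, shape, n_iter=50, beta=0.2, tv_steps=10, alpha=0.02):
    """A: dense (n_rays, n_pixels) system matrix; b: measured sinogram."""
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_iter):
        for i in range(A.shape[0]):              # ART sweep (data consistency)
            x += beta * (b[i] - A[i] @ x) * A[i] / max(row_norms[i], 1e-12)
        img = np.clip(x, 0, None).reshape(shape)
        for _ in range(tv_steps):                # TV minimization step
            img = img - alpha * tv_grad(img)
        x = img.ravel()
    return x.reshape(shape)
```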
The aim of the present study is to investigate a type of Bayesian reconstruction that utilizes partial differential equation (PDE) image models as regularization. PDE image models are widely used in image restoration and segmentation. In a PDE model, the image can be viewed as the solution of an evolutionary differential equation, and its evolution can be regarded as the descent of an energy functional, which allows us to use PDE models in Bayesian reconstruction. In this paper, two PDE models of the anisotropic diffusion type are studied. Both have edge-preserving and denoising characteristics like the popular median root prior (MRP). We use PDE regularization with an ordered-subsets accelerated Bayesian one-step-late (OSL) reconstruction algorithm for emission tomography; the OS-accelerated OSL algorithm is more practical than a non-accelerated one. The proposed algorithm is called OSEM-PDE. We validated OSEM-PDE on a Zubal phantom in numerical experiments with attenuation correction and quantum noise considered, and compared the results with OSEM and an OS version of MRP (OSEM-MRP) reconstruction. OSEM-PDE gives better results in both bias and variance: the reconstructed images are smoother yet have sharper edges, and thus are more suitable for post-processing such as segmentation, which we verify using a k-means segmentation algorithm. Classic OSEM is not convergent, especially in noisy conditions; in our experiment, however, OSEM-PDE benefits from OS acceleration and remains stable and convergent, whereas OSEM-MRP fails to converge.
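For illustration, here is one explicit step of Perona-Malik anisotropic diffusion, the class of PDE model studied here; the conductance function and parameters are standard choices, not necessarily the paper's.

```python
import numpy as np

def perona_malik_step(u, kappa=0.05, dt=0.2):
    """One explicit Perona-Malik update: diffuses within smooth regions
    while the conductance g suppresses diffusion across edges."""
    dN = np.roll(u, -1, axis=0) - u
    dS = np.roll(u, 1, axis=0) - u
    dE = np.roll(u, -1, axis=1) - u
    dW = np.roll(u, 1, axis=1) - u
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping conductance
    return u + dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
```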
A new imaging configuration whose trajectory is a multisegment straight line is investigated, and a practical reconstruction algorithm is proposed. It is a natural extension of an imaging configuration with a straight-line trajectory, and such scanning systems may be useful in industrial and security inspection. As is known, projection data from a single straight-line trajectory are incomplete, and their reconstruction suffers from a limited-angle problem; a multisegment straight-line trajectory can compensate for this deficiency. To reconstruct images, a practical reconstruction algorithm of the Feldkamp-Davis-Kress (FDK) type is derived, which is efficient and straightforward. Like the FDK algorithm, our reconstruction is exact in the midplane and exact everywhere if the density of the scanned object is independent of the z direction, although the integral of the reconstructed image along z is no longer preserved. Numerical simulations validate our method.
A computed tomography (CT) imaging configuration with a straight-line trajectory is investigated, and a direct filtered backprojection (FBP) algorithm is presented. This kind of system may be useful for industrial applications and security inspection. Projections from a straight-line trajectory have a special property: the data from each detector element correspond to a parallel-beam projection at a certain view angle. However, the sampling steps of the parallel beams differ from view to view. Rebinning the raw projections into uniformly sampled parallel-beam projections is a common choice for this type of reconstruction problem, but the rebinning procedure loses spatial resolution because of interpolation. Our reconstruction method is first derived from the Fourier slice theorem, using a coordinate transform and the geometrical relations in projection and backprojection; it is then extended to 3D scanning geometry. Finally, data-shift preprocessing is proposed to reduce computation and memory requirements by removing useless projections from the raw data. With this method, spatial resolution is better preserved and the reconstruction is less sensitive to data truncation than with the rebinning-to-parallel-beam method. To deal with the limited-angle problem, an iterative reconstruction-reprojection method is introduced to estimate the missing data and improve image quality.
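For reference, the Fourier slice theorem underlying the derivation states that the 1D Fourier transform of a parallel projection $p_\theta(t)$ equals a radial slice of the object's 2D Fourier transform:

$$\hat{p}_\theta(\omega)=\hat{f}(\omega\cos\theta,\ \omega\sin\theta).$$

The coordinate transform mentioned above connects this relation to the nonuniform parallel-beam sampling produced by the straight-line trajectory.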
Metal artifacts arise in CT images when X-rays traverse highly attenuating objects such as metal bodies, making portions of the projection data unavailable. In this paper, we present an Euler's elastica and curvature based sinogram inpainting (EECSI) algorithm for metal artifact reduction, where "inpainting" is a synonym for image interpolation. In EECSI, the unavailable data are regarded as an occlusion and are inpainted inside the inpainting domain based on elastica interpolants. Numerical simulations demonstrate that, compared with conventional interpolation methods, the proposed algorithm fills the unavailable projection region more smoothly and accurately, and thus better reduces metal artifacts and more accurately reveals cross-sectional structures, especially in the immediate neighborhood of the metallic objects.
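One common form of the elastica-based inpainting energy (following Chan, Kang, and Shen; the exact variant used in EECSI may differ) penalizes both the length and the squared curvature of the level lines of the image $u$:

$$E[u]=\int_{\Omega}\left(a+b\,\kappa^{2}\right)|\nabla u|\,dx,\qquad \kappa=\nabla\cdot\frac{\nabla u}{|\nabla u|},$$

which is why elastica interpolants connect the missing sinogram region with smooth, curvature-controlled level lines rather than straight continuations.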
In this paper, we present a backprojection filtration (BPF) type reconstruction algorithm for cone-beam circular scans based on Zou and Pan's work. The algorithm can use all the projection data associated with the PI-line segments over the 2π scanning range; because all the projection data in 2π are used, the algorithm performs well on practical noisy projection data. The algorithm is validated with numerical and practical experiments, the latter performed on our X-ray CT system with a flat-panel detector, and the results are compared with FDK reconstructions. From the experimental results, we conclude that the BPF algorithm can satisfy the requirements of X-ray CT inspection.
The T-FDK algorithm is an FDK-type cone-beam CT reconstruction algorithm. Like other 3D reconstruction algorithms, T-FDK is time-consuming because of the large amount of data processing involved. One solution to this problem is to utilize PC graphics boards (GPUs) for acceleration, and the recent dramatic evolution of GPUs has made this approach practical. In this paper, we use a new floating-point GPU to speed up the 3D T-FDK algorithm, which differs from the original FDK method in structure. Because floating-point pipelines are slower than hardwired 8-bit texture mapping facilities but are more precise numerically, we balance reconstruction speed and quality by using both. On an NVIDIA GeForce 6800 GT, our GPU-accelerated T-FDK method runs 27.612 times faster than a software implementation.
Image segmentation is a classical and challenging problem in image processing and computer vision. Most segmentation algorithms, however, do not consider overlapping objects. Due to the special characteristics of X-ray imaging, object overlap is very commonly seen in X-ray images and needs to be dealt with carefully. In this paper, we propose a novel energy functional to solve this problem. The Euler-Lagrange equation is derived, and the segmentation is converted to a front-propagation problem that can be efficiently solved by level-set methods. We note that the proposed energy functional has no unique extremum, so the solution depends on the initialization; an initialization method is therefore proposed to obtain satisfactory results. Experiments on real data validate the proposed method.
Cupping artifacts are one of the most serious problems in middle-to-low-energy X-ray flat-panel-detector (FPD) based cone-beam CT systems. Both beam-hardening effects and scatter can induce cupping artifacts in reconstructions and degrade image quality. In this paper, a two-step cupping-correction method is proposed: 1) scatter removal; 2) beam-hardening correction. Using experimental measurements with a beam-stop array (BSA), the X-ray scatter distribution of a specific object is estimated in the projection image; after interpolation and subtraction, the primary intensity of the projection image is computed. The scatter distribution can also be obtained by convolution with a low-pass filter kernel. Linearization is used as the beam-hardening correction for single-material objects. For two-material cylindrical objects, a new approach without iteration is presented, consisting of three steps: first, correct the raw projections with the mapping function of the outer material; second, reconstruct the cross-section image from the modified projections; finally, scale the image with a simple weighting function. After scatter removal and beam-hardening correction, the cupping artifacts are well removed and the contrast of the reconstructed image is remarkably improved.
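A sketch of the linearization step for single-material beam-hardening correction; the calibration thicknesses, readings, and polynomial order below are hypothetical.

```python
import numpy as np

# Calibration: known thicknesses t (cm) of the material and measured
# polychromatic projections q = -ln(I / I0) (hypothetical values).
t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
q = np.array([0.0, 0.28, 0.52, 0.95, 1.70, 2.90])

coeffs = np.polyfit(q, t, deg=3)   # map measured q back to thickness

def linearize(proj, mu_eff=0.6):
    """Replace each polychromatic projection value by the value an ideal
    monochromatic beam (effective attenuation mu_eff) would produce."""
    return mu_eff * np.polyval(coeffs, proj)
```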
Optical character recognition (OCR) is a classical research field and has become one of the most successful applications in pattern recognition. Feature extraction is a key step in the OCR process. This paper presents three algorithms for feature extraction from binary images: the lattice with distance transform (DTL), stroke density (SD), and co-occurrence matrix (CM). The DTL algorithm improves the robustness of the lattice feature by using a distance transform to increase the separation between foreground and background, thereby reducing the influence of stroke boundaries. The SD and CM algorithms extract robust stroke features based on the fact that humans recognize characters by their strokes, including length and orientation: SD reflects quantized stroke information (length and orientation), CM reflects the length and orientation of contours, and together they describe strokes sufficiently. Since the three groups of feature vectors complement each other in expressing characters, we integrate them and adopt a hierarchical algorithm to achieve optimal performance. Our methods are tested on the USPS (United States Postal Service) database and the Vehicle License Plate Number Pictures Database (VLNPD). Experimental results show that the methods achieve a high recognition rate with reasonable average running time. Under similar conditions, we also compared our results to the box method proposed by Hannmandlu [18]; our methods demonstrated better efficiency.
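A sketch of the DTL idea, assuming scipy; the signed combination of distance transforms and the lattice size are illustrative choices.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dtl_features(binary_char, grid=(8, 8)):
    """Distance-transform lattice feature: widen the foreground/background
    separation with a signed distance map, then average it per lattice cell."""
    d = (distance_transform_edt(binary_char)
         - distance_transform_edt(1 - binary_char))
    h, w = binary_char.shape
    gh, gw = h // grid[0], w // grid[1]
    return np.array([d[i*gh:(i+1)*gh, j*gw:(j+1)*gw].mean()
                     for i in range(grid[0]) for j in range(grid[1])])
```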
An efficient noise-treatment scheme has been developed to achieve low-dose CT diagnosis based on currently available CT hardware and image reconstruction technologies. The proposed scheme includes two main parts: filtering in the sinogram domain and smoothing in the image domain. The acquired projection sinograms are first treated by our previously proposed Karhunen-Loeve (K-L) domain penalized weighted least-squares (PWLS) filtering, which fully utilizes the prior statistical noise properties and three-dimensional (3D) spatial information for an accurate restoration of the low-dose projections. To treat the streak artifacts caused by photon starvation, we also incorporate adaptive filtering into the PWLS framework, selectively smoothing the channels that contribute most to the streak artifacts. After sinogram filtering, the image is reconstructed by the conventional filtered backprojection (FBP) method. The image is modeled as piecewise regions, each with a unique texture, so an edge-preserving smoothing (EPS) with parameters locally adaptive to the noise variation is applied for further noise reduction in the image domain. Experimental phantom projections acquired on a GE spiral computed tomography (CT) scanner at 10 mAs tube current were used to evaluate the proposed scheme. The reconstructed images demonstrate that the scheme, with appropriate control parameters, provides a significant improvement in noise suppression without sacrificing spatial resolution.
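For reference, the generic PWLS objective underlying the sinogram-domain filtering (stated here in standard form; the K-L domain decorrelation details are not reproduced) is

$$\min_{q}\ (y-q)^{\mathrm T}\,\Sigma^{-1}(y-q)+\beta\,R(q),$$

where $y$ is the measured sinogram data, $\Sigma$ the (diagonal) estimate of the noise covariance, $R$ a smoothness penalty, and $\beta$ the regularization strength.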