For U-Net based low-dose CT (LDCT) imaging, there remains an interesting question: can an LDCT imaging neural network trained at one image resolution be transferred and applied directly to another LDCT imaging application with a different image resolution, provided that both the noise level and the structural content are similar? To answer this question, numerical simulations were performed with high-resolution (HR) and low-resolution (LR) LDCT images having comparable noise levels. Results demonstrate that a U-Net trained with LR CT images can effectively reduce the noise in HR CT images, and vice versa. However, additional artifacts may be generated when the same U-Net is transferred to an LDCT imaging task with a different image spatial resolution, due to the resolution-dependent 2D features induced by noise. For example, noticeable bright spots were generated at the edges of the FOV when the HR CT image was denoised by the U-Net trained on LR CT images. In conclusion, this study suggests that it is necessary to retrain the U-Net for each dedicated LDCT imaging application.
Dual-energy computed tomography (DECT) is a promising imaging modality: it has the potential to quantify different material densities and plays an important role in many clinical applications. To enable multiple material decomposition (MMD), the conventional analytical MMD algorithm assumes that at most three materials are present in each image pixel and decomposes each pixel into a certain basis material triplet. However, the MMD algorithm requires strong prior knowledge of the mixture composition, and its decomposition performance is compromised around the boundaries between different compositions. In this work, we developed an analytical-model-based deep neural network, MMD-Net, to achieve multi-material decomposition in DECT. In particular, the type of basis material triplet in each image pixel and the attenuation coefficients of each material are learned by dedicated convolutional neural network modules, and the material-specific density maps are obtained from the analytical MMD algorithm. Physical experiments on a pig leg and a pork backbone specimen with inserted iodine solutions were performed to evaluate the performance of MMD-Net. Results show that the proposed MMD-Net provides high decomposition accuracy and reduces decomposition artifacts.
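To make the analytical step concrete, the sketch below solves the per-pixel three-material system described above (two measured attenuation values plus volume conservation) for one assumed basis triplet. The triplet choice and attenuation coefficients are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def decompose_pixel(mu_low, mu_high, triplet):
    """Solve the 3x3 system (two measured attenuations + volume conservation)
    for the volume fractions of one assumed basis-material triplet.
    `triplet` maps material name -> (mu_low, mu_high) in 1/cm; values are illustrative."""
    names = list(triplet)
    A = np.array([[triplet[n][0] for n in names],   # low-energy attenuation row
                  [triplet[n][1] for n in names],   # high-energy attenuation row
                  [1.0, 1.0, 1.0]])                 # volume-conservation row
    b = np.array([mu_low, mu_high, 1.0])
    fractions = np.linalg.solve(A, b)
    return dict(zip(names, fractions))

# Example with assumed attenuation coefficients for a water/CaCl2/iodine triplet;
# the chosen measurement corresponds to roughly 70% water, 25% CaCl2, 5% iodine.
triplet = {"water": (0.25, 0.20), "cacl2": (0.90, 0.60), "iodine": (6.0, 3.0)}
print(decompose_pixel(0.70, 0.44, triplet))  # expected ~ {water: 0.70, cacl2: 0.25, iodine: 0.05}
```

In the MMD-Net framework described above, the triplet type and the attenuation coefficients fed into this kind of per-pixel solve are learned by the CNN modules rather than fixed a priori.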
As one of the most advanced CT imaging modalities, spectral CT plays an important role in generating material-specific information and adding vital clinical value for disease diagnosis and therapy. Currently, obtaining spectral CT images often requires an advanced X-ray source or detector assembly, which significantly increases the hardware cost and the patient burden. As a consequence, the accessibility of spectral CT is strongly limited, and it has not been widely implemented in daily clinics. To address this difficulty, this work investigates a new CT data acquisition protocol and spectral CT image reconstruction algorithm. In particular, the X-ray tube voltage is slowly modulated during the gantry rotation, so that spectral information that varies from one projection view to another can be acquired in a single CT scan. Afterwards, a model-based material decomposition algorithm that reconstructs the CT image from the acquired projections is used to perform multi-material decomposition. To evaluate the performance of this novel spectral CT imaging approach, a numerical phantom containing iodine and gadolinium solutions is imaged with different kVp modulations, i.e., different numbers of modulation periods per rotation. Results demonstrate that the proposed spectral CT image reconstruction algorithm can accurately decompose the water, iodine, and gadolinium basis images for different tube voltage modulation rates. Moreover, high-quality monochromatic images can be synthesized as well. In conclusion, a low-cost multi-material spectral CT imaging approach is developed based on the slow tube voltage modulation method.
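A generic one-step, model-based decomposition objective consistent with this description is sketched below; the spectrum S_theta(E) is set by the instantaneous tube voltage at view theta, and the exact data-fidelity weighting and regularizer R are not specified in the abstract, so they are assumptions here.

```latex
% Sketch of a polychromatic forward model and model-based decomposition objective
% (generic form; the authors' exact cost function is an assumption).
\[
\hat{I}_{\theta}(\ell) = \int S_{\theta}(E)\,
  \exp\!\Big(-\!\!\sum_{m\in\{\mathrm{water},\,\mathrm{I},\,\mathrm{Gd}\}}
  \mu_m(E)\,\big[\mathbf{A}\,\mathbf{x}_m\big]_{\theta,\ell}\Big)\,\mathrm{d}E ,
\qquad
\{\hat{\mathbf{x}}_m\} = \arg\min_{\{\mathbf{x}_m\}}
  \sum_{\theta,\ell}\big(I_{\theta}(\ell)-\hat{I}_{\theta}(\ell)\big)^{2}
  + \lambda \sum_{m} R(\mathbf{x}_m) .
\]
```

Here A is the forward-projection operator and x_m are the basis density maps; the view dependence of S_theta(E) is exactly what the slow kVp modulation provides.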
Cone-beam CT (CBCT) imaging systems based on flat-panel detectors have been widely implemented in image-guided intervention and radiation therapy applications. However, the imaging performance of CBCT is strongly limited. One such limitation is the lack of quantitative imaging capability, which is important for material recognition, image contrast enhancement, and dose reduction. Over the past decade, dual-energy computed tomography (DECT) has become a promising imaging technique for generating quantitative material information; however, multiple (>2) basis images with high quality and accuracy are hard to obtain from conventional DECT image reconstruction algorithms. In this work, an innovative deep learning technique is presented to realize three-material decomposition from dual-energy CBCT scans. In this strategy, a dedicated end-to-end convolutional neural network (CNN) is developed that accepts the low- and high-energy CBCT projections and automatically outputs three basis image volumes (water, iodine, and CaCl2) with high accuracy. Training data were synthesized numerically from photos downloaded from ImageNet. Dual-energy projections of an iodine/CaCl2 phantom with known ground truth were acquired on our in-house benchtop CBCT system to validate the proposed method. Results demonstrate that this novel network is able to generate the three material bases with high accuracy (decomposition errors less than 5%). In conclusion, the proposed CNN-based multi-material (≥3) decomposition approach shows promising benefits for high-quality dual-energy CBCT imaging applications.
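Since the abstract states that training data were synthesized from ImageNet photos, a minimal 2D sketch of one plausible synthesis recipe is given below. The material mixing rule and the effective attenuation coefficients are assumptions for illustration only, and a 2D fan-beam-style projection stands in for the actual cone-beam geometry.

```python
import numpy as np
from skimage.transform import radon

# Assumed effective linear attenuation coefficients (1/cm) at the low/high spectra;
# the values used by the authors are not given in the abstract.
MU = {"low":  {"water": 0.25, "iodine": 6.0, "cacl2": 0.9},
      "high": {"water": 0.20, "iodine": 3.0, "cacl2": 0.6}}

def synthesize_pair(photo, theta):
    """Turn one grayscale natural image into three basis volume-fraction maps and
    the corresponding low/high-energy projections (illustrative recipe only)."""
    photo = photo / photo.max()
    basis = {"water": photo,                               # soft-tissue-like background
             "iodine": (photo > 0.8) * 0.01,               # sparse dilute-iodine regions
             "cacl2": ((photo > 0.5) & (photo <= 0.8)) * 0.2}
    projections = {}
    for spec in ("low", "high"):
        mu_map = sum(MU[spec][m] * basis[m] for m in basis)      # linear attenuation map
        projections[spec] = radon(mu_map, theta=theta, circle=False)  # line integrals
    return basis, projections   # basis maps serve as the ground-truth labels
```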
The purpose of this study is to evaluate and compare the quantitative imaging performance of dual-energy CT (DECT) and differential phase contrast CT (DPCT). The electron density (ρe) and the effective atomic number (Zeff) are selected as the two comparison bases for DECT and DPCT imaging. From numerically simulated data, image-domain decomposition algorithms are used to extract the ρe and Zeff information at three different spatial resolution levels (0.3 mm, 0.1 mm, and 0.03 mm). Contrast-to-noise ratio (CNR) and model-observer studies are performed to compare the DECT and DPCT quantitative imaging performance. At low spatial resolution (0.3 mm), DECT shows better quantitative imaging performance than DPCT. On the contrary, DPCT outperforms DECT for ultra-high spatial resolution (0.03 mm) imaging. At 0.1 mm spatial resolution, DECT and DPCT show similar quantitative imaging performance. In conclusion, DECT is favored for low spatial resolution applications, such as diagnostic imaging tasks, whereas DPCT is recommended for ultra-high spatial resolution tasks, such as micro-CT imaging.
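For reference, one common image-domain parameterization of the two comparison quantities is given below; the mixture rule and the exponent are conventional choices and may differ from the exact model used in the study.

```latex
% Conventional electron-density and effective-atomic-number mixture rules
% (assumed form; n ~ 3.3 is the usual photoelectric-dominance exponent).
\[
\rho_e = \sum_i f_i\,\rho_{e,i},
\qquad
Z_{\mathrm{eff}} = \Big(\sum_i w_i\,Z_i^{\,n}\Big)^{1/n},
\quad w_i = \frac{f_i\,\rho_{e,i}}{\rho_e},\ \ n \approx 3.3 ,
\]
```

where f_i are the basis-material volume fractions obtained from the image-domain decomposition of the DECT (or DPCT) data.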
Significance: Single-molecule localization-based super-resolution microscopy has enabled imaging of microscopic objects beyond the diffraction limit. However, this technique requires imaging an extremely large number of frames of the biological sample to generate a super-resolution image, which lengthens the acquisition time. Additionally, processing such a large image sequence leads to a long data-processing time. Therefore, accelerating image acquisition and processing in single-molecule localization microscopy (SMLM) has been of perennial interest.
Aim: To accelerate three-dimensional (3D) SMLM imaging by leveraging a computational approach without compromising the resolution.
Approach: We used blind sparse inpainting to reconstruct high-density 3D images from low-density ones. The low-density images are generated using far fewer frames than usually needed, thus requiring shorter acquisition and processing times. Therefore, our technique accelerates 3D SMLM without changing the existing standard SMLM hardware system or labeling protocol.
Results: The performance of blind sparse inpainting was evaluated on both simulated and experimental datasets. Superior reconstruction of 3D SMLM images was achieved using up to 10-fold fewer frames in simulation and up to 50-fold fewer frames in experimental data.
Conclusions: We demonstrate the feasibility of fast 3D SMLM imaging leveraging a computational approach to reduce the number of acquired frames. We anticipate our technique will enable future real-time live-cell 3D imaging to investigate complex nanoscopic biological structures and their functions.
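As a concrete illustration of the "fewer frames" input described in the Approach above, the sketch below renders a localization-density image from only the first fraction of the acquired frames. The field of view, pixel size, and localization-table layout are assumptions, and a 2D rendering is shown for brevity (the study itself works with 3D data).

```python
import numpy as np

def render_density_image(locs, fov_nm=20000.0, pixel_nm=20.0, keep_frames=None):
    """Render a localization-density image by binning (x, y) localizations onto a
    super-resolution grid. Keeping only the first `keep_frames` frames produces the
    low-density input that blind sparse inpainting would restore to high density.
    Assumed column layout of `locs`: x (nm), y (nm), frame index."""
    if keep_frames is not None:
        locs = locs[locs[:, 2] < keep_frames]
    n_pix = int(fov_nm / pixel_nm)
    img, _, _ = np.histogram2d(locs[:, 1], locs[:, 0],
                               bins=n_pix, range=[[0, fov_nm], [0, fov_nm]])
    return img

# e.g. a 50-fold acceleration: keep 1/50 of the frames before rendering
# low_density = render_density_image(locs, keep_frames=total_frames // 50)
```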
Reducing the radiation dose is always an important topic in modern computed tomography (CT) imaging. As the dose level is reduced, the conventional analytical filtered backprojection (FBP) reconstruction algorithm becomes ineffective at generating satisfactory CT images for clinical applications. To overcome this difficulty, in this study we developed a novel deep neural network (DNN) for low-dose CT image reconstruction that exploits simultaneous sinogram-domain and image-domain denoising. The key idea is to jointly denoise the acquired sinogram and the reconstructed CT image while reconstructing the CT image in an end-to-end manner with the help of the DNN. Specifically, this new DNN contains three compartments: a sinogram-domain denoising compartment, a sinogram-to-image reconstruction compartment, and an image-domain denoising compartment. This sinogram- and image-domain CT reconstruction network is named ADAPTIVE-NET. By design, the first and third compartments of ADAPTIVE-NET can mutually update their parameters for CT image denoising during network training. One advantage of ADAPTIVE-NET is that the unique information stored in the sinogram can be accessed directly during network training. Validation results obtained from numerical simulations demonstrate that the newly proposed ADAPTIVE-NET can effectively improve the quality of CT images acquired at low radiation dose levels.
As a quantitative CT imaging technique, dual-energy CT (DECT) has attracted considerable research interest. However, material decomposition from high-energy (HE) and low-energy (LE) data may suffer from magnified noise, resulting in severe degradation of image quality and decomposition accuracy. To overcome these challenges, this study presents a novel DECT material decomposition method based on a deep neural network (DNN). In particular, the new DNN integrates the CT image reconstruction task and the nonlinear material decomposition procedure into one single network. This end-to-end network consists of three compartments: a sinogram-domain decomposition compartment, a user-defined analytical domain transformation operation (OP) compartment, and an image-domain decomposition compartment. By design, the first and third compartments are responsible for the complicated nonlinear material decomposition while denoising the DECT images. Natural images are used to synthesize the dual-energy data with assumed volume fractions and density distributions. By doing so, the burden of collecting clinical DECT data can be significantly reduced, making the new DECT reconstruction framework easier to implement. Both numerical and experimental validation results demonstrate that the proposed DNN-based DECT reconstruction algorithm can generate high-quality basis images with improved accuracy.
Studies have shown that the conventionally estimated visibility and differential phase signals in grating-based Talbot-Lau imaging systems are intrinsically biased. Since such bias is mainly caused by applying the conventional signal estimation approach to noisy data, it remains an open question whether a better signal estimation method exists to reduce it. To answer this question, we propose an end-to-end supervised deep computed signal estimation network (XP-NET) to extract the three unknown signals, i.e., the absorption, the dark-field, and the phase contrast. Numerical phase stepping data generated from natural images are used to train the network. Afterwards, both numerical and experimental studies are performed to validate the performance of the proposed XP-NET method. Results show that at high radiation dose levels, signals retrieved by XP-NET are identical to those obtained from the conventional analytical method. However, XP-NET can reduce the phase signal bias by as much as 15% as the radiation dose level decreases. With the reduced bias, the phase images become more accurate.
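For context, the conventional analytical retrieval that XP-NET is compared against can be sketched as follows: the mean, first-harmonic amplitude, and first-harmonic phase of each phase-stepping curve are extracted by a discrete Fourier transform, and sample and reference scans are combined in the standard Talbot-Lau way. Array shapes and naming are assumptions.

```python
import numpy as np

def retrieve_signals(curve_sample, curve_ref):
    """Conventional analytical retrieval from N-point phase-stepping curves
    (the baseline the abstract compares XP-NET against).
    Each curve is a 1D array of detector intensities over equally spaced steps."""
    def fourier_components(curve):
        f = np.fft.fft(curve)
        n = curve.size
        a0 = f[0].real / n                 # mean intensity
        a1 = 2.0 * np.abs(f[1]) / n        # first-harmonic amplitude
        phi = np.angle(f[1])               # first-harmonic phase
        return a0, a1, phi

    a0_s, a1_s, phi_s = fourier_components(curve_sample)
    a0_r, a1_r, phi_r = fourier_components(curve_ref)
    absorption = -np.log(a0_s / a0_r)                    # transmission -> absorption
    dark_field = (a1_s / a0_s) / (a1_r / a0_r)           # visibility reduction
    diff_phase = np.angle(np.exp(1j * (phi_s - phi_r)))  # wrapped phase difference
    return absorption, dark_field, diff_phase
```

It is this estimator, applied to noisy stepping curves, that produces the bias the abstract discusses; XP-NET replaces it with a learned mapping trained on synthetic phase-stepping data.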
Low-dose computed tomography (CT) has attracted much attention in clinical applications since X-ray radiation can pose serious health risks to patients. Sparse-view CT imaging is one of the major ways to reduce the radiation dose. However, if sparse-view CT images are reconstructed with the conventional filtered backprojection (FBP) algorithm, image quality may be significantly degraded by severe streaking artifacts. Therefore, iterative reconstruction (IR) algorithms have been developed and used to improve sparse-view CT image quality. One drawback of IR algorithms is that they usually require long computation times. Additionally, adjusting and optimizing the hyper-parameters needed during the iterations is also time-consuming and sometimes depends on individual experience. These drawbacks strongly limit the wide application of IR algorithms. Aiming to partially overcome these difficulties, in the present work we propose a deep iterative reconstruction (DIR) framework that generalizes conventional IR algorithms by mapping them onto deep neural networks (DNNs). With the proposed DIR framework, the prior term, the data fidelity term, and the hyper-parameters can all be represented and learned by the network. By doing so, generalized iterative models can be used to perform high-quality sparse-view CT image reconstruction. Numerical experiments based on clinical patient data demonstrate that the proposed DIR algorithms mitigate the streaking artifacts more effectively while preserving subtle structures well.
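A minimal sketch of what such an unrolled ("deep iterative") reconstruction can look like is given below, assuming differentiable forward- and back-projection operators A and At are available. The block structure, learned step size, and small CNN prior are illustrative choices, not necessarily the authors' exact DIR design.

```python
import torch
import torch.nn as nn

class UnrolledBlock(nn.Module):
    """One unrolled iteration: a gradient step on the data-fidelity term followed by a
    learned prior update; the step size is a learnable hyper-parameter."""
    def __init__(self):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))       # learned step size
        self.prior = nn.Sequential(                       # learned prior / proximal map
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, x, sino, A, At):
        x = x - self.step * At(A(x) - sino)               # data-fidelity gradient step
        return x + self.prior(x)                          # residual prior update


class DIRNet(nn.Module):
    """Cascade of unrolled blocks, mirroring an iterative algorithm of fixed depth.
    `A` / `At` are assumed differentiable projection / backprojection operators."""
    def __init__(self, n_iter=5):
        super().__init__()
        self.blocks = nn.ModuleList(UnrolledBlock() for _ in range(n_iter))

    def forward(self, x0, sino, A, At):
        x = x0
        for blk in self.blocks:
            x = blk(x, sino, A, At)
        return x
```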
Limited-angle tomography has gained much interest in recent years. Nevertheless, image reconstruction from incomplete projections is a classic ill-posed problem in the field of computational imaging. In this paper, we propose a scheme based on sparsifying operators and an approximation of ℓ0-minimization. Our framework includes two main components: one for the sparsifying operator, and one for learning the scheme parameters using ℓ0-minimization from insufficient computed tomography data. Thus, the proposed scheme is capable of recovering high-quality reconstructions over a range of angular coverages and noise levels. Compared to the total-variation (TV) regularized reconstruction scheme, the σ-u scheme, and the anisotropic total variation (ATV) scheme, validations using Shepp-Logan phantom computed tomography data demonstrate significant improvements in SNR and better suppression of noise and artifacts.
In this work, we realize image-domain backprojection-filtration (BPF) CT image reconstruction using the convolutional neural network (CNN) method. Within this new CT image reconstruction framework, the acquired sinogram data are first backprojected to generate a highly blurred laminogram. Afterwards, the laminogram is fed into the CNN to retrieve the desired sharp CT image. Both numerical and experimental results demonstrate that this new CNN-based image reconstruction method can reconstruct CT images from the laminogram with spatial resolution and pixel-value accuracy comparable to those of the conventional FBP method. The experimental results also show that the performance of this new reconstruction network does not depend on the radiation dose level used. Due to these advantages, the proposed CNN-based image-domain BPF-type image reconstruction strategy provides promising prospects for generating high-quality CT images in future clinical applications.
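The laminogram-then-deblur idea can be reproduced in a few lines with scikit-image: an unfiltered backprojection yields the blurred laminogram that the CNN would subsequently sharpen. The CNN itself is omitted here; the phantom, angular sampling, and use of skimage are illustrative assumptions.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

phantom = rescale(shepp_logan_phantom(), 0.5)              # 200x200 test image
theta = np.linspace(0.0, 180.0, 360, endpoint=False)
sino = radon(phantom, theta=theta)                          # forward projection

laminogram = iradon(sino, theta=theta, filter_name=None)    # unfiltered backprojection: blurred
fbp = iradon(sino, theta=theta, filter_name='ramp')         # conventional FBP reference

# In the described method, a CNN maps `laminogram` -> sharp CT image;
# the FBP image here serves only as a conventional reference.
```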
In dental computed tomography (CT) scanning, high-quality images are crucial for oral disease diagnosis and treatment. However, many artifacts, such as metal artifacts, downsampling artifacts, and motion artifacts, can degrade image quality in practice. The main purpose of this article is to reduce motion artifacts, which are caused by patient movement during data acquisition in the dental CT scanning process. The goal of this study was therefore to develop a dental CT motion artifact-correction algorithm based on a deep learning approach. We used dental CT data with motion artifacts reconstructed by conventional filtered back-projection (FBP) as inputs to a deep neural network, with the corresponding high-quality CT data as labels during training. We propose training a generative adversarial network (GAN) with Wasserstein distance and mean squared error (MSE) loss (m-WGAN) to remove motion artifacts and obtain high-quality dental CT images. To improve the generator structure, the generator uses a cascaded CNN-Net style network with residual blocks. To the best of our knowledge, this work describes the first deep learning method applied to a commercial cone-beam dental CT scanner. We compared the performance of a general GAN and the m-WGAN. The experimental results confirm that the proposed algorithm effectively removes motion artifacts from dental CT scans. The proposed m-WGAN method achieves a higher peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) and a lower root-mean-squared error (RMSE) than the general GAN method.
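A minimal sketch of the combined Wasserstein/MSE training objective described above follows, assuming a generator G and critic D have been defined elsewhere. The adversarial weighting and the omitted Lipschitz constraint (weight clipping or gradient penalty) are assumptions, not the paper's exact settings.

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()
lambda_adv = 1e-3   # assumed weighting between the adversarial and MSE terms

def generator_loss(G, D, x_artifact, y_clean):
    """Wasserstein adversarial term plus pixel-wise MSE, as described in the abstract."""
    fake = G(x_artifact)
    return -lambda_adv * D(fake).mean() + mse(fake, y_clean)

def critic_loss(G, D, x_artifact, y_clean):
    """The critic approximates the Wasserstein distance between real and generated images.
    A Lipschitz constraint (clipping or gradient penalty) would be added in practice."""
    fake = G(x_artifact).detach()
    return D(fake).mean() - D(y_clean).mean()
```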
In this work, we present a novel convolutional neural network (CNN) enabled Moiré artifact reduction framework for the three contrast mechanism images, i.e., the absorption image, the differential phase contrast (DPC) image, and the dark-field (DF) image, obtained from an X-ray Talbot-Lau phase contrast imaging system. By mathematically modeling the various potential non-ideal factors that may cause Moiré artifacts as random fluctuations of the phase stepping position, rigorous theoretical analyses show that the Moiré artifacts in absorption images have a distribution frequency similar to that of the detected phase stepping Moiré diffraction fringes, whereas their periods in the DPC and DF images may be doubled. Based on these theoretical findings, training datasets for the three different contrast mechanisms are synthesized properly from natural images. Afterwards, the same modified auto-encoder type CNN is trained independently on each of the three datasets. Both numerical simulations and experimental studies are performed to validate the performance of this newly developed Moiré artifact reduction method. Results show that the CNN is able to reduce residual Moiré artifacts efficiently. With the improved signal accuracy, the radiation dose efficiency of the Talbot-Lau interferometry imaging system can be greatly enhanced.
Compressed sensing (CS) is a technique for acquiring and reconstructing sparse signals below the Nyquist rate. For images, the total variation of the signal is usually minimized to promote sparsity of the image gradient. However, as with all L1-minimization algorithms, total variation penalizes large gradients, thus causing large errors at image edges. Many non-convex penalties have been proposed to address this issue of L1 minimization. For example, homotopic L0 minimization algorithms have shown success in reconstructing images from magnetic resonance imaging (MRI). However, homotopic L0 minimization may suffer from local minima and may not be sufficiently robust when the signal is not strictly sparse or the measurements are contaminated by noise. In this paper, we propose a hybrid total variation minimization algorithm that integrates the benefits of both L1 and homotopic L0 minimization for image recovery from reduced measurements. The algorithm minimizes the conventional total variation when the gradient is small and minimizes the L0 of the gradient when the gradient is large, with the transition between L1 and L0 determined by an auto-adaptive threshold. The proposed algorithm thus inherits the robustness of L1 minimization to noise and approximation errors, as well as the reduced measurement requirements of L0 minimization. Experimental results using MRI data demonstrate that the proposed hybrid total variation minimization algorithm yields improved image quality over other existing methods in terms of reconstruction accuracy.
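One plausible realization of the hybrid penalty described above is sketched below: the cost grows linearly (L1/TV) for small gradient magnitudes and saturates (L0-like) above an auto-adaptive threshold. The percentile rule used to set the threshold is an assumption for illustration, not the paper's exact scheme.

```python
import numpy as np

def hybrid_tv_penalty(image, percentile=90):
    """Hybrid L1/L0 gradient penalty: L1 (|g|) for small gradients, a constant
    (L0-like) cost for gradients above an auto-adaptive threshold."""
    gx = np.diff(image, axis=0, append=image[-1:, :])
    gy = np.diff(image, axis=1, append=image[:, -1:])
    g = np.sqrt(gx**2 + gy**2)                 # gradient magnitude
    tau = np.percentile(g, percentile)         # auto-adaptive threshold (assumed rule)
    return np.where(g <= tau, g, tau).sum()    # linear below tau, flat above (edges not over-penalized)
```

In a full reconstruction, this penalty would replace the conventional TV term inside the iterative CS solver, so that strong edges are no longer penalized in proportion to their height.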
Compressed sensing has the potential to address the challenge of simultaneously requiring high temporal and spatial resolution in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) by randomly undersampling k-space with a predesigned trajectory. However, the traditional variable-density (VD) design scheme includes inherent randomness, since many probability density functions (PDFs) correspond to a given acceleration factor and one fixed PDF can generate different trajectories. This randomness may translate into uncertainty in kinetic parameter estimation. We first evaluate how this one-to-many mapping in trajectory design influences DCE parameter estimation when high reduction factors are used. We then propose a robust design scheme that adaptively segments k-space into low- and high-frequency domains according to the specific characteristics of each subject and applies the VD scheme only in the high-frequency domain. Simulation results demonstrate high accuracy and robustness compared to the conventional VD design.
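The sketch below illustrates both points made above: a Cartesian variable-density mask drawn from a polynomial-decay PDF (so repeated draws from the same PDF give different trajectories) combined with a fully sampled low-frequency band, mimicking the low/high-frequency segmentation. The PDF shape, center fraction, and decay power are assumptions for illustration.

```python
import numpy as np

def vd_mask(ny, nx, accel=8, center_frac=0.08, power=4, rng=None):
    """Cartesian variable-density mask: fully sample a low-frequency band of
    phase-encode lines and draw the remaining lines from a polynomial-decay PDF."""
    rng = np.random.default_rng() if rng is None else rng
    ky = np.abs(np.linspace(-1.0, 1.0, ny))
    pdf = (1.0 - ky) ** power                                  # density decays with |ky|
    center = np.abs(np.arange(ny) - ny // 2) < int(center_frac * ny) // 2
    budget = ny / accel - center.sum()                         # lines left after the full core
    pdf[~center] *= budget / pdf[~center].sum()                # match the acceleration factor
    pdf = np.clip(pdf, 0.0, 1.0)
    pdf[center] = 1.0                                          # low-frequency core always sampled
    sampled_lines = rng.random(ny) < pdf                       # each draw -> a different trajectory
    return np.repeat(sampled_lines[:, None], nx, axis=1)

# Two calls with the same PDF generally give different masks, which is exactly the
# one-to-many randomness whose effect on kinetic parameter estimation is evaluated above.
```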