Computed Tomography (CT) is a high-precision medical imaging technique that utilizes X-rays and computer reconstruction to provide detailed three-dimensional images of human anatomy for clinical diagnosis and treatment. In practice, non-ideal scanning conditions, such as metal implants in the body and limited-angle acquisition, frequently occur and give rise to severe metal and limited-angle artifacts. To address these challenges, in this paper we propose a novel deep dual-domain progressive diffusion network, named DPD-Net, which, to our knowledge, is the first to jointly suppress metal artifacts and limited-angle artifacts. DPD-Net leverages a dual-domain strategy, performing limited-angle artifact suppression in the image domain and metal-trace inpainting in the sinogram domain simultaneously. To fully address the dual-artifact problem, generative diffusion models are designed in both domains to learn the underlying data distributions. The proposed DPD-Net is trained and evaluated on a publicly available dataset, and extensive experimental results validate that it outperforms state-of-the-art competing methods.
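To make the dual-domain idea concrete, here is a minimal sketch (ours, not the authors' code) that alternates a sinogram-domain inpainting step over the metal trace with an image-domain restoration step; `image_denoiser` and `sino_inpainter` are placeholders standing in for the paper's two diffusion models.

```python
# Minimal dual-domain correction step in the spirit of DPD-Net (illustrative).
import numpy as np
from skimage.transform import radon, iradon

def dual_domain_step(image, metal_trace_mask, theta, image_denoiser, sino_inpainter):
    """One progressive refinement step over both domains."""
    # Sinogram domain: re-project, then inpaint the metal-corrupted trace.
    sino = radon(image, theta=theta)
    sino_filled = sino_inpainter(sino, metal_trace_mask)    # learned inpainting
    sino = np.where(metal_trace_mask, sino_filled, sino)    # keep trusted data
    # Image domain: reconstruct, then suppress residual limited-angle artifacts.
    recon = iradon(sino, theta=theta)
    return image_denoiser(recon)                            # learned restoration

# Example with identity stand-ins for the two learned models:
theta = np.linspace(0.0, 120.0, 180, endpoint=False)        # limited angular range
img = np.zeros((128, 128)); img[40:90, 40:90] = 1.0
mask = np.zeros((128, 180), dtype=bool)                     # metal trace (empty here)
out = dual_domain_step(img, mask, theta, lambda x: x, lambda s, m: s)
```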
Parallel imaging is widely used in the clinic to accelerate magnetic resonance imaging (MRI) data collection. However, conventional parallel imaging reconstruction techniques still struggle to achieve satisfactory performance at high acceleration rates, producing artifacts and noise that affect subsequent diagnosis. Recently, implicit neural representation (INR) has emerged as a deep learning paradigm that represents an object as a continuous function of spatial coordinates; this continuity enhances the model's capacity to capture redundant information within the object. However, INR typically needs thousands of training iterations to reconstruct an image. In this work, we propose a method that speeds up INR for parallel MRI reconstruction using hash mapping and a pre-trained encoder, enabling INR to achieve better results with fewer training iterations. Benefiting from INR's powerful representations, the proposed method outperforms existing methods in removing aliasing artifacts and noise. Experimental results on simulated and real undersampled data demonstrate the model's potential for further accelerating parallel MRI.
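The hash-mapping ingredient can be illustrated with a simplified single-level hash grid driving a coordinate MLP, fitted against undersampled k-space. Everything below is a sketch under our own assumptions: the paper's multi-resolution encoding, coil sensitivities, and pre-trained encoder are omitted, and `kspace_meas`/`mask` are placeholder data.

```python
import torch, torch.nn as nn

class HashGrid2D(nn.Module):
    """Single-level learnable hash grid (simplified from multi-resolution
    hash encoding); resolution and table size are illustrative choices."""
    def __init__(self, res=64, table_size=2**14, feat_dim=2):
        super().__init__()
        self.res, self.T = res, table_size
        self.table = nn.Parameter(torch.randn(table_size, feat_dim) * 1e-2)
        self.register_buffer("primes", torch.tensor([1, 2654435761]))

    def forward(self, xy):                          # xy in [0,1]^2, shape (N, 2)
        g = xy * (self.res - 1)
        g0, w = g.floor().long(), g - g.floor()
        feats = 0.0
        for dx in (0, 1):                           # bilinear blend of 4 corners
            for dy in (0, 1):
                corner = g0 + torch.tensor([dx, dy])
                idx = (corner * self.primes).sum(-1) % self.T   # spatial hash
                wgt = ((w[:, 0] if dx else 1 - w[:, 0]) *
                       (w[:, 1] if dy else 1 - w[:, 1])).unsqueeze(-1)
                feats = feats + wgt * self.table[idx]
        return feats

# Single-coil data-consistency fit: render the image from coordinates, FFT it,
# and penalize mismatch only where k-space was actually sampled.
H = W = 64
enc = HashGrid2D()
mlp = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))  # Re/Im out
ys, xs = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
kspace_meas = torch.zeros(H, W, dtype=torch.complex64)   # placeholder measurements
mask = torch.rand(H, W) < 0.3                            # placeholder sampling mask
opt = torch.optim.Adam(list(enc.parameters()) + list(mlp.parameters()), lr=1e-2)
for _ in range(200):                                     # far fewer than plain INR
    out = mlp(enc(coords)).reshape(H, W, 2)
    img = torch.complex(out[..., 0], out[..., 1])
    loss = (torch.fft.fft2(img) - kspace_meas)[mask].abs().pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```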
Due to the high cost of high-field MRI equipment, low-field MRI systems are still widely used in small and medium-sized hospitals. Compared to high-field MRI, images acquired with low-field MRI often suffer from lower resolution and lower signal-to-noise ratios, and analysis of clinical data reveals that noise levels can vary significantly across different low-field MRI protocols. In this study, we propose an effective super-resolution reconstruction model based on generative adversarial networks (GANs). The proposed model implicitly differentiates between sequence types, allowing it to adapt to different scan protocols during reconstruction. To further enhance image detail, a one-to-many supervision strategy that exploits similar patches within a single image is employed during training. Additionally, the number of basic blocks in the model is reduced through knowledge distillation to meet the speed requirements of clinical use. Experimental results on actual 0.35T low-field MR images suggest that the proposed method holds substantial potential for clinical application.
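A minimal sketch of the knowledge-distillation objective used to shrink such a model follows; this is our formulation, not necessarily the paper's exact loss, and all names are illustrative.

```python
import torch.nn.functional as F

def distillation_loss(student_sr, teacher_sr, hr, alpha=0.5):
    """Sketch: the reduced-depth student is trained to match both the
    ground-truth HR image and the full-size teacher's output, so the
    teacher's learned detail survives the block reduction."""
    return (alpha * F.l1_loss(student_sr, teacher_sr.detach())   # mimic teacher
            + (1 - alpha) * F.l1_loss(student_sr, hr))           # match ground truth
```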
Removing ring artifacts is a significant challenge in X-ray computed tomography (CT) systems, particularly those utilizing photon-counting detectors. To address this problem, this study proposes the learning-based Inter-slice Complementarity Enhanced Ring Artifact Removal (ICE-RAR) algorithm. Because the variability and complexity of detector responses make it difficult to acquire enough paired data for training neural networks in real-world scenarios, the research first introduces a data simulation strategy that incorporates the characteristics of specific systems in accordance with the principles of ring artifact formation. A dual-branch neural network is then designed, consisting of a global artifact removal branch and a central region enhancement branch, aimed at improving artifact removal, especially in the central region of interest where artifacts are more difficult to eliminate. Additionally, given the independence of different detector element responses, the study proposes leveraging inter-slice complementarity to improve image restoration. The effectiveness of the central region enhancement and inter-slice complementarity was confirmed through ablation experiments on simulated data. Both simulated and real-world results demonstrate that ICE-RAR effectively reduces ring artifacts while preserving image details. More importantly, by incorporating specific system characteristics into the data simulation process, models trained on simulated data can be directly applied to unseen real data, showing significant potential for addressing ring artifact removal in practical CT systems.
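The principle behind the data simulation can be sketched as follows: ring artifacts arise from per-detector-element response errors, which can be modeled as a fixed multiplicative gain applied to every view of a detector channel. The sketch below is a simplified, generic version of that idea, not the paper's system-specific strategy.

```python
import numpy as np
from skimage.transform import radon, iradon

def simulate_ring_artifacts(image, theta, gain_std=0.02, seed=0):
    """Corrupt a clean sinogram with fixed per-detector gain errors.
    Each slice would use a different seed (independent detector rows),
    which is the inter-slice complementarity the network can exploit."""
    rng = np.random.default_rng(seed)
    sino = radon(image, theta=theta)                  # rows = detector channels
    gains = 1.0 + gain_std * rng.standard_normal(sino.shape[0])
    corrupted = sino * gains[:, None]                 # same error in every view
    return iradon(corrupted, theta=theta), iradon(sino, theta=theta)
```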
As dual-energy CT (DECT) becomes widely accepted in diagnostic radiology, there is growing interest in using dual-energy imaging in other scenarios. In this context, a new mobile dual-source dual-energy cone-beam CT (CBCT) is being developed for applications such as radiotherapy and interventional radiology. The device performs dual-energy measurements with two X-ray sources mounted side by side along the z-axis, which causes a mismatch between the fields of view of the high-energy and low-energy sources in that direction. To solve this problem, this study proposes a deep learning method that generates the high-energy and low-energy CT images in the missing fields of view: high-energy (or low-energy) images are generated from low-energy (or high-energy) images to complete the missing information. Furthermore, to enhance the quality of the generated images, a plug-and-play frequency-domain Mamba module is designed to extract frequency-domain features in the latent space, and redundant feature maps are then filtered out by the designed frequency channel filtering module so that the model focuses on learning and extracting effective features. Experimental results on simulated data show that the proposed method effectively generates the missing low- and high-energy CT images, with SSIM, PSNR, and MAE reaching 99.3%, 48.1 dB, and 6.3 HU, respectively. Moreover, the generated images maintain good continuity along the z-axis, indicating that our method effectively ensures consistency between the fields of view of the two sources. In addition, when dealing with data from unseen patients, our model can be further fine-tuned online using the paired dual-energy data in the overlapping fields of view, yielding a patient-specific model that is robust across different samples.
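A rough sketch of the frequency channel filtering idea is given below (the Mamba module itself is omitted). The gating mechanism shown here, including the module and parameter names, is our own guess at one plausible realization, not the paper's design.

```python
import torch, torch.nn as nn

class FrequencyChannelFilter(nn.Module):
    """Sketch: move features to the frequency domain with an FFT, derive a
    per-channel weight from each channel's global spectral energy, and use
    it to suppress redundant feature maps before transforming back."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, x):                        # x: (B, C, H, W)
        spec = torch.fft.rfft2(x, norm="ortho")  # complex spectrum per channel
        energy = spec.abs().mean(dim=(-2, -1))   # (B, C) global spectral energy
        w = self.gate(energy)[..., None, None]   # channel gates in [0, 1]
        return torch.fft.irfft2(spec * w, s=x.shape[-2:], norm="ortho")
```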
Compressed sensing (CS) computed tomography (CT) has been proven important for several clinical applications, such as sparse-view CT, digital tomosynthesis, and interior tomography. Traditional compressed sensing focuses on the design of handcrafted prior regularizers, which are usually image-dependent and time-consuming. Inspired by recently proposed deep learning-based CT reconstruction models, we extend the state-of-the-art LEARN model to a dual-domain version, dubbed LEARN++. Unlike existing iteration-unrolling methods, which involve projection data only in the data-consistency layer, the proposed LEARN++ model integrates two parallel and interactive subnetworks that perform image restoration and sinogram inpainting in the image and projection domains simultaneously, fully exploring the latent relations between projection data and reconstructed images. Experimental results demonstrate that the proposed LEARN++ model achieves competitive qualitative and quantitative results compared to several state-of-the-art methods in terms of both artifact reduction and detail preservation.
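The dual-domain unrolling can be sketched as one iteration with two cooperating subnetworks plus a data-consistency step; this is our illustrative reading of the structure, not the authors' code, and `A`/`At` are assumed forward/back projection operators.

```python
def learn_pp_iteration(x, y, A, At, image_net, sino_net, step):
    """One unrolled iteration in the spirit of LEARN++ (illustrative):
    a sinogram-domain subnetwork refines the projection data, an
    image-domain subnetwork refines the image, and a data-consistency
    gradient step couples the two domains."""
    y = y + sino_net(y)                    # sinogram restoration/inpainting
    grad = At(A(x) - y)                    # gradient of the data-fidelity term
    x = x - step * grad + image_net(x)     # image restoration + consistency
    return x, y
```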
As a quantitative CT imaging technique, dual-energy CT (DECT) has attracted considerable research interest. However, material decomposition from high-energy (HE) and low-energy (LE) data may suffer from magnified noise, resulting in severe degradation of image quality and decomposition accuracy. To overcome these challenges, this study presents a novel DECT material decomposition method based on a deep neural network (DNN). In particular, this new DNN integrates CT image reconstruction and the nonlinear material decomposition procedures into a single network. This end-to-end network consists of three compartments: a sinogram-domain decomposition compartment, a user-defined analytical domain transformation operation (OP) compartment, and an image-domain decomposition compartment. By design, the first and third compartments handle the complicated nonlinear material decomposition while also denoising the DECT images. Natural images are used to synthesize the dual-energy data under assumed volume fractions and density distributions. This significantly reduces the burden of collecting clinical DECT data, making the new DECT reconstruction framework easier to implement. Both numerical and experimental validation results demonstrate that the proposed DNN-based DECT reconstruction algorithm can generate high-quality basis images with improved accuracy.
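One plausible wiring of the three compartments is sketched below; all names are ours, and the key point is only that the analytical domain-transformation OP (here, an FBP operator) sits between two trainable networks and has no parameters of its own, so gradients flow end-to-end through it.

```python
import torch

def dect_decompose(sino_he, sino_le, sino_net, fbp, image_net):
    """Illustrative three-compartment pipeline: sinogram-domain
    decomposition -> fixed analytical FBP operator -> image-domain
    decomposition/denoising of the basis images."""
    basis_sinos = sino_net(torch.cat([sino_he, sino_le], dim=1))  # (B, n_basis, ...)
    basis_imgs = torch.stack([fbp(s) for s in basis_sinos.unbind(dim=1)], dim=1)
    return image_net(basis_imgs)             # refine and denoise basis images
```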
In a standard computed tomography (CT) image, pixels with the same Hounsfield unit (HU) value can correspond to different materials, making it challenging to differentiate and quantify materials. Dual-energy CT (DECT) can differentiate multiple materials, but DECT scanners are not as widely available as single-energy CT (SECT) scanners. Here we develop a deep learning approach to perform DECT imaging using standard SECT data. The end point of the deep learning approach is a model capable of providing the high-energy CT image for a given input low-energy CT image. We retrospectively studied 22 patients who received contrast-enhanced abdominal DECT scans. The mean differences between the predicted and original high-energy CT images are 3.47 HU, 2.95 HU, 2.38 HU, and 2.40 HU for the spine, aorta, liver, and stomach, respectively. The differences between virtual non-contrast (VNC) images obtained from the original DECT and from the deep learning DECT are 4.10 HU, 3.75 HU, 2.33 HU, and 2.92 HU for the same organs. The aortic iodine quantification difference between iodine maps obtained from the original DECT and the deep learning DECT images is 0.9%. This study demonstrates that highly accurate DECT imaging from single low-energy data is achievable using a deep learning approach. The proposed method can significantly simplify DECT system design while reducing scanning dose and imaging cost.
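For context on how the VNC and iodine maps are derived from a low/high-energy pair, here is the standard two-material decomposition arithmetic in a worked form. The per-material response values below are placeholders for illustration, not calibrated constants from the paper.

```python
import numpy as np

# Two-material (water/iodine) decomposition for one voxel: iodine enhances
# more at low kVp than high kVp, so the 2x2 system is well conditioned.
M = np.array([[1.0, 30.0],    # low-kVp response of [water part, iodine mg/ml]
              [1.0, 15.0]])   # high-kVp response (placeholder coefficients)
hu = np.array([120.0, 75.0])  # measured (low, high) HU in one voxel
water, iodine = np.linalg.solve(M, hu)   # -> water = 30.0, iodine = 3.0 mg/ml
vnc_hu = water                           # virtual non-contrast value, iodine removed
```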
X-ray computed tomography (CT) has been extensively used in medical diagnosis, and reducing radiation dose while maintaining high image reconstruction quality has become a major concern in the CT field. In this paper, we propose a statistical iterative reconstruction framework based on structure tensor total variation regularization for low-dose CT imaging. An accelerated proximal forward-backward splitting (APFBS) algorithm is developed to optimize the associated cost function. Experiments on two physical phantoms demonstrate that our proposed algorithm outperforms existing algorithms such as statistical iterative reconstruction with a total variation regularizer and filtered back-projection (FBP).
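The generic shape of an accelerated proximal forward-backward splitting scheme is shown below (a textbook FISTA-style skeleton, not the paper's implementation): a gradient step on the data-fidelity term followed by a proximal step on the regularizer, with Nesterov extrapolation. The structure-tensor TV proximal operator is supplied as a black box `prox_g`.

```python
import numpy as np

def apfbs(grad_f, prox_g, x0, step, n_iter=100):
    """Accelerated proximal forward-backward splitting skeleton.
    grad_f: gradient of the (smooth) data-fidelity term.
    prox_g: proximal operator of the regularizer, prox_g(v, step)."""
    x, z, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = prox_g(z - step * grad_f(z), step)       # forward-backward step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)    # Nesterov extrapolation
        x, t = x_new, t_new
    return x
```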
When the scan field of view (SFOV) of a CT system is not large enough to enclose the entire cross-section of a patient, or the patient must be intentionally positioned partially outside the SFOV for certain clinical CT scans, truncation artifacts are often observed in the reconstructed CT images. The conventional wisdom for reducing truncation artifacts is to complete the truncated projection data via data extrapolation under different a priori assumptions. This paper presents a novel truncation artifact reduction method that works directly in the CT image domain. Specifically, a discriminative dictionary that includes a sub-dictionary of truncation artifacts and a sub-dictionary of non-artifact image information is used to separate a truncation-artifact-contaminated image into two sub-images: one with reduced truncation artifacts, and the other containing only the truncation artifacts. Both experimental phantom and retrospective human subject studies were performed to characterize the performance of the proposed truncation artifact reduction method.
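The discriminative separation idea can be sketched as sparse coding over the concatenation of the two sub-dictionaries, then reconstructing each component from its own atoms. The sketch below assumes both sub-dictionaries are already learned and uses plain ISTA for the sparse coding, which is a simplification of whatever solver the paper uses.

```python
import numpy as np

def separate(patch, D_art, D_img, lam=0.1, n_iter=200):
    """Sparse-code a patch over [artifact dictionary | image dictionary],
    then split the reconstruction into an artifact part and an image part."""
    D = np.hstack([D_art, D_img])                    # (d, k1 + k2)
    L = np.linalg.norm(D, 2) ** 2                    # Lipschitz constant of D^T D
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):                          # ISTA for the lasso problem
        g = a - (D.T @ (D @ a - patch)) / L          # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    k1 = D_art.shape[1]
    return D_art @ a[:k1], D_img @ a[k1:]            # artifact-only, artifact-reduced
```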
Projection and back-projection are the most computationally expensive parts of Computed Tomography (CT) reconstruction, and parallelization strategies using GPU computing techniques have been introduced to accelerate them. In this paper, we present a new parallelization scheme for both projection and back-projection based on NVIDIA's CUDA technology. Instead of building a complex model, we focus on optimizing the existing algorithm and making it suitable for CUDA implementation to gain fast computation speed. Besides using texture fetching to speed up interpolation, we fix the number of samples in the projection computation to ensure synchronization of blocks and threads, preventing the latency caused by uneven computational loads. Experimental results demonstrate the computational efficiency and imaging quality of the proposed method.
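The fixed-sample-count idea is illustrated below with a NumPy stand-in for the CUDA kernel (our sketch, not the paper's code): every ray takes exactly the same number of interpolated samples, so on the GPU all threads would do identical work and stay in lockstep, with the bilinear interpolation handled by hardware texture fetches.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def project_ray(image, src, dst, n_samples=256):
    """Approximate the line integral along one ray with a fixed number of
    equally spaced, bilinearly interpolated samples."""
    t = np.linspace(0.0, 1.0, n_samples)
    pts = src[:, None] + (dst - src)[:, None] * t    # (2, n_samples) sample points
    vals = map_coordinates(image, pts, order=1, mode="constant")  # bilinear interp
    seg = np.linalg.norm(dst - src) / (n_samples - 1)
    return vals.sum() * seg

# A horizontal ray through a unit image integrates to ~ its path length:
img = np.ones((128, 128))
val = project_ray(img, np.array([64.0, 0.0]), np.array([64.0, 127.0]))  # ~127.5
```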
Cardiac computed tomography (CCT) has been widely used in the diagnosis of coronary artery disease thanks to continuously improving temporal and spatial resolution. When helical CT with a low-pitch scanning mode is used, the effective radiation dose can be significant compared to other radiological exams. Many methods have been developed to reduce radiation dose in coronary CT exams, including high-pitch scans on dual-source CT scanners and step-and-shoot scanning modes for both single-source and dual-source scanners. Software methods have also been proposed to reduce noise in the reconstructed CT images, offering the opportunity to reduce radiation dose while maintaining the desired diagnostic performance for a given imaging task. In this paper, we argue that low-dose scans should be considered in order to avoid the harm of accumulating unnecessary X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. Accordingly, we propose a 3D dictionary-representation-based image processing method to reduce CT image noise, utilizing information on both spatial and temporal structure continuity in the sparse representation to improve performance. Clinical cases were used to validate the proposed method.
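The "3D" in the dictionary representation refers to grouping the temporal (cardiac-phase) dimension into the patches alongside the two spatial dimensions; the sketch below shows that extraction step under our own assumptions about patch size and stride (the subsequent sparse coding over a learned 3D dictionary and patch averaging are omitted).

```python
import numpy as np

def extract_3d_patches(volume, size=8, stride=4):
    """Extract vectorized (x, y, phase) patches from a 3D volume so a sparse
    model can exploit structure continuity across cardiac phases.
    Each returned column is later sparse-coded over a learned 3D dictionary
    and the denoised patches are averaged back into the volume."""
    X, Y, T = volume.shape
    patches = [volume[i:i + size, j:j + size, k:k + size].ravel()
               for i in range(0, X - size + 1, stride)
               for j in range(0, Y - size + 1, stride)
               for k in range(0, T - size + 1, stride)]
    return np.stack(patches, axis=1)       # columns are vectorized 3D patches
```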