In this work, we realize image-domain backprojection-filtration (BPF) CT image reconstruction using a convolutional neural network (CNN). Within this new CT image reconstruction framework, the acquired sinogram data is first backprojected to generate a highly blurred laminogram. Afterwards, the laminogram is fed into the CNN to retrieve the desired sharp CT image. Both numerical and experimental results demonstrate that this new CNN-based image reconstruction method can reconstruct CT images from the laminogram with high spatial resolution and pixel-value accuracy comparable to those of the conventional filtered back-projection (FBP) method. The experimental results also show that the performance of this reconstruction network does not depend on the radiation dose level used. Given these advantages, the proposed CNN-based image-domain BPF reconstruction strategy offers promising prospects for generating high-quality CT images in future clinical applications.
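The first stage of the pipeline described above, unfiltered backprojection of the sinogram into a blurred laminogram, can be sketched as follows for parallel-beam geometry. This is an illustrative NumPy implementation, not the authors' code; the CNN deblurring stage is omitted, and the geometry (one detector bin per pixel, nearest-neighbor interpolation) is an assumption for simplicity.

```python
import numpy as np

def backproject(sinogram, angles_deg, size):
    """Unfiltered backprojection (sketch): smear each projection back
    across the image grid to form the blurred laminogram that, in the
    paper's framework, a CNN would subsequently sharpen.

    sinogram : (n_views, n_det) array, parallel-beam projections
    angles_deg : projection angles in degrees
    size : side length of the square output image
    """
    # pixel coordinates centered on the image midpoint
    ys, xs = np.mgrid[:size, :size] - (size - 1) / 2.0
    laminogram = np.zeros((size, size))
    n_det = sinogram.shape[1]
    for proj, theta in zip(sinogram, np.deg2rad(angles_deg)):
        # detector coordinate hit by each pixel at this view angle
        t = xs * np.cos(theta) + ys * np.sin(theta) + (n_det - 1) / 2.0
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        laminogram += proj[idx]  # accumulate (no ramp filtering)
    return laminogram / len(angles_deg)
```

Because no ramp filter is applied, a point object reconstructs as a 1/r-blurred spot; recovering the sharp image from this blur is exactly the task delegated to the CNN in the abstract.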
In dental computed tomography (CT) scanning, high-quality images are crucial for oral disease diagnosis and treatment. However, many artifacts, such as metal artifacts, downsampling artifacts and motion artifacts, can degrade the image quality in practice. The main purpose of this article is to reduce motion artifacts, which are caused by patient movement during data acquisition in the dental CT scanning process. To this end, the goal of this study was to develop a dental CT motion artifact-correction algorithm based on a deep learning approach. We used dental CT data with motion artifacts reconstructed by conventional filtered back-projection (FBP) as inputs to a deep neural network and used the corresponding high-quality CT data as labels during training. We proposed training a generative adversarial network (GAN) with the Wasserstein distance and a mean squared error (MSE) loss (m-WGAN) to remove motion artifacts and to obtain high-quality dental CT images. To improve the generator structure, our generator adopts a cascaded CNN with residual blocks. To the best of our knowledge, this work describes the first deep learning architecture applied to a commercial cone-beam dental CT scanner. We compared the performance of a general GAN and the m-WGAN. The experimental results confirmed that the proposed algorithm effectively removes motion artifacts from dental CT scans. The proposed m-WGAN method achieved a higher peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) and a lower root-mean-squared error (RMSE) than the general GAN method.
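The generator objective described above, a Wasserstein adversarial term combined with an MSE fidelity term, might be sketched as below. This is a minimal illustrative form, not the paper's implementation; the weighting factor `lam` and the exact way the two terms are combined are assumptions.

```python
import numpy as np

def m_wgan_generator_loss(critic_scores, gen_imgs, target_imgs, lam=1.0):
    """Sketch of a combined generator objective: the Wasserstein
    adversarial term (generator tries to maximize the critic's score,
    hence the negation) plus an MSE term pulling generated images
    toward the artifact-free labels. `lam` is a hypothetical weight."""
    adversarial = -np.mean(critic_scores)           # Wasserstein term
    fidelity = np.mean((gen_imgs - target_imgs) ** 2)  # MSE term
    return adversarial + lam * fidelity
```

The MSE term anchors the output to the paired high-quality label, while the adversarial term discourages the over-smoothing that a pure MSE loss tends to produce.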
In this work, we present a novel convolutional neural network (CNN)-enabled Moiré artifact reduction framework for the three contrast mechanisms, i.e., the absorption image, the differential phase contrast (DPC) image, and the dark-field (DF) image, obtained from an x-ray Talbot-Lau phase contrast imaging system. By mathematically modeling the various potential non-ideal factors that may cause Moiré artifacts as random fluctuations of the phase stepping position, rigorous theoretical analyses show that the Moiré artifacts on absorption images have a frequency distribution similar to that of the detected phase stepping Moiré fringes, whereas their periods on the DPC and DF images may be doubled. Based on these theoretical findings, training datasets for the three contrast mechanisms are synthesized from natural images. Afterwards, the three datasets are used to train the same modified auto-encoder type CNN independently. Both numerical simulations and experimental studies are performed to validate the performance of this newly developed Moiré artifact reduction method. Results show that the CNN reduces residual Moiré artifacts efficiently. With the improved signal accuracy, the radiation dose efficiency of the Talbot-Lau interferometry imaging system can be greatly enhanced.
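The three contrast images named above come from standard phase-stepping retrieval, which the artifact analysis builds on: with N grating steps, each pixel records intensities I_k = a + b·cos(φ + 2πk/N), and the absorption (a), visibility/dark-field term (b/a), and differential phase (φ) are recovered from the zeroth and first Fourier coefficients along the stepping axis. A minimal NumPy sketch of this textbook retrieval (not the authors' code, and without the stepping-position perturbations the paper analyzes):

```python
import numpy as np

def phase_stepping_retrieval(stack):
    """Standard phase-stepping retrieval: from N stepped frames
    I_k = a + b*cos(phi + 2*pi*k/N), recover per pixel the absorption
    term a, the visibility term b/a (dark-field channel), and the
    differential phase phi.

    stack : (N, H, W) array of stepped intensity frames
    """
    N = stack.shape[0]
    k = np.arange(N)
    # first Fourier coefficient along the stepping axis: c = (b/2) e^{i phi}
    c = np.tensordot(np.exp(-2j * np.pi * k / N), stack, axes=(0, 0)) / N
    a = stack.mean(axis=0)   # zeroth coefficient: absorption term
    b = 2.0 * np.abs(c)      # fringe amplitude
    phi = np.angle(c)        # differential phase
    return a, b / a, phi
```

Because φ enters through the complex argument of c while a enters linearly, a perturbation of the stepping positions propagates differently into the two channels, which is consistent with the abstract's finding that the artifact period can differ between the absorption and the DPC/DF images.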