Interior photon-counting computed tomography (PCCT) scans are essential for obtaining high-resolution images at minimal radiation dose by focusing only on a region of interest. However, designing a deep learning model for denoising a PCCT interior scan is rather challenging. Recently, several studies explored deep reinforcement learning (RL)-based models with far fewer parameters than those typical of supervised and self-supervised models. Such an RL model can be effectively trained on a small dataset and yet remain generalizable and interpretable. In this work, we design an RL model to perform multichannel PCCT scan denoising. Because a reliable reward function is crucial for optimizing the RL model, we focus on designing a small denoising autoencoder-based reward network that learns the latent representation of full-dose simulated PCCT data and uses the reconstruction error to quantify the reward. We also use domain-specific batch normalization for unsupervised domain adaptation with a limited amount of multichannel PCCT data. Our results show that the proposed model achieves excellent denoising results, with significant potential for clinical and preclinical PCCT denoising.
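The autoencoder-based reward described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the encoder/decoder weights here are random stand-ins for a network that would be trained on full-dose simulated PCCT patches, and the class name and dimensions are hypothetical. The key idea is that a candidate denoised patch that the autoencoder reconstructs with low error is close to the full-dose data manifold and therefore earns a high reward.

```python
import numpy as np

rng = np.random.default_rng(0)

class AutoencoderReward:
    """Toy denoising-autoencoder reward: negative reconstruction error."""

    def __init__(self, patch_dim=64, latent_dim=8):
        # Stand-in weights; in practice these would be learned on
        # full-dose simulated PCCT patches.
        self.W_enc = rng.standard_normal((latent_dim, patch_dim)) / np.sqrt(patch_dim)
        self.W_dec = rng.standard_normal((patch_dim, latent_dim)) / np.sqrt(latent_dim)

    def reward(self, patch):
        z = np.tanh(self.W_enc @ patch)               # latent representation
        recon = self.W_dec @ z                        # reconstruction
        return -float(np.mean((patch - recon) ** 2))  # low error -> high reward
```

An RL denoising agent would then be trained to maximize this reward over its denoising actions, so the reward network substitutes for paired full-dose ground truth.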
Cardiac CT is the first-line imaging modality for diagnosing cardiovascular diseases. A major challenge of cardiac CT remains motion artifacts caused by fast and/or irregular cardiac dynamics. Existing motion artifact suppression algorithms can be improved by accounting for distribution shifts due to anatomical and pathological variations across patients, protocol and technical differences between scanners, and other factors. In this paper, we construct a diversified dataset consisting of over 1,000 cardiac CT images with diverse features. We also provide a pipeline for source-agnostic vessel segmentation and motion artifact scoring. Our results demonstrate the merits of the approach and suggest a guideline for ensuring source-agnostic representativeness of anatomical and pathological imaging biomarkers in cardiac CT applications and beyond.
Creating a viable reconstruction method for Compton scatter tomography remains challenging. Accounting for scatter attenuation when the underlying attenuation map is not known is particularly difficult, and current mathematical approaches to this problem vary widely. This work explores a novel approach to joint scatter and attenuation image reconstruction, which leverages the underlying structural similarity between the two images and incorporates a deep learning model in an alternating iterative reconstruction scheme. A single-view CT imaging procedure for recording Compton scatter is first described. A joint reconstruction model, which iterates between algebraically reconstructing scatter images and estimating the attenuation via deep learning, is then proposed. This model is tested on both a generated dataset of 2D phantom images designed to mimic human tissues and a realistically simulated dataset based on real CT images. Testing shows that the model converges and achieves decent reconstruction quality, demonstrating the potential utility of this configuration and deep learning approach. The model achieved a structural similarity index measure of at least 0.84 for scatter and 0.89 for attenuation reconstructions on the realistically simulated dataset. The iterative, deep learning approach outlined in this work shows potential for efficient future medical imaging procedures that reconstruct images from limited scatter information.
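The alternating scheme described above can be sketched in a few lines. This is a hypothetical illustration under simplifying assumptions: the algebraic step is a single Landweber-style update toward the scatter measurements, and `estimate_attenuation` is a placeholder for the paper's deep learning model, implemented here as a simple smoothing that crudely mimics exploiting the structural similarity between the scatter and attenuation images.

```python
import numpy as np

def art_update(x, A, y, relax=0.1):
    """One Landweber-style algebraic step toward solving A x = y."""
    return x + relax * A.T @ (y - A @ x)

def estimate_attenuation(scatter):
    """Placeholder for the CNN: derive attenuation as a smoothed
    copy of the scatter image (structural-similarity stand-in)."""
    k = np.ones(3) / 3.0
    return np.convolve(scatter, k, mode="same")

def joint_reconstruct(A, y, n_iter=50):
    """Alternate algebraic scatter updates with attenuation estimates."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = art_update(x, A, y)
        mu = estimate_attenuation(x)  # would feed the next forward model
    return x, mu
```

In the actual method, the estimated attenuation would update the scatter forward model at each iteration; this sketch only shows the alternation structure.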
Photoacoustic tomography seeks to reconstruct an acoustic initial pressure distribution from measurements of the ultrasound waveforms. Conventional methods assume a priori knowledge of the sound speed distribution, which is typically unknown in practice. One way to circumvent this issue is to simultaneously reconstruct both the initial pressure and the sound speed. In this article, we develop a novel data-driven method that integrates an advanced deep neural network into a model-based iterative scheme. In our numerical simulations, the reconstructed initial pressure image is significantly improved.
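The model-based iteration with an embedded network can be sketched as follows. This is a minimal, hypothetical illustration: the forward operator is a generic matrix, the data-fidelity term is least squares, and `network_refine` stands in for the trained deep neural network (here replaced by a nonnegativity projection, since initial pressure is physically nonnegative).

```python
import numpy as np

def network_refine(p):
    """Placeholder for the learned refinement step; here we only
    project onto nonnegative pressures."""
    return np.clip(p, 0.0, None)

def reconstruct_pressure(A, d, step=0.1, n_iter=100):
    """Gradient step on ||A p - d||^2, then a learned refinement."""
    p = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ p - d)       # data-fidelity gradient
        p = network_refine(p - step * grad)
    return p
```

A joint pressure/sound-speed method would additionally update the forward operator from the current sound-speed estimate at each iteration; that coupling is omitted here for brevity.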