Positron emission tomography (PET) is an advanced nuclear medicine imaging technique widely used in clinical fields such as neurology and oncology. Deep learning has been widely adopted for PET image reconstruction because of its strong feature extraction capability. The challenge, however, lies in ensuring that the employed network is interpretable and well founded, and achieving high-quality results with a small training set also remains difficult. In this paper, we propose a novel alternating learning dual-domain reconstruction algorithm. The method combines the likelihood function derived from the PET imaging model with a learnable dual-domain regularization term into a composite objective. This objective is minimized through alternating iterations to obtain both a reconstructed activity image and a denoised sinogram. Residual structures are integrated into the iterative process to accelerate convergence, and convergence of the result is ensured by explicit judgment conditions. Experimental results demonstrate that our method surpasses OSEM and DeepPET in terms of SSIM and PSNR.
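The abstract describes the scheme only at a high level; the following is a minimal NumPy sketch of one plausible reading, assuming a composite objective of a Poisson likelihood plus image- and sinogram-domain regularizers. The names `mlem_step`, `denoise_image`, and `denoise_sinogram` are hypothetical placeholders (the paper's regularizers are learnable networks with residual structures), and the relative-change test stands in for the judgment condition.

```python
import numpy as np

def mlem_step(x, y, A, eps=1e-8):
    """One ML-EM update for the Poisson model y ~ Poisson(A @ x)."""
    sens = A.T @ np.ones_like(y)          # sensitivity image
    ratio = y / (A @ x + eps)             # measured / estimated sinogram
    return x * (A.T @ ratio) / (sens + eps)

def alternating_reconstruction(y, A, denoise_image, denoise_sinogram,
                               n_iter=50, tol=1e-4):
    """Alternate a likelihood update with dual-domain regularization steps."""
    x = np.ones(A.shape[1])               # initial activity image
    s = y.copy()                          # initial sinogram estimate
    for _ in range(n_iter):
        x_prev = x
        x = mlem_step(x, y, A)            # data-fidelity (likelihood) step
        x = denoise_image(x)              # image-domain regularization
        s = denoise_sinogram(A @ x)       # sinogram-domain regularization
        # Stopping ("judgment") condition: quit once the update stalls.
        if np.linalg.norm(x - x_prev) / (np.linalg.norm(x_prev) + 1e-12) < tol:
            break
    return x, s
```

The loop returns both the activity image and the denoised sinogram, matching the two outputs named in the abstract.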
Dynamic positron emission tomography (PET) imaging provides information about metabolic changes over time and is widely used in clinical diagnosis and cancer treatment. However, existing deep learning methods for PET image reconstruction focus mainly on a static mapping between the sinogram and the radioactivity concentration distribution, ignoring the inherent dynamic activation process of the tracer. In this paper, we establish a physiological-model-based deep learning framework for dynamic PET image reconstruction using a deep physiology prior. First, the objective function of our physiological model combines the static mapping with the dynamic tracer activation process. Then, a data-driven Adaptive Kalman Inspired Network (AKIN) is adopted to solve the proposed objective function. Specifically, AKIN consists of three components: a Prediction Net that directly predicts the prior estimate; a Projection Net that predicts the current estimate of the observations from the prior estimate; and a Kalman Gain Net (KNet) that adaptively learns the gain coefficient. Experiments on simulated data demonstrate that the proposed method achieves substantial noise reduction in both the temporal and spatial domains, outperforming methods such as maximum likelihood expectation maximization, the kernel expectation maximization method, and DeepPET.
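The three AKIN components map naturally onto one Kalman-style recursion. Below is a minimal PyTorch sketch under stated assumptions: the sub-networks are reduced to small MLPs, all names and layer sizes are illustrative, and the classical gain matrix is replaced by a learned mapping from the innovation (measurement residual) to a state-space correction; the paper's actual architecture is not specified in the abstract.

```python
import torch
import torch.nn as nn

def mlp(d_in, d_out, hidden=128):
    return nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(),
                         nn.Linear(hidden, d_out))

class AKINStep(nn.Module):
    """One Kalman-inspired update with learned prediction, projection, and gain."""
    def __init__(self, state_dim, obs_dim):
        super().__init__()
        self.predict = mlp(state_dim, state_dim)  # Prediction Net: prior estimate
        self.project = mlp(state_dim, obs_dim)    # Projection Net: expected observation
        self.gain = mlp(obs_dim, state_dim)       # Gain Net: correction from innovation

    def forward(self, x_prev, y_obs):
        x_prior = self.predict(x_prev)        # prior estimate of the state
        y_pred = self.project(x_prior)        # current estimate of the observation
        innovation = y_obs - y_pred           # measurement residual
        # The classical Kalman gain matrix is folded into a learned map here.
        return x_prior + self.gain(innovation)

# Unrolled over dynamic frames: each frame refines the running state estimate.
frames = [torch.randn(1, 512) for _ in range(24)]  # stand-in sinogram frames
step = AKINStep(state_dim=256, obs_dim=512)
x = torch.zeros(1, 256)
for y_t in frames:
    x = step(x, y_t)
```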
PET image reconstruction directly from list-mode data eliminates the storage of empty sinogram bins and preserves the full precision and accuracy of the large volume of data produced by PET scanners. However, traditional list-mode reconstruction methods, such as the list-mode ML-EM algorithm, suffer from high noise levels because of the ill-conditioned nature of the PET reconstruction problem. In this paper, we propose a novel deep learning based method for list-mode reconstruction. We first adopt a domain transfer function to convert the sensor-domain data to the image domain, and then use a Residual Attention Dense U-net (RADU-net) to learn the reconstruction. The proposed RADU-net is based on the RDU-net structure, into which an Attention Gate module is integrated. Instead of concatenating the encoder features to the decoder side directly, the attention gate uses the high-level features to guide the low-level features, applying the attention mechanism before concatenation. Realistic PET acquisitions of 25 digital PET brain phantoms were simulated to generate noisy list-mode data for evaluation. Quantitative results show that the proposed listmodeCNN outperforms the U-net, list-mode ML-EM, and TV regularized ML-EM in terms of SSIM and PSNR.
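The gating described matches the additive attention gate used in Attention U-Net; the sketch below assumes that design. Channel sizes are illustrative, and the gating (decoder) and skip (encoder) features are assumed to be at the same spatial resolution (in practice one is usually resampled first).

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate on a U-net skip connection."""
    def __init__(self, gate_ch, skip_ch, inter_ch):
        super().__init__()
        self.w_g = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)  # high-level gating signal
        self.w_x = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)  # low-level skip features
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)        # attention coefficients

    def forward(self, g, x):
        # g: decoder (high-level) features; x: encoder (low-level) skip features.
        a = torch.relu(self.w_g(g) + self.w_x(x))   # additive attention
        alpha = torch.sigmoid(self.psi(a))          # per-pixel weights in [0, 1]
        return x * alpha                            # gated skip features

# The gated features replace the plain skip connection: they are concatenated
# with the decoder features before the next decoder block.
gate = AttentionGate(gate_ch=64, skip_ch=32, inter_ch=16)
g = torch.randn(1, 64, 32, 32)   # decoder features
x = torch.randn(1, 32, 32, 32)   # encoder skip features
out = torch.cat([g, gate(g, x)], dim=1)
```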