Positron emission tomography (PET) is a widely used molecular imaging technology. However, conventional PET systems without depth-of-interaction (DOI) information cannot precisely locate gamma rays, which leads to parallax error and, in turn, non-uniform resolution in reconstructed images. Existing methods enable PET systems to acquire DOI information by adding hardware, generally at the cost of a higher price and degraded performance in other respects. To overcome these shortcomings, we propose a novel distance-driven cascade framework containing a bi-directional long short-term memory (Bi-LSTM) module and an encoder-decoder module. Specifically, the distance-driven preprocessing splits the sinogram into one-dimensional vectors according to radial distance and inputs them sequentially. In this approach, the bins in the same sinogram row share related features and are therefore processed together. Furthermore, the sinogram rows use the mutual implicit information extracted by the Bi-LSTM to achieve a better transformation before being processed by the encoder-decoder module. To evaluate the proposed method, we trained and tested the network on a dataset simulated with the open-source Geant4 toolkit GATE. Compared to DeepPET, a typical deep-learning-based PET reconstruction method, our method achieves a clear improvement in structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) on the test dataset, demonstrating its superior perceptual performance and efficiency.
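The abstract does not give implementation details of the distance-driven preprocessing; as a minimal illustrative sketch (all function names are hypothetical, and the network itself is omitted), the splitting of a 2D sinogram into a sequence of one-dimensional radial-distance rows for a sequence model such as a Bi-LSTM could look like:

```python
import numpy as np

def split_sinogram_by_distance(sinogram):
    """Split a 2D sinogram (radial distance x projection angle) into an
    ordered sequence of 1D row vectors, one per radial-distance bin.
    Preserving row order lets a bidirectional sequence model exploit
    implicit information shared between neighboring rows."""
    n_dist, _ = sinogram.shape
    return [sinogram[r, :].copy() for r in range(n_dist)]

# toy sinogram: 8 radial-distance bins x 16 projection angles
sino = np.arange(8 * 16, dtype=np.float32).reshape(8, 16)
seq = split_sinogram_by_distance(sino)  # sequence of 8 vectors of length 16
```

Each element of `seq` would then be one time step of the Bi-LSTM input, so bins in the same row are processed together while the recurrence passes context between rows in both directions.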
Dual-tracer positron emission tomography (PET) has emerged as a promising nuclear medical imaging technology. It reflects more information about biological functions than a single tracer, which is of great value for disease research, clinical diagnosis, and treatment. Image reconstruction and segmentation are often treated as two sequential problems; this paper integrates them into a coherent framework. In recent years, deep learning has attracted growing interest in medical image processing due to its powerful feature-extraction ability. Neural-network-based reconstruction of simultaneous dual-tracer activity maps has the advantage of not relying on the prior information and interval injection that conventional methods require. In this paper, we propose a joint deep neural network that reconstructs the two individual tracers and performs segmentation by clustering time-activity curves (TACs). For reconstruction, a classic generative adversarial network (GAN), pix2pix, is introduced as the basic framework, with a 3D U-Net as the generator. The TACs in the reconstruction results are then extracted and clustered to achieve segmentation, and the segmentation loss in turn guides the training of the reconstruction network. The segmentation network outperforms the previous joint method, and the performance of the reconstruction network is further improved.
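The abstract does not specify the clustering algorithm or TAC-extraction details; as a minimal NumPy sketch under the assumption of plain k-means (all names hypothetical, and the GAN reconstruction stage is omitted), the segmentation-by-clustering step could look like:

```python
import numpy as np

def extract_tacs(activity):
    """Flatten a dynamic activity map of shape (T, X, Y) into per-voxel
    time-activity curves (TACs) of shape (n_voxels, T)."""
    T = activity.shape[0]
    return activity.reshape(T, -1).T

def kmeans(tacs, k, n_iter=50):
    """Plain k-means on TACs with a simple deterministic initialization
    (evenly spaced samples); returns per-voxel labels and centroids."""
    idx = np.linspace(0, len(tacs) - 1, k).astype(int)
    centroids = tacs[idx].astype(float)
    for _ in range(n_iter):
        # distance of every TAC to every centroid -> (n_voxels, k)
        d = np.linalg.norm(tacs[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = tacs[labels == j].mean(axis=0)
    return labels, centroids

# toy data: two synthetic TAC populations (rising vs. falling curves)
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 10)
rising = np.tile(t, (20, 1)) + 0.01 * rng.standard_normal((20, 10))
falling = np.tile(1.0 - t, (20, 1)) + 0.01 * rng.standard_normal((20, 10))
tacs = np.vstack([rising, falling])
labels, _ = kmeans(tacs, k=2)
```

In the joint framework described above, the cluster labels would form the segmentation map, and a loss computed on them would backpropagate into the reconstruction network.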