In this study we present a novel contrast-medium anisotropy-aware TTV (Cute-TTV) model to reflect the intrinsic sparsity configuration of a cerebral perfusion computed tomography (PCT) object. We also propose a PCT reconstruction scheme based on the Cute-TTV model (referred to as CuteTTV-RECON) to improve the performance of PCT reconstruction under weak-radiation conditions. An efficient optimization algorithm is developed for CuteTTV-RECON. Preliminary simulation studies demonstrate that it achieves significant improvements over existing state-of-the-art methods in terms of artifact suppression, structure preservation, and parametric-map accuracy under weak radiation.
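The abstract does not specify the Cute-TTV objective itself. As background, the following is a minimal NumPy sketch of a generic axis-weighted (anisotropic) total-variation penalty over a stack of perfusion frames, the kind of term that TTV-style regularizers build on; the per-axis weights, function name, and axis layout are illustrative assumptions, not the authors' model:

```python
import numpy as np

def anisotropic_tv(frames, spatial_w=1.0, temporal_w=0.2):
    """Axis-weighted anisotropic TV over a (T, H, W) perfusion stack.

    Sums absolute first differences along each axis, with separate
    weights for the spatial and temporal directions (illustrative only).
    """
    frames = np.asarray(frames, dtype=float)
    dt = np.abs(np.diff(frames, axis=0)).sum()  # frame-to-frame variation
    dy = np.abs(np.diff(frames, axis=1)).sum()  # vertical spatial variation
    dx = np.abs(np.diff(frames, axis=2)).sum()  # horizontal spatial variation
    return spatial_w * (dx + dy) + temporal_w * dt
```

In a reconstruction scheme such a penalty would be added to a data-fidelity term and minimized jointly; weighting the temporal axis differently from the spatial axes is one simple way to encode direction-dependent (anisotropic) sparsity.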
KEYWORDS: Computed tomography, Dual energy imaging, Gold, Convolution, Bone, Convolutional neural networks, Signal attenuation, Medical imaging, Surgery, Biological research
Dual energy computed tomography (DECT) usually scans the object twice with different energy spectra, which allows two basic material decompositions to be obtained by directly performing signal decomposition. In general, one is the water-equivalent fraction and the other is the bone-equivalent fraction. It is noted that material decomposition typically depends on two or more different energy spectra. In this study, we present a deep learning-based framework to obtain basic material images directly from single-energy CT images via cascaded deep convolutional neural networks (CD-ConvNet). We denote this imaging procedure as pseudo DECT imaging. The CD-ConvNet is designed to learn the non-linear mapping from the measured energy-specific CT images to the desired basic material decomposition images. Specifically, the output of the preceding convolutional neural network (ConvNet) in the CD-ConvNet is used as part of the input to the following ConvNet to produce high-quality material decomposition images. Clinical patient data were used to validate and evaluate the performance of the presented CD-ConvNet. Experimental results demonstrate that the presented CD-ConvNet can yield qualitatively and quantitatively accurate results when compared against the gold standard. We conclude that the presented CD-ConvNet can help to improve the research utility of CT in quantitative imaging, especially in single-energy CT.
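A minimal NumPy sketch of the cascading idea described above, in which the second stage consumes both the measured CT image and the first stage's output. The naive filtering operation stands in for a trained ConvNet, and all kernels, names, and the two-stage split are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def conv2d_same(img, kernel):
    """Naive 'same'-size 2D cross-correlation with zero padding.

    Stands in for one learned convolutional layer (illustrative only).
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * kernel).sum()
    return out

def cascade_decompose(ct_image, k1, k2_img, k2_prev):
    """Two-stage cascade: stage 2 takes both the CT image and stage 1's output."""
    # Stage 1: coarse first material estimate (ReLU keeps fractions non-negative).
    water = np.maximum(conv2d_same(ct_image, k1), 0.0)
    # Stage 2: second material estimate, conditioned on image AND stage-1 output.
    bone = np.maximum(conv2d_same(ct_image, k2_img)
                      + conv2d_same(water, k2_prev), 0.0)
    return water, bone
```

The key design point the abstract describes is the conditioning: later stages see earlier predictions in addition to the raw input, so errors in the first decomposition can be corrected downstream.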
Computed tomography (CT) is one of the most important medical imaging modalities. CT images can be used to assist in the detection and diagnosis of lesions and to facilitate follow-up treatment. However, CT images are vulnerable to noise; there are two major sources that intrinsically cause noise in CT data, i.e., X-ray photon statistics and the electronic noise background. Therefore, it is necessary to perform image quality assessment (IQA) in CT imaging before diagnosis and treatment. Most existing CT image IQA methods are based on human-observer studies. However, these methods are impractical in clinical settings because they are complex and time-consuming. In this paper, we present a blind CT image quality assessment method based on a deep learning strategy. A database of 1500 CT images is constructed, containing 300 high-quality images and 1200 corresponding noisy images. Specifically, the high-quality images were used to simulate the corresponding noisy images at four different doses. The images were then scored by experienced radiologists on a five-point scale for the following attributes: image noise, artifacts, edge and structure, overall image quality, and tumor size and boundary estimation. We trained a network to learn the non-linear map from CT images to the subjective evaluation scores, and the pre-trained model is then loaded to predict a score for each test image. To evaluate the performance of the deep learning network in IQA, two correlation coefficients are utilized: the Pearson linear correlation coefficient (PLCC) and the Spearman rank-order correlation coefficient (SROCC). The experimental results demonstrate that the presented deep learning-based IQA strategy can be used for CT image quality assessment.
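The two evaluation metrics named above can be computed directly from predicted and subjective scores. The following is a minimal NumPy sketch (function names are our own; rank ties are not handled, which library implementations such as SciPy's would do):

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation coefficient between two score vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

def srocc(x, y):
    """Spearman rank-order correlation: Pearson correlation of the ranks.

    Assumes no ties in the scores (illustrative simplification).
    """
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return plcc(rank(x), rank(y))
```

PLCC measures linear agreement between predicted and subjective scores, while SROCC measures monotonic agreement, so SROCC is insensitive to any monotone rescaling of the predictions.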