Low-performing pixels (LPPs), i.e., missing or bad channels in CT detectors, if left uncorrected, cause ring and streak artifacts and structured non-uniformities, rendering the reconstructed image unusable for diagnostic purposes. Many image processing methods have been proposed to correct ring and streak artifacts in reconstructed images, but it is more appropriate to correct the LPPs in the sinogram domain, where the errors are localized. Although Generative Adversarial Network (GAN) based sinogram inpainting methods have shown promise in interpolating the missing sinogram information, the reconstructed images often lack diagnostic value, especially when visualizing soft tissues at certain window widths and levels. In this work, we propose a deep-learning-based solution that operates on the sinogram data to remove the distortions caused by LPPs. The method leverages the CT system geometry (including conjugate-ray information) to learn anatomy-aware interpolation in the sinogram domain. We demonstrate the efficacy of the proposed method using data acquired on a GE RevACT multi-slice CT system with a flat-panel detector. We considered 46 axial head scans, of which 42 sets were used for training and the remaining 4 for validation/testing. We simulated isolated LPPs accounting for 10% of the total channels in the central panel of the detector and corrected them using the proposed approach. Detailed statistical analysis revealed an SNR improvement of approximately 5 dB, in both the sinogram and reconstruction domains, over classical bicubic and Lagrange interpolation methods. Moreover, with the reduction in ring and streak artifacts, perceptual image quality improved across all the test images.
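The conjugate-ray information mentioned above follows from standard fan-beam geometry: the same line integral is sampled twice per rotation, which gives a physically valid donor sample for a dead channel. The sketch below illustrates the idea with a nearest-neighbour conjugate fill; the function names, grid layout, and sampling are illustrative assumptions, not the paper's actual network-based interpolation.

```python
import numpy as np

def conjugate_ray(beta, gamma):
    # Fan-beam geometry: the ray measured at source angle beta and fan
    # angle gamma is measured again, from the opposite side, at source
    # angle beta + pi + 2*gamma with fan angle -gamma.
    return (beta + np.pi + 2.0 * gamma) % (2.0 * np.pi), -gamma

def fill_from_conjugate(sino, betas, gammas, bad_idx):
    # sino: (n_views, n_channels) fan-beam sinogram; bad_idx: channel
    # index of an isolated low-performing pixel. Each bad sample is
    # replaced by the nearest available sample along its conjugate ray.
    fixed = sino.copy()
    for v, beta in enumerate(betas):
        cb, cg = conjugate_ray(beta, gammas[bad_idx])
        vi = np.argmin(np.abs((betas - cb + np.pi) % (2 * np.pi) - np.pi))
        gi = np.argmin(np.abs(gammas - cg))
        fixed[v, bad_idx] = sino[vi, gi]
    return fixed
```

Applying `conjugate_ray` twice returns the original ray (modulo a full rotation), which is a quick sanity check on the mapping.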
In Computed Tomography (CT), problems like low-to-high-dose image conversion, denoising, and super-resolution using deep regression networks have gained a lot of importance. This has led to the study of multiple denoising approaches with these networks, as the non-linear characteristics of the network often change the noise pattern. Noise2Noise (N2N) and similar studies have shown the impact of these networks, where one can use noisy image pairs and exploit the uncorrelated nature of noise to generate denoised images. In this paper, we study the behaviour of a regression network for domain translation from an image at one energy to an image at another energy, and its impact on noise. Inter-kVp translation changes the attenuation values in a CT image. A design of experiments is set up using the CATSim phantom with different tissue types at varying levels of density and proportions of materials, covering heart, soft tissue, fat, calcification, and bone. The intent is to understand the impact of inter- and intra-tissue dynamic ranges, as this network learns the intensity translation of the image between two kVp levels along with the noise characteristics. The results demonstrate the ability of the regression network to change intensity values from a low-kVp image to a high-kVp image. They also show that the impact on the noise level in a tissue is proportional to (a) the intra-tissue variability and (b) the desired mean shift of the tissue between the two energy levels. Simultaneously, there is a marked change in the artifacts and the resultant image quality, as expected, through the learning method.
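The reported proportionality between noise change and mean shift can be illustrated with a toy model: within one tissue, an ideal intensity translation acts approximately like an affine map, and any affine map scales the noise by its slope. The HU values below are illustrative assumptions, not numbers from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical soft-tissue ROI: mean 40 HU at low kVp mapped to
# mean 60 HU at high kVp, with 5 HU of low-kVp noise.
mu_lo, mu_hi, sigma = 40.0, 60.0, 5.0
x = rng.normal(mu_lo, sigma, 100_000)   # noisy low-kVp samples

# Idealized per-tissue translation: slope fixed by the mean shift.
slope = mu_hi / mu_lo
y = slope * x                           # translated high-kVp estimate

# The output noise scales with the slope, i.e. with the mean shift
# requested between the two energy levels.
noise_ratio = np.std(y) / np.std(x)     # equals slope exactly here
```

A trained network is of course non-linear globally, but this local-linearization view matches the abstract's observation that tissues requiring a larger mean shift see a larger noise change.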
Images produced by CT systems with larger detector pixels often suffer from lower z-resolution due to their wider slice sensitivity profile (SSP). Reducing the effect of SSP blur resolves finer structures and enables better clinical diagnosis. Hardware solutions, such as dicing the detector cells smaller or dynamically deflecting the X-ray focal spot, do exist to improve the resolution, but they are expensive. In the past, algorithmic solutions such as deconvolution techniques have also been used to reduce SSP blur. These model-based approaches are iterative in nature and time consuming. Recently, supervised data-driven deep learning methods have become popular in computer vision for deblurring/deconvolution applications. Though most of these methods need corresponding pairs of blurred (LR) and sharp (HR) images, they are non-iterative during inference and hence computationally efficient. However, unlike the model-based approaches, these methods do not explicitly model the physics of degradation. In this work, we propose Resolution Amelioration using Machine Adaptive Network (RAMAN), a self-supervised deep learning framework that combines the best of both the learning-based and model-based approaches. The framework explicitly accounts for the physics of degradation and appropriately regularizes the learning process. Also, in contrast to supervised deblurring methods that need paired LR and HR images, the RAMAN framework requires only LR images and SSP information for training, making it self-supervised. Validation of the proposed framework with images obtained from larger-detector systems shows marked improvement in image sharpness while maintaining HU integrity.
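The core of such a self-supervised formulation is a data-consistency term: since the SSP is known, the network's sharp prediction can be re-blurred with it and compared against the measured LR data, so no paired HR target is needed. The sketch below shows this degradation model and loss for a 1-D z-profile; it is a minimal illustration under assumed function names, not the RAMAN architecture or its regularization.

```python
import numpy as np

def ssp_blur(profile_z, ssp):
    # Forward degradation model: blur a z-profile with the (normalized)
    # slice sensitivity profile.
    return np.convolve(profile_z, ssp / ssp.sum(), mode="same")

def self_supervised_loss(pred_hr, observed_lr, ssp):
    # Data-consistency term used in place of a paired-HR loss: the
    # prediction, re-blurred with the known SSP, must reproduce the
    # measured LR profile.
    return np.mean((ssp_blur(pred_hr, ssp) - observed_lr) ** 2)
```

By construction, the true sharp profile achieves zero loss against its own blurred measurement, while a prediction that is still blurry does not.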
CT systems with large detector size suffer from lower z-resolution, leading to pixelated images and an inability to detect small structures, thus adversely impacting diagnosis and screening. Overlap reconstruction can partially reduce the stair-step artifacts but does not mitigate the effect of the wider slice sensitivity profile (SSP) and thus continues to give reduced visibility of smaller structures. In this work, we propose a supervised deep learning method for z-resolution enhancement such that (a) the effective SSP of the resulting image is reduced; (b) the quantitative tissue values (CT numbers) and tissue contrast are preserved; (c) noise enhancement is very limited; and (d) the tissue interface between bone and soft tissue is improved. The proposed method devises a super-resolution (SURE) network trained to map low-resolution (LR) slices to the corresponding high-resolution (HR) slices. A 2D network is trained on sagittal and coronal slices with the LR-HR pair sets. Training uses ground-truth HR slices obtained from high-end systems, and the corresponding LR slices are synthesized either by retro-reconstruction with higher slice thickness and spacing or by averaging slices in the z-direction from the HR images. The network is trained with both these types of images using helical acquisition volumes from a range of scanners. Qualitative and quantitative analyses were performed on the predicted HR images and compared with the original HR images. The FWHM of the SSP of the predicted HR images reduced from ~0.98 to ~0.73, against a target of 0.64, thus improving the real z-resolution. The HU distribution of different tissue types also showed stability in terms of mean value. Noise, measured through standard deviation, was slightly higher than in the LR image but lower than in the original HR images. PSNR also showed consistent improvement on all cases across 3 different systems.
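The FWHM figures quoted above are the standard way to quantify an SSP's width. A minimal sketch of how such a measurement can be computed from a sampled profile is given below (linear interpolation at the half-maximum crossings; it assumes the profile falls below half maximum at both ends of the sampled range).

```python
import numpy as np

def fwhm(z, profile):
    # Full width at half maximum of a sampled SSP, with linear
    # interpolation between the samples bracketing the half level.
    # Assumes the profile drops below half max at both edges.
    half = profile.max() / 2.0
    above = np.flatnonzero(profile >= half)
    i0, i1 = above[0], above[-1]
    # Rising edge: xp is increasing, so np.interp can be used directly.
    left = np.interp(half, [profile[i0 - 1], profile[i0]], [z[i0 - 1], z[i0]])
    # Falling edge: order the bracket so xp is increasing.
    right = np.interp(half, [profile[i1 + 1], profile[i1]], [z[i1 + 1], z[i1]])
    return right - left
```

For a Gaussian SSP with unit standard deviation the result should approach the analytic value 2*sqrt(2*ln 2) ≈ 2.3548, which makes a convenient check on the routine.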