The quantity and variety of CT imaging data are essential for effective AI-model training. However, the availability of high-quality CT images for organ segmentation is quite limited, and AI-based organ segmentation can be affected by varying contrast-agent intensity. Therefore, to improve the robustness of segmentation both with and without a contrast agent, and to address the data-shortage issue, we propose a multi-planar U-Net with an augmented contrast-boosting technique. Any program employing the proposed method may benefit from a reduced burden of large-scale dataset preparation and improved AI-model training efficiency.
Three-dimensional CT images have enabled dentists to visualize the overall anatomic structure of teeth and jawbones. Dental cone-beam CT (CBCT) images contain far more information than panoramic radiographs, but it is hard to assess the overall teeth and jaw structures at a single glance. Although panoramic images facilitate the evaluation of overall anatomic structures, they may include geometric distortions, blurring, and superimposition of multiple structures caused by the spine and ghost effects. Some image viewers provide a cut-viewing function that enables orthogonal viewing along a user-defined dental arch. However, such orthogonal views look different from panoramic images physically acquired by a panoramic scanner. To make CBCT images more convenient to use, we developed an approach to synthesize panoramic images from the CBCT images. During synthesis, we removed the ghost and spine superimposition to improve the visibility of the synthesized panoramic images.
The panoramic image synthesis was performed in three steps. First, we extracted the panoramic dental arch from the CBCT images. Next, we removed the ghost-inducing bone parts from the CBCT images. Finally, we synthesized panoramic images by stitching thousands of partial-view panoramic projection images.
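The projection-and-stitch idea behind these steps can be sketched as follows. This is a minimal, hypothetical NumPy illustration of the final synthesis step, not the authors' implementation: it assumes the dental arch has already been extracted as a 2-D point curve and the ghost-inducing bone already masked out, and all function and parameter names are ours.

```python
import numpy as np

def synthesize_panorama(volume, arch_xy, ray_half_len=20, n_samples=41):
    """Project a CT volume along rays normal to a dental arch curve.

    volume:   (Z, Y, X) array of CT intensities (ghost-inducing bone
              assumed already removed, per the paper's second step).
    arch_xy:  (N, 2) array of (x, y) arch points in voxel coordinates.
    Returns a (Z, N) panoramic image: one column per arch point.
    """
    Z, Y, X = volume.shape
    # Tangent of the arch at each point, then its in-plane normal,
    # which is the direction of the projection ray.
    tangents = np.gradient(arch_xy, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)

    panorama = np.zeros((Z, len(arch_xy)))
    offsets = np.linspace(-ray_half_len, ray_half_len, n_samples)
    for i, (p, n) in enumerate(zip(arch_xy, normals)):
        # Sample the volume along the normal ray (nearest neighbour).
        xs = np.clip(np.round(p[0] + offsets * n[0]).astype(int), 0, X - 1)
        ys = np.clip(np.round(p[1] + offsets * n[1]).astype(int), 0, Y - 1)
        # Average intensity along the ray for every axial slice,
        # forming one panoramic column per arch point.
        panorama[:, i] = volume[:, ys, xs].mean(axis=1)
    return panorama
```

A real system would stitch many partial-view projection images with scanner-like geometry rather than averaging a single ray per column; this sketch only shows how arch-normal projection turns a volume into a panoramic image.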
With the synthesized panoramic image, an additional panoramic scan is no longer necessary, and undesirable features of conventional panoramic radiographs (e.g., ghost artifacts or cervical-spine superimposition) can be entirely removed.
In this study, we present a deep learning approach for denoising ultra-low-dose chest CT by combining a low-dose simulation and a convolutional neural network (CNN). A total of 18,456 anonymized regular-dose chest CT images were used for training the CNN. The training CT images were fed into the low-dose simulation tool to generate a paired set of simulated low-dose CT images and synthetic low-dose noise. A modified U-net model with a 4×4 kernel size and five layers was trained with these paired datasets to predict the low-dose noise from a given low-dose CT image. Ten independent ultra-low-dose chest CT scans at 120 kVp and 5 mAs were used to test the denoising performance of the trained U-net. Denoised CT images were obtained by subtracting the predicted noise image from the ultra-low-dose chest CT images. We evaluated image quality by measuring the noise standard deviation of soft tissue and by visual assessment of the bronchial wall, lung fissure, and soft tissue. For comparison, image quality was assessed on FBP, VEO, and deep learning-denoised FBP images. The visual assessments, made on a four-point scale, were 1.0, 3.4, and 4.0 for FBP, VEO, and deep learning-denoised FBP images, respectively. Image noise of soft tissue was 101±28 HU, 20±5 HU, and 28±10 HU in FBP, VEO, and deep learning-denoised images, respectively.
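The paired-data generation and noise-subtraction scheme described above can be sketched as follows. This hypothetical NumPy snippet substitutes simple Gaussian noise for the paper's low-dose simulation tool and a perfect predictor for the trained U-net; it only shows the residual-learning data flow, not the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_low_dose(ct, noise_sigma=80.0):
    """Stand-in for the low-dose simulation tool: returns the simulated
    low-dose image and the synthetic noise that was added to create it.
    (Real CT noise is not Gaussian; this is only illustrative.)"""
    noise = rng.normal(0.0, noise_sigma, ct.shape)
    return ct + noise, noise

# Training pairs: the network input is the simulated low-dose image and
# the target is the synthetic noise itself (residual learning).
regular_dose = rng.normal(40.0, 10.0, (128, 128))   # placeholder HU image
low_dose, noise_target = simulate_low_dose(regular_dose)

# At inference the trained U-net predicts the noise; here a perfect
# predictor stands in for the trained model.
predicted_noise = noise_target

# Denoising = subtract the predicted noise from the low-dose image.
denoised = low_dose - predicted_noise
```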
This study presents a novel deep learning approach for denoising ultra-low-dose cardiac CT angiography (CCTA) by combining a low-dose simulation technique and a convolutional neural network (CNN). Twenty-five CT angiography (CTA) scans acquired with ECG gating (70–100 kVp, 100–200 mAs) were fed into the low-dose simulation tool to generate a paired set of simulated low-dose CTA images and synthetic low-dose noise. A modified U-net model with a 4×4 kernel size and five layers was trained with these paired datasets to predict the low-dose noise from a given low-dose CCTA image. To generate the simulated low-dose CTA images, differing low-dose conditions from 10% to 2.5% were applied. Five independent ultra-low-dose CTA scans (70–100 kVp, 4% of full dose) with ECG gating were used to test the denoising performance of the trained U-net. A denoised CCTA image was obtained by subtracting the noise image predicted by the U-net from the ultra-low-dose CCTA image. The performance was evaluated quantitatively in terms of noise measurements in the ascending aorta and left/right ventricles, and qualitatively by comparing the noise pattern and image quality. Average image noise in the ascending aorta and left/right ventricles was 149±41 HU, 200±15 HU, and 164±21 HU in ultra-low-dose images, and 46±14 HU, 66±9 HU, and 55±12 HU in deep learning-denoised images. The overall noise was significantly reduced, by 70%. The noise pattern was indistinguishable from that of a real CCTA image, and the image quality of the denoised CCTA images was much higher than that of the ultra-low-dose CCTA images.
Effective elimination of the unique CT noise pattern while preserving adequate image quality is crucial for reducing radiation dose to ultra-low-dose levels in CT imaging practice. In this study, we present a novel Deep Learning-enabled Iterative Reconstruction (Deep IR) approach for CT denoising that incorporates a synthetic sinogram-based noise simulation technique for training a convolutional neural network (CNN). Regular-dose CT images from 25 patients at Seoul National University Hospital were used. The CT scans were performed at 140 kVp and 100 mAs and reconstructed with the standard FBP technique using a B60f kernel. Among them, 20 patients were randomly selected as the training set and the remaining 5 patients were used as the test set. We applied a re-projection technique to create a synthetic sinogram from each DICOM CT image, and then generated a simulated noise sinogram to match the noise level of a 10 mAs scan according to Poisson statistics and the system noise model of the given scanner (Somatom Sensation 16, Siemens). We added the simulated noise sinogram to the re-projected synthetic sinogram to generate a simulated sinogram of an ultra-low-dose scan. We also created the simulated ultra-low-dose CT image by applying FBP reconstruction to this simulated sinogram with the B60f kernel. A CNN model was created using the TensorFlow framework with 10 consecutive convolution and activation layers. The CNN was trained to learn the noise in the sinogram domain: the simulated noisy sinogram of the ultra-low-dose scan was fed into its input nodes, with the output nodes fed by the simulated noise sinogram. In the test phase, the noise sinogram from the CNN output was reconstructed using the B60f kernel to create a noise CT image, which in turn was subtracted from the simulated ultra-low-dose CT image to produce a Deep IR CT image.
The performance was evaluated quantitatively in terms of the structural similarity (SSIM) index, peak signal-to-noise ratio (PSNR), and noise-level measurements, and qualitatively by comparing the noise pattern and image quality of the CT images. Compared with the low-dose images, the SSIM and PSNR of the denoised images improved from 0.75 to 0.80 and from 28.61 dB to 32.16 dB, respectively. The noise level of the denoised images was reduced to an average of 56% of that of the low-dose images. The noise pattern in the reconstructed noise CT was indistinguishable from that of real CT images, and the image quality of the Deep IR CT images was overall much higher than that of the simulated ultra-low-dose CT.
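The sinogram-domain noise insertion described above can be sketched with a simple Poisson transmission model. This is an illustrative NumPy snippet, not the authors' tool: it omits the scanner-specific system noise model, and the `i0_full` photon count is an assumed value.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_poisson_noise(sinogram, i0_full=1e5, dose_fraction=0.1):
    """Inject quantum noise into a (re-projected) sinogram so it matches
    a reduced tube-current scan, using a simple Poisson transmission model.

    i0_full:       unattenuated photon count at full dose (assumed value).
    dose_fraction: e.g. 0.1 to simulate 10 mAs from a 100 mAs scan.
    Returns the noisy sinogram and the noise sinogram (their difference).
    """
    i0 = i0_full * dose_fraction
    transmitted = i0 * np.exp(-sinogram)        # Beer-Lambert attenuation
    noisy_counts = rng.poisson(transmitted)     # quantum (Poisson) noise
    noisy_counts = np.maximum(noisy_counts, 1)  # avoid log(0)
    noisy_sino = -np.log(noisy_counts / i0)     # back to line integrals
    return noisy_sino, noisy_sino - sinogram
```

Under this model, lowering `dose_fraction` lowers the photon count and so raises the relative Poisson noise, mimicking a lower-mAs acquisition; in the paper the noise sinogram is then added to the re-projected synthetic sinogram before FBP reconstruction.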
Mammographic breast density is a well-established marker of breast cancer risk. However, accurate measurement of dense tissue is difficult because of faint contrast and significant variation in background fatty tissue. This study presents a novel method for automated mammographic density estimation based on a convolutional neural network (CNN). A total of 397 full-field digital mammograms were selected from Seoul National University Hospital. Among them, 297 mammograms were randomly selected as the training set and the remaining 100 mammograms were used as the test set. We designed a CNN architecture suitable for learning imaging characteristics from a multitude of sub-images and classifying them into dense and fatty tissue. To train the CNN, not only local statistics but also global statistics extracted from an image set were used. Although a CNN is well known to extract features effectively from the original image alone, the image set was composed of the original mammogram and an eigen-image able to capture the X-ray characteristics. The 100 test images, which were not used in training the CNN, were used to validate the performance. The correlation coefficient between the breast-density estimates by the CNN and those from the expert's manual measurement was 0.96. Our study demonstrated the feasibility of incorporating deep learning technology into radiology practice, especially for breast density estimation. The proposed method has the potential to be used as an automated and quantitative assessment tool for mammographic breast density in routine practice.
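Once sub-images have been classified as dense or fatty, percent density reduces to an area ratio. This is a minimal, hypothetical NumPy sketch of that final step only; the CNN classification itself is not shown, and the masks are assumed inputs rather than part of the authors' method.

```python
import numpy as np

def percent_density(dense_mask, breast_mask):
    """Percent mammographic density: dense-tissue area over breast area.

    dense_mask / breast_mask: boolean pixel maps, e.g. assembled from
    per-patch dense-vs-fatty classifications (a CNN in the paper; here
    the masks are simply given)."""
    breast_area = breast_mask.sum()
    if breast_area == 0:
        return 0.0
    dense_area = np.logical_and(dense_mask, breast_mask).sum()
    return 100.0 * dense_area / breast_area
```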