Stereotactic radiosurgery (SRS) is widely used to obliterate arteriovenous malformations (AVMs). Its performance relies on the accuracy of delineating the target AVM. Manual segmentation during a framed SRS procedure is time-consuming and subject to inter- and intra-observer variation. Therefore, it is important to develop an automatic segmentation method to delineate the AVM target from CT images. In this study, we retrospectively investigated 80 patients who were treated with SRS. Ground truth was manually generated by an experienced physician using both DSA and CT images. A fast region proposal network was first trained to propose a bounding box containing the AVM lesion. The bounding box was then used to guide the image patch sampling process for V-Net training. In the testing stage, possible AVM locations were first proposed by the region proposal network; V-Net was then used for the final label prediction. Both the region proposal network and V-Net were trained on 60 patients and tested on 20 patients. The mean Dice similarity coefficient (DSC) was calculated to evaluate the accuracy of the proposed method. The automatic contours were in very good agreement with the ground-truth contours, with an average DSC > 0.85.
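For concreteness, the box-guided patch sampling step described above could look like the following minimal numpy sketch; the function name, margin, and patch size are illustrative assumptions rather than details from the paper, and the proposal box is assumed to be given in voxel coordinates on a volume larger than the patch size.

```python
import numpy as np

def sample_patches_in_box(volume, box, patch_size=(64, 64, 64),
                          n_patches=8, margin=8, rng=None):
    """Sample training patches whose centers fall inside a (slightly
    expanded) proposal box. `box` is (z0, y0, x0, z1, y1, x1)."""
    rng = np.random.default_rng() if rng is None else rng
    lo = np.maximum(np.array(box[:3]) - margin, 0)
    hi = np.minimum(np.array(box[3:]) + margin, volume.shape)
    half = np.array(patch_size) // 2
    patches = []
    for _ in range(n_patches):
        # Draw a patch center inside the expanded box (high bound is exclusive).
        center = rng.integers(lo + half, np.maximum(hi - half, lo + half + 1))
        start = np.clip(center - half, 0, np.array(volume.shape) - patch_size)
        z, y, x = start
        dz, dy, dx = patch_size
        patches.append(volume[z:z + dz, y:y + dy, x:x + dx])
    return patches
```

Restricting sampling to the proposal box keeps the V-Net training patches centered on or near the lesion instead of spending capacity on empty background.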
KEYWORDS: Computed tomography, Image quality, X-ray computed tomography, Radiotherapy, Computer simulations, CT reconstruction, Monte Carlo methods, Data modeling, Signal to noise ratio, Medical imaging
Low-dose computed tomography (CT) is desirable for treatment planning and simulation in radiation therapy. Repeated rescanning and replanning during the treatment course, at a fraction of the dose of a single conventional full-dose CT simulation, is a crucial step in adaptive radiation therapy. We developed a machine-learning-based method to improve the image quality of low-dose CT for radiation therapy treatment simulation. We used a residual block concept and a self-attention strategy within a cycle-consistent adversarial network framework. A fully convolutional neural network with residual blocks and attention gates (AGs) was used in the generator to enable end-to-end transformation. We collected CT images from 30 patients treated with frameless brain stereotactic radiosurgery (SRS) for this study. These full-dose images were used to generate projection data, to which noise was then added to simulate the low-mAs scanning scenario. Low-dose CT images were reconstructed from the noise-contaminated projection data and were fed into our network along with the original full-dose CT images for training. The performance of our network was evaluated by quantitatively comparing the high-quality CT images generated by our method with the original full-dose images. When mAs is reduced to 0.5% of the original CT scan, the mean square error of the CT images obtained by our method is ~1.6% with respect to the original full-dose images. The proposed method successfully brought the noise, contrast-to-noise ratio, and nonuniformity level close to those of full-dose CT images and outperforms a state-of-the-art iterative reconstruction method. Dosimetric studies show that the average differences of dose-volume histogram metrics are <0.1 Gy (p > 0.05). These quantitative results strongly indicate that the denoised low-dose CT images produced by our method maintain image accuracy and quality and are accurate enough for dose calculation in current CT simulation of brain SRS treatment. We also demonstrate the great potential of low-dose CT in the process of simulation and treatment planning.
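The low-mAs simulation step, forward-projecting the full-dose images and contaminating the projections with noise, is commonly modeled by scaling the incident photon count and applying Poisson statistics. A minimal numpy sketch under that assumption follows; the photon count I0, the 0.5% dose fraction, and the omission of electronic noise are illustrative simplifications, not parameters reported by the study.

```python
import numpy as np

def simulate_low_dose_sinogram(sinogram, dose_fraction=0.005, I0=1e5, rng=None):
    """Inject quantum noise into attenuation line integrals to mimic a
    low-mAs scan. `sinogram` holds p = -ln(I / I0)."""
    rng = np.random.default_rng() if rng is None else rng
    I0_low = I0 * dose_fraction                   # fewer incident photons per ray
    transmitted = I0_low * np.exp(-sinogram)      # expected detected counts
    noisy = rng.poisson(transmitted).astype(float)
    noisy = np.maximum(noisy, 1.0)                # guard against log(0)
    return -np.log(noisy / I0_low)                # back to line integrals
```

The noisy sinogram is then reconstructed (e.g., by filtered back-projection) to produce the low-dose training inputs.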
Segmentation of the prostate in 3D CT images is a crucial step in treatment planning and procedure guidance such as brachytherapy and radiotherapy. However, manual segmentation of the prostate is very time-consuming and depends on the experience of the clinician. In contrast, automated prostate segmentation is more practical, although the task is very challenging due to the low soft-tissue contrast of CT images. In this paper, we propose a 3D deeply supervised fully convolutional network (FCN) with dilated convolution kernels to automatically segment the prostate in CT images. The deep supervision strategy acquires more discriminative capability and accelerates optimization convergence during training, while concatenating the dilated convolutions enlarges the receptive field to extract more global contextual information for accurate prostate segmentation. The presented method was evaluated on 15 prostate CT images and obtained a mean Dice similarity coefficient (DSC) of 0.85±0.04 and a mean surface distance (MSD) of 1.92±0.46 mm. The experimental results show that our approach yields accurate CT prostate segmentation, which can be employed for prostate-cancer treatment planning in brachytherapy and external beam radiotherapy.
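A building block combining a dilated 3D convolution with a deep-supervision head could be sketched in PyTorch as below; the channel counts, dilation rate, and single-channel auxiliary head are assumptions for illustration and may differ from the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DilatedBlock3D(nn.Module):
    """3D conv block with a dilated kernel to enlarge the receptive field,
    plus an auxiliary 1x1x1 head for deep supervision."""
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.conv = nn.Sequential(
            # padding == dilation keeps the spatial size for a 3x3x3 kernel
            nn.Conv3d(in_ch, out_ch, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.aux_head = nn.Conv3d(out_ch, 1, kernel_size=1)

    def forward(self, x):
        feat = self.conv(x)
        # The auxiliary prediction is upsampled to label resolution and
        # given its own loss term during training.
        return feat, self.aux_head(feat)
```

At training time the auxiliary outputs contribute extra, down-weighted loss terms, which is what supplies the hidden layers with a direct gradient signal.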
We propose to integrate patch-based anatomical signatures and an auto-context model into a machine learning framework to iteratively segment MRI into air, soft tissue and bone. The proposed segmentation of MRIs consists of a training stage and a segmentation stage. During the training stage, patch-based anatomical features were extracted from the aligned MRI-CT training images, and the most informative features were identified to train a series of classification forests with an auto-context model. During the segmentation stage, we extracted the selected features from the MRI and fed them into the well-trained forests for MRI segmentation. The classified results were compared with reference CTs to quantitatively evaluate segmentation accuracy using the Dice similarity coefficient (DSC). This segmentation technique was validated in a clinical study of 11 patients with both MR and CT images of the brain. The DSCs for air, bone and soft tissue were 97.79±0.76%, 93.32±2.35% and 84.49±5.50%, respectively. The corresponding CT Hounsfield units (HU) can be assigned to the three segmented masks (air, soft tissue and bone) to generate a synthetic CT (SCT), which demonstrates that the proposed method has promising potential for generating synthetic CT from MRI for MRI-only photon or proton radiotherapy treatment planning.
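The auto-context cascade of classification forests can be illustrated with scikit-learn. The flattened sketch below appends each stage's class probabilities to the appearance features as "context"; it omits the spatial neighborhood sampling of the full auto-context model, and all names and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_auto_context_forests(features, labels, n_stages=3, n_classes=3):
    """Train a cascade of forests for voxel classification (air / soft
    tissue / bone). `features` is (n_voxels, n_features); `labels` must
    contain all n_classes so predict_proba has n_classes columns."""
    forests = []
    context = np.zeros((features.shape[0], n_classes))
    for _ in range(n_stages):
        X = np.hstack([features, context])
        rf = RandomForestClassifier(n_estimators=100).fit(X, labels)
        context = rf.predict_proba(X)   # context features for the next stage
        forests.append(rf)
    return forests
```

At test time the stages are applied in the same order, each one refining the class-probability maps produced by its predecessor.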
KEYWORDS: Image segmentation, Computed tomography, Angiography, 3D acquisition, 3D image processing, Medical imaging, Magnetic resonance imaging, X-ray computed tomography, Binary data, Medical physics
We propose a learning-based method to automatically segment the arteriovenous malformation (AVM) target volume from computed tomography (CT) in stereotactic radiosurgery (SRS). A deeply supervised 3D V-Net is introduced to enable end-to-end segmentation. A deep supervision mechanism is integrated into the hidden layers to overcome the optimization difficulties of training such a network with limited training data. The probability map of a new AVM contour is generated by the well-trained network. To evaluate the proposed method, we retrospectively investigated 30 AVM patients treated by SRS. For each patient, both digital subtraction angiography (DSA) and contrast CT had been acquired. Using our proposed method, the AVM contours are generated solely from contrast CT images and are compared with the AVM contours delineated from DSA by physicians as ground truth. The average centroid distance, volume difference and DSC value among all 30 patients are 0.83±0.91 mm, -0.01±0.79 cc and 0.84±0.09, respectively, which indicates that the proposed method generates AVM target contours with around 1 mm displacement error, 1 cc volume error and 84% overlap compared with the ground truth. The proposed method has great potential for eliminating DSA acquisition and enabling a solely CT-based treatment planning workflow for AVM SRS treatment.
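The deep-supervision mechanism amounts to adding down-weighted losses on auxiliary predictions taken from hidden decoder layers. A minimal PyTorch sketch with a soft Dice loss follows; the auxiliary weight is an assumed hyperparameter, not a value from the paper.

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a probability map and a binary target."""
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def deeply_supervised_loss(main_prob, aux_probs, target, aux_weight=0.3):
    """Final-layer Dice loss plus down-weighted losses on auxiliary
    predictions (assumed already upsampled to label resolution)."""
    loss = dice_loss(main_prob, target)
    for prob in aux_probs:
        loss = loss + aux_weight * dice_loss(prob, target)
    return loss
```

Because every auxiliary branch receives its own gradient, the early layers are optimized directly rather than through the full depth of the V-Net, which is what eases training with limited data.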
We propose a method to automatically segment multiple organs at risk (OARs) from routinely acquired thorax CT images using a generative adversarial network (GAN). A multi-label U-Net was introduced in the generator to enable end-to-end segmentation. Esophagus and spinal cord location information was used to train the GAN on specific regions of interest (ROIs). For a new thorax CT, per-organ probability maps were generated by the well-trained network and fused to reconstruct the final contours. The proposed algorithm was evaluated using 20 patients' thorax CT images and manual contours. The mean Dice similarity coefficients (DSCs) for esophagus, heart, left lung, right lung and spinal cord were 0.73±0.04, 0.85±0.02, 0.96±0.01, 0.97±0.02 and 0.88±0.03, respectively. This novel deep-learning-based approach with the GAN strategy can automatically and accurately segment multiple OARs in thorax CT images, and could be a useful tool to improve the efficiency of lung radiotherapy treatment planning.
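The abstract does not specify the fusion rule for the per-organ probability maps; one plausible reading is a thresholded argmax over organs, as in the numpy sketch below (the threshold and label convention are assumptions).

```python
import numpy as np

def fuse_probability_maps(prob_maps, threshold=0.5):
    """Fuse per-organ probability maps of shape (organ, z, y, x) into one
    label volume: each voxel takes its most probable organ (labels 1..N),
    or 0 (background) when no organ exceeds the threshold."""
    best = np.argmax(prob_maps, axis=0) + 1
    best_prob = np.max(prob_maps, axis=0)
    return np.where(best_prob >= threshold, best, 0)
```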
Prostate segmentation in MR volumes is a very important task for treatment planning and for image-guided brachytherapy and radiotherapy. Manual delineation of the prostate in MR images is very time-consuming and depends on the subjective experience of the physician. Automatic prostate segmentation is therefore a reasonable and attractive alternative for its speed, even though the task is very challenging because of inhomogeneous intensity and the variability of prostate appearance and shape. In this paper, we propose a method to automatically segment the prostate in MR images based on a 3D deeply supervised FCN with concatenated atrous convolutions (3D DSA-FCN). Straightforward dense predictions used as deep supervision yield more discriminative features and accelerate convergence during training, while the concatenated atrous convolutions extract more global contextual information for accurate predictions. The presented method was evaluated on an internal dataset comprising 15 T2-weighted prostate MR volumes from the Winship Cancer Institute and obtained a mean Dice similarity coefficient (DSC) of 0.852±0.031, a 95% Hausdorff distance (95%HD) of 7.189±1.953 mm and a mean surface distance (MSD) of 1.597±0.360 mm. The experimental results show that our 3D DSA-FCN yields satisfactory MR prostate segmentation, which can be used for image-guided radiotherapy.
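The reported metrics can be computed from binary masks with scipy; the compact sketch below pools both surface-distance directions for the MSD and 95%HD, which is one of several common conventions, so exact values may differ slightly from the paper's.

```python
import numpy as np
from scipy import ndimage

def surface_distances(a, b, spacing=(1.0, 1.0, 1.0)):
    """Distances (in mm) from the surface voxels of boolean mask `a`
    to the nearest voxel of boolean mask `b`."""
    surface = a ^ ndimage.binary_erosion(a)
    dist_to_b = ndimage.distance_transform_edt(~b, sampling=spacing)
    return dist_to_b[surface]

def segmentation_metrics(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Return (DSC, MSD, 95%HD) for two boolean masks."""
    dsc = 2 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())
    d = np.concatenate([surface_distances(pred, gt, spacing),
                        surface_distances(gt, pred, spacing)])
    return dsc, d.mean(), np.percentile(d, 95)
```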