In proton therapy, quality assurance (QA) CT is often acquired along the treatment course to evaluate dosimetric changes caused by patient anatomy variation and, if needed, to replan the treatment on the new anatomy. This is particularly relevant for head-and-neck (HN) cancer, which often involves many organs-at-risk (OARs) in close proximity to the targets and has a high replan rate of around 45.6% after week 4. For this purpose, the OARs must be contoured on all acquired QA CT sets for dose-volume-histogram analysis, and the QA CT must be deformed to the planning CT to evaluate the anatomy variation and the accumulated dose over the treatment course. To facilitate this process, in this study we propose a deep learning-based method for groupwise HN QA CT deformable image registration that performs mutual deformation between the planning CT and the QA CTs in a single shot. A total of 30 patients' datasets, each with one planning CT and three QA CTs acquired throughout the treatment, were collected. The network was trained to register the CT images in both directions, namely registering the planning CT to each QA CT and each QA CT to the planning CT. The proposed mutual image registration framework greatly improves registration accuracy compared with the initial rigid registration. The mean absolute error (MAE) and structural similarity index (SSIM) were calculated to evaluate the performance of the trained network. On average, the MAE was 133±29 HU and 88±15 HU for the rigid and the proposed registration, respectively. The SSIM was on average 0.92±0.01 and 0.94±0.01 for the rigid and the proposed registration, respectively.
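The two metrics quoted above (MAE in HU and SSIM) are standard image-similarity measures; the sketch below shows how they could be computed for a deformed QA CT against the planning CT. The array names, the HU clipping range, and the use of scikit-image's structural_similarity are assumptions, not details taken from the paper.

```python
# Minimal sketch, assuming both volumes are resampled to the same grid.
import numpy as np
from skimage.metrics import structural_similarity

def evaluate_registration(moved_ct, target_ct, hu_range=(-1000.0, 1000.0)):
    """Return (MAE in HU, SSIM) for two 3D CT volumes on the same voxel grid."""
    moved = np.clip(moved_ct.astype(np.float32), *hu_range)
    target = np.clip(target_ct.astype(np.float32), *hu_range)
    mae = np.mean(np.abs(moved - target))
    ssim = structural_similarity(moved, target, data_range=hu_range[1] - hu_range[0])
    return mae, ssim

# Random volumes stand in for the registered QA CT and the planning CT.
rng = np.random.default_rng(0)
plan_ct = rng.uniform(-1000, 1000, size=(64, 64, 64))
deformed_qa_ct = plan_ct + rng.normal(0, 50, size=plan_ct.shape)
print(evaluate_registration(deformed_qa_ct, plan_ct))
```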
In this work, we propose a convolutional vision transformer V-net (CVT-Vnet) for multi-organ segmentation in 3-dimensional CT images of head and neck cancer patients for radiotherapy treatment planning. The organs include the brainstem, chiasm, mandible, optic nerves (left and right), parotids (left and right), and submandibular glands (left and right). The proposed CVT-Vnet has a U-shaped encoder-decoder architecture. A CVT is first deployed as the encoder to capture global characteristics while preserving precise local details, and a convolutional decoder is then used to assemble the segmentation from the features learned by the CVT. We evaluated the network using a dataset of 32 patients undergoing radiotherapy treatment. We present a quantitative evaluation of the proposed CVT-Vnet in terms of volume similarity (Dice score, sensitivity, precision, and absolute percentage volume difference (AVD)) and surface similarity (Hausdorff distance (HD), mean surface distance (MSD), and residual mean square distance (RMSD)), using the physicians' manual contours as the ground truth. The volume similarities averaged over all organs were 0.79 (Dice score), 0.83 (sensitivity), and 0.78 (precision). The average surface similarities were 13.41 mm (HD), 0.39 mm (MSD), and 1.01 mm (RMSD). The proposed network performed significantly better than Vnet and DV-net, two state-of-the-art methods. The proposed CVT-Vnet can be a promising tool for multi-organ delineation in head and neck radiotherapy treatment planning.
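For a single organ, the volume-similarity measures reported above (Dice, sensitivity, precision, AVD) can be computed from overlap counts of binary masks. The sketch below assumes boolean masks on the same grid and a simple AVD definition (absolute volume difference relative to the ground-truth volume, in percent); the paper's exact definitions are not reproduced here.

```python
# Minimal sketch of per-organ volume-similarity metrics, assuming binary masks.
import numpy as np

def volume_metrics(pred, gt):
    """pred, gt: boolean 3D masks of the same shape (non-empty)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    dice = 2.0 * tp / (pred.sum() + gt.sum())
    sensitivity = tp / gt.sum()        # fraction of ground-truth voxels recovered
    precision = tp / pred.sum()        # fraction of predicted voxels that are correct
    avd = abs(pred.sum() - gt.sum()) / gt.sum() * 100.0  # assumed AVD definition, in %
    return dict(dice=dice, sensitivity=sensitivity, precision=precision, avd=avd)

pred = np.zeros((10, 10, 10), bool); pred[2:6, 2:6, 2:6] = True
gt = np.zeros_like(pred); gt[3:7, 3:7, 3:7] = True
print(volume_metrics(pred, gt))
```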
Typical radiation therapy for head-and-neck cancer patients lasts for more than a month. Anatomical variations often occur along the treatment course due to tumor shrinkage and weight loss, particularly for head and neck (HN) cancer patients. To maintain the accuracy of radiotherapy beam delivery, weekly quality assurance (QA) CT is sometimes acquired to monitor patients' anatomical changes and to re-plan the treatment if needed. However, re-planning is a labor-intensive and time-consuming process; thus, re-plan decisions are always made cautiously. In this study, we aim to develop a deep learning-based method for automated multi-organ segmentation from HN CT images to rapidly evaluate anatomical variations. Our proposed method, named the detecting and boosting network, consists of a pre-trained fully convolutional one-stage object detector (FCOS) and two learnable subnetworks, i.e., a hierarchical block and a mask head. FCOS is used to extract informative features from the CT and to locate the volumes-of-interest (VOIs) of multiple organs. The hierarchical block enhances the feature contrast around organ boundaries and thus improves organ classification. The mask head then segments each organ from the refined feature map within its VOI. We conducted a five-fold cross-validation on 35 patients who had multiple weekly CT scans (over 100 QA CTs) during their radiotherapy. Eleven organs were segmented and compared with manual contours using several segmentation measurements. Mean Dice similarity coefficient (DSC) values of 0.82, 0.82, and 0.81 were achieved over all organs along the treatment course. These results demonstrate the feasibility and efficacy of the proposed method for multi-OAR segmentation from HN CT, which can be used to rapidly evaluate anatomical variations in HN radiation therapy.
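The detect-then-segment idea above can be illustrated by simple crop-and-restore bookkeeping: a detected VOI is expanded by a margin, cropped from the volume, segmented, and the mask is written back into the full-resolution grid. In the actual method the segmentation operates on network feature maps rather than raw voxels, and the box format, margin, and segment_fn placeholder below are assumptions made for illustration only.

```python
# Minimal sketch of segmenting inside a detected VOI and restoring the full-size mask.
import numpy as np

def segment_in_box(volume, box, segment_fn, margin=8):
    """box = (z0, y0, x0, z1, y1, x1) in voxel coordinates; segment_fn maps a crop to a mask."""
    z0, y0, x0, z1, y1, x1 = box
    lo = np.maximum([z0 - margin, y0 - margin, x0 - margin], 0)
    hi = np.minimum([z1 + margin, y1 + margin, x1 + margin], volume.shape)
    crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    mask = np.zeros(volume.shape, dtype=bool)
    mask[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = segment_fn(crop)
    return mask

# Placeholder usage: a thresholding "segmenter" stands in for the mask head.
vol = np.random.rand(80, 96, 96)
mask = segment_in_box(vol, (20, 30, 30, 40, 60, 60), lambda c: c > 0.5)
print(mask.sum())
```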
The delineation of the target and organs-at-risk (OARs) is a necessary step in radiotherapy treatment planning. The accuracy of the target and OAR contours directly affects the quality of radiotherapy plans. Manual contouring of OARs is the routine procedure at present, which, however, is very time-consuming and requires significant expertise, especially for head-and-neck (HN) cancer cases, where OARs are densely distributed around tumors with complex anatomical structures. In this study, we propose a deep learning-based fully automated delineation method, namely a mask scoring regional convolutional neural network (MS-RCNN), to obtain consistent and reliable OAR contours from HN CT. In the model, MR images were synthesized from CT images by a cycle-consistent generative adversarial network. A backbone network was utilized to extract features from the MRI and CT independently. The high bony-structure contrast in CT and the soft-tissue contrast in MRI are complementary in nature, and combining these complementary contrasts is expected to improve the accuracy of OAR delineation. Owing to its strong object detection and classification capability, ResNet-101 was used as the backbone in MS-RCNN. The Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), mean surface distance (MSD), and residual mean square distance (RMS) were calculated to evaluate the performance of the proposed method. Average DSC, HD95, MSD, and RMS values of 0.78 (0.58-0.89), 4.88 mm (2.79-7.46 mm), 1.39 mm (0.69-1.99 mm), and 2.23 mm (1.30-3.23 mm), respectively, were achieved across all 12 OARs by the proposed method. The proposed method is promising in facilitating auto-contouring for radiotherapy treatment planning.
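The surface-distance measures quoted above (HD95, MSD, RMS) are commonly computed from symmetric surface-to-surface distances via Euclidean distance transforms. The sketch below assumes boolean masks and isotropic 1 mm voxels (anisotropic spacing would be passed through the sampling argument); it is a generic implementation, not the authors' code.

```python
# Minimal sketch of symmetric surface-distance metrics between two binary masks.
import numpy as np
from scipy import ndimage

def surface_distances(pred, gt, spacing=(1.0, 1.0, 1.0)):
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    # Surface voxels = mask minus its erosion.
    pred_surf = pred ^ ndimage.binary_erosion(pred)
    gt_surf = gt ^ ndimage.binary_erosion(gt)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dt_gt = ndimage.distance_transform_edt(~gt_surf, sampling=spacing)
    dt_pred = ndimage.distance_transform_edt(~pred_surf, sampling=spacing)
    d = np.concatenate([dt_gt[pred_surf], dt_pred[gt_surf]])  # symmetric distances
    return dict(hd95=np.percentile(d, 95),
                msd=d.mean(),
                rms=np.sqrt((d ** 2).mean()))

a = np.zeros((40, 40, 40), bool); a[10:30, 10:30, 10:30] = True
b = np.zeros_like(a); b[12:32, 12:32, 12:32] = True
print(surface_distances(a, b))
```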
This work presents a learning-based method to synthesize dual energy CT (DECT) images from conventional single energy CT (SECT). The proposed method uses a residual attention generative adversarial network. Residual blocks with attention gates were used to force the model to focus on the difference between DECT maps and SECT images. To evaluate the accuracy of the method, we retrospectively investigated 20 head-and-neck cancer patients with both DECT and SECT scans available. The high- and low-energy CT images acquired from DECT served as learning targets in the training process for the SECT datasets and were compared against the results of the proposed method using a leave-one-out cross-validation strategy. The synthesized DECT images showed an average mean absolute error of around 30 Hounsfield units (HU) across the whole-body volume. These results strongly indicate the high accuracy of the DECT images synthesized by our machine-learning-based method.
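A residual block with an attention gate, as mentioned above, can be sketched in a few lines of PyTorch. The channel counts, normalization choice, and exact wiring below are assumptions; the gate follows the common additive-attention formulation rather than the authors' specific implementation.

```python
# Minimal sketch of a residual block gated by additive attention (assumed design).
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.theta = nn.Conv3d(channels, channels // 2, kernel_size=1)
        self.phi = nn.Conv3d(channels, channels // 2, kernel_size=1)
        self.psi = nn.Conv3d(channels // 2, 1, kernel_size=1)

    def forward(self, x, gate):
        # Additive attention: sigmoid(psi(relu(theta(x) + phi(gate)))) scales x voxel-wise.
        a = torch.sigmoid(self.psi(torch.relu(self.theta(x) + self.phi(gate))))
        return x * a

class ResidualAttentionBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
        )
        self.attn = AttentionGate(channels)

    def forward(self, x):
        y = self.conv(x)
        y = self.attn(y, x)        # attend to the residual using the block input as gate
        return torch.relu(x + y)   # residual connection

x = torch.randn(1, 16, 32, 32, 32)
print(ResidualAttentionBlock(16)(x).shape)  # torch.Size([1, 16, 32, 32, 32])
```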
This work aims to develop an automatic multi-organ segmentation approach based on deep learning for the head-and-neck region on dual energy CT (DECT). The proposed method uses a mask scoring R-CNN in which comprehensive features are first learned by two independent pyramid networks and then combined via a deep attention strategy to highlight the informative features extracted from the low- and high-energy CT channels. To perform multi-organ segmentation and reduce misclassification, a mask scoring subnetwork was integrated into the Mask R-CNN framework to relate the class of each detected organ ROI to the shape of that organ's segmentation within the ROI. We trained and tested our model on DECT images from 66 head-and-neck cancer patients, with manual contours of 19 organs as the training target and ground truth. For large- and mid-sized organs such as the brain and parotid, the proposed method achieved an average Dice similarity coefficient (DSC) larger than 0.8. For small organs with very low contrast, such as the chiasm, cochlea, lens, and optic nerves, the DSCs ranged between 0.5 and 0.8. With the proposed method, using DECT images outperforms using SECT for all 19 organs, with statistically significant differences in DSC (p < 0.05). The quantitative results demonstrate the feasibility of the proposed method, the superiority of DECT over conventional SECT, and the advantage of the proposed R-CNN over the FCN. The proposed method has the potential to facilitate the current radiation therapy workflow in treatment planning.
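The "p < 0.05" claim above implies a paired significance test on per-patient DSC values. The abstract does not name the test; a Wilcoxon signed-rank test on paired DECT and SECT scores is one common choice and is what the sketch below shows, using placeholder data rather than the study's results.

```python
# Minimal sketch of a paired DECT-vs-SECT significance test on per-patient DSC values.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
dsc_dect = np.clip(rng.normal(0.82, 0.04, size=66), 0, 1)                   # placeholder scores
dsc_sect = np.clip(dsc_dect - rng.normal(0.02, 0.02, size=66), 0, 1)        # placeholder scores

stat, p_value = wilcoxon(dsc_dect, dsc_sect)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4f}")
```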
Proton radiation therapy delivers a highly conformal distribution of the prescribed dose to the target with outstanding normal-tissue sparing, owing to the steep dose gradient at the distal end of the beam. However, uncertainty in everyday patient setup can lead to a discrepancy between the treatment dose distribution and the planning dose distribution. Cone-beam CT (CBCT) can be acquired daily before treatment to evaluate such inter-fraction setup errors, but a further evaluation of the resulting dose distribution error is currently not available. In this study, we developed a novel deep learning-based method to predict relative stopping power (RSP) maps from daily CBCT images to allow online dose calculation, as a step towards adaptive proton radiation therapy. Twenty head-and-neck patients with CT and CBCT images were included for training and testing. The CBCT-based RSP results were evaluated against RSP maps created from CT images as the ground truth. Among all 20 patients, the average mean absolute error between CT-based and CBCT-based RSP was 0.04±0.02, the average mean error was -0.01±0.03, and the average normalized correlation coefficient was 0.97±0.01. The proposed method provides sufficiently accurate RSP map generation from CBCT images, possibly allowing CBCT-guided adaptive treatment planning for proton radiation therapy.
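The three RSP metrics above (mean absolute error, mean error, normalized correlation coefficient) are straightforward voxel-wise statistics. The sketch below assumes the NCC is the Pearson correlation of the flattened volumes, which is a common but not explicitly stated definition; the arrays are placeholders.

```python
# Minimal sketch of MAE, ME, and NCC between CBCT-based and CT-based RSP maps.
import numpy as np

def rsp_metrics(rsp_cbct, rsp_ct):
    diff = rsp_cbct.astype(np.float64) - rsp_ct.astype(np.float64)
    mae = np.mean(np.abs(diff))                                   # mean absolute error
    me = np.mean(diff)                                            # mean (signed) error
    ncc = np.corrcoef(rsp_cbct.ravel(), rsp_ct.ravel())[0, 1]     # assumed NCC definition
    return mae, me, ncc

rng = np.random.default_rng(2)
ct_rsp = rng.uniform(0.0, 1.6, size=(64, 64, 64))
cbct_rsp = ct_rsp + rng.normal(0, 0.04, size=ct_rsp.shape)
print(rsp_metrics(cbct_rsp, ct_rsp))
```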
Radiotherapy treatment is based on 3D anatomical models, which require accurate delineation of organs-at-risk (OARs). In current clinical practice, the OARs are generally delineated from computed tomography (CT). Because of its superior soft-tissue contrast, magnetic resonance imaging (MRI) information can be introduced to improve the quality of these 3D OAR delineations and therefore the treatment plan itself. Manual segmentation of relevant tissue regions from MR images is a tedious and time-consuming procedure that is also subject to inter- and intra-observer variation. In this work, we propose to use a 3D Faster R-CNN to automatically detect the locations of head and neck OARs, and then an attention U-Net to automatically segment the multiple OARs. We tested our method on 15 head and neck cancer patients. The mean Dice similarity coefficients (DSCs) of the esophagus, larynx, mandible, oral cavity, left parotid, right parotid, pharynx, and spinal cord were 84%, 79%, 85%, 89%, 82%, 81%, 85%, and 89%, respectively, demonstrating the segmentation accuracy of the proposed U-Faster-RCNN method. This segmentation technique could be a useful tool to facilitate the routine clinical workflow of H&N radiotherapy.
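The per-organ DSC values quoted above can be tabulated from multi-label segmentations by looping over organ labels. The label indices and organ-name mapping in the sketch below are placeholders, not the paper's actual label convention.

```python
# Minimal sketch of per-organ DSC from integer label maps, assuming a shared label convention.
import numpy as np

ORGANS = {1: "esophagus", 2: "larynx", 3: "mandible", 4: "oral cavity",
          5: "parotid L", 6: "parotid R", 7: "pharynx", 8: "spinal cord"}

def per_organ_dsc(pred_labels, gt_labels):
    """pred_labels, gt_labels: integer 3D label maps sharing the convention above."""
    scores = {}
    for label, name in ORGANS.items():
        p = pred_labels == label
        g = gt_labels == label
        denom = p.sum() + g.sum()
        scores[name] = 2.0 * np.logical_and(p, g).sum() / denom if denom else np.nan
    return scores

pred = np.random.randint(0, 9, size=(32, 32, 32))
gt = np.random.randint(0, 9, size=(32, 32, 32))
print(per_organ_dsc(pred, gt))
```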