Respiratory-correlated cone-beam computed tomography (4D-CBCT) is an emerging image-guided radiation therapy (IGRT) technique used to account for the uncertainties caused by respiratory-induced motion in the radiotherapy treatment of tumors in the thoracic and upper-abdominal regions. In 4D-CBCT, projections are sorted into bins based on their respiratory phase, and a 3D image is reconstructed from each bin. However, the quality of the resulting 4D-CBCT images is limited by the streaking artifacts that result from having an insufficient number of projections in each bin. In this work, an interpolation method based on convolutional neural networks (CNNs) is proposed to generate new in-between projections and thereby increase the overall number of projections used in 4D-CBCT reconstruction. Projections simulated using the XCAT phantom were used to assess the proposed method. The interpolated projections were compared to the corresponding original projections by calculating the peak signal-to-noise ratio (PSNR), root mean square error (RMSE), and structural similarity index measure (SSIM). Moreover, the results of the proposed method were compared to those of standard interpolation methods, namely linear, spline, and registration-based methods. The interpolated projections produced by the proposed method had an average PSNR, RMSE, and SSIM of 35.939, 4.115, and 0.968, respectively, surpassing the results achieved by the existing interpolation methods on the same dataset. In summary, this work demonstrates the feasibility of using CNN-based methods to generate in-between projections and shows their potential benefit for 4D-CBCT reconstruction.
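For concreteness, the following is a minimal Python sketch of how such a projection-wise comparison could be computed, assuming each projection is a 2D NumPy array; the function name and the data-range handling are illustrative and not taken from the paper.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_projection(original, interpolated, data_range=None):
    """Compare an interpolated projection against its ground-truth original
    using the three metrics reported above (PSNR, RMSE, SSIM)."""
    if data_range is None:
        # Intensity range of the reference projection (illustrative choice).
        data_range = float(original.max() - original.min())
    rmse = np.sqrt(np.mean((original.astype(float) - interpolated) ** 2))
    psnr = peak_signal_noise_ratio(original, interpolated, data_range=data_range)
    ssim = structural_similarity(original, interpolated, data_range=data_range)
    return psnr, rmse, ssim
```

In practice these per-projection values would be averaged over all interpolated projections, which is how the summary figures above are typically obtained.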
Lung cancer continues to be the most common type of cancer worldwide. In radiation therapy, high doses of radiation are used to destroy tumors. Adapting radiotherapy to breathing patterns has always been a major concern when dealing with tumors in the thoracic or upper-abdominal regions. Precise estimation of the respiratory signal minimizes damage to the healthy tissues surrounding the tumor and prevents misrepresentation of the target location. The main objective of this work is to develop a method that extracts the breathing signal directly from a given sequence of cone-beam computed tomography (CBCT) projections, without depending on any external devices such as a spirometer, a pressure belt, or implanted infrared markers. The proposed method uses optical flow to track the movement of pixels between each pair of successive CBCT projection images through the entire set of projections. Because the optical flow operation results in a high-dimensional dataset, dimensionality reduction using linear and kernel-based principal component analysis (PCA) is applied to the optical flow dataset to transform it into a lower-dimensional dataset in which only the most distinctive components are retained. The proposed method was tested on XCAT phantom datasets simulating cases of regular and irregular breathing patterns, as well as cases where the diaphragm was only partially visible in certain projection images. The extracted breathing signal was compared to the ground-truth signal. Results showed that the extracted signal correlated well with the ground-truth signal, with a mean phase shift not exceeding 1.5 projections in all cases.
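A minimal sketch of this optical-flow-plus-PCA pipeline is given below, assuming the projections are already converted to 8-bit grayscale arrays; the Farneback flow algorithm and its parameters are stand-ins, since the abstract does not specify which optical flow method or PCA settings were used.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

def extract_breathing_signal(projections, use_kernel_pca=False):
    """projections: (N, H, W) uint8 stack of successive CBCT projections.
    Returns a 1D breathing surrogate with N-1 samples."""
    flows = []
    for prev, curr in zip(projections[:-1], projections[1:]):
        # Dense optical flow between successive projections (Farneback
        # method with common default parameters, used here illustratively).
        flow = cv2.calcOpticalFlowFarneback(
            prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow.ravel())  # flatten (H, W, 2) -> one long vector
    X = np.asarray(flows)           # high-dimensional flow dataset
    # Linear or kernel PCA reduces the flow vectors to one component,
    # taken as the breathing surrogate.
    reducer = (KernelPCA(n_components=1, kernel="rbf")
               if use_kernel_pca else PCA(n_components=1))
    return reducer.fit_transform(X).ravel()
```

The first principal component is used here because, per the abstract, the goal is to keep only the most distinctive component of the motion, which for thoracic projections is dominated by respiration.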
A method is developed to build patient-specific motion models based on 4DCBCT images acquired at treatment time and to use them to generate 3D time-varying images (referred to as 3D fluoroscopic images). The motion models are built by applying principal component analysis (PCA) to the displacement vector fields (DVFs) estimated by deformably registering each phase of the 4DCBCT to a reference phase. The resulting PCA coefficients are optimized iteratively by comparing 2D projections captured at treatment time with projections estimated using the motion model, and the optimized coefficients are then used to generate the 3D fluoroscopic images. The method is evaluated using anthropomorphic physical and digital phantoms reproducing real patient trajectories. For the physical phantom datasets, the average (95th percentile) tumor localization error (TLE) over two datasets was 0.95 (2.2) mm. For the digital phantoms, assuming superior 4DCT image quality and no anatomic or positioning disparities between 4DCT and treatment time, the average TLE and image intensity error (IIE) in six datasets were smaller using 4DCT-based motion models. When positioning disparities and tumor baseline shifts at treatment time were simulated relative to the planning 4DCT, the average TLE (95th percentile) and IIE were 4.2 (5.4) mm and 0.15 using 4DCT-based models, versus 1.2 (2.2) mm and 0.10 using 4DCBCT-based ones. The 4DCBCT-based models thus perform better when there are positioning and tumor-baseline-shift uncertainties at treatment time, and generating 3D fluoroscopic images from 4DCBCT-based motion models can capture both inter- and intra-fraction anatomical changes during treatment.
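The core of such a model is the PCA decomposition of the DVFs, sketched below under stated assumptions: DVFs are dense NumPy arrays, the mode count is illustrative, and the iterative coefficient optimization against measured 2D projections (which requires a forward projector) is omitted.

```python
import numpy as np

def build_motion_model(dvfs, n_components=3):
    """dvfs: (n_phases, Z, Y, X, 3) displacement vector fields, one per
    4DCBCT phase, each registered to a common reference phase."""
    X = dvfs.reshape(len(dvfs), -1)        # flatten each DVF to a row vector
    mean_dvf = X.mean(axis=0)
    # SVD of the centered DVFs yields the principal motion modes.
    _, _, Vt = np.linalg.svd(X - mean_dvf, full_matrices=False)
    return mean_dvf, Vt[:n_components]

def synthesize_dvf(mean_dvf, modes, coeffs, shape):
    """Model any instantaneous DVF as the mean plus a weighted sum of modes.
    The weights (PCA coefficients) are what the method optimizes at
    treatment time by matching simulated to measured 2D projections."""
    return (mean_dvf + coeffs @ modes).reshape(shape)
```

Warping the reference 3D image with the synthesized DVF at each time point is what produces the 3D fluoroscopic image sequence described above.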