Lung cancer remains one of the most common cancers worldwide. In radiation therapy, high doses of radiation are used to destroy tumors. Adapting radiotherapy to breathing patterns has always been a major concern when dealing with tumors in the thoracic or upper abdominal regions. Precise estimation of the respiratory signal minimizes damage to the healthy tissues surrounding the tumor and prevents misrepresentation of the target location. The main objective of this work is to develop a method to extract the breathing signal directly from a given sequence of cone-beam computed tomography (CBCT) projections without depending on external devices such as a spirometer, a pressure belt, or implanted infrared markers. The proposed method applies optical flow to track the movement of pixels between each pair of successive CBCT projection images across the entire set of projections. As the optical flow operation results in a high-dimensional dataset, dimensionality reduction using linear and kernel-based principal component analysis (PCA) is applied to the optical flow dataset to transform it into a lower-dimensional dataset, ensuring that only the most distinctive components are retained. The proposed method was tested on XCAT phantom datasets simulating cases of regular and irregular breathing patterns and cases where the diaphragm was only partially visible in certain projection images. The breathing signal extracted using the proposed method was compared to the ground-truth signal. Results showed that the extracted signal correlated well with the ground-truth signal, with a mean phase shift not exceeding 1.5 projections in all cases.
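As an illustration of the pipeline described above, the following Python sketch tracks pixel motion between successive projection images with dense optical flow and reduces the resulting high-dimensional flow data to a one-dimensional breathing signal with linear or kernel PCA. The Farneback flow algorithm and its parameters, the use of OpenCV and scikit-learn, and the choice of the first principal component as the breathing signal are illustrative assumptions, not the exact settings used in this work.

```python
# Minimal sketch: optical flow between successive CBCT projections,
# followed by (kernel) PCA to extract a one-dimensional breathing signal.
import numpy as np
import cv2
from sklearn.decomposition import PCA, KernelPCA

def extract_breathing_signal(projections, use_kernel_pca=False):
    """projections: array of shape (n_projections, H, W), grayscale uint8."""
    flows = []
    for prev, curr in zip(projections[:-1], projections[1:]):
        # Dense optical flow between each pair of successive projections
        # (Farneback parameters chosen for illustration only).
        flow = cv2.calcOpticalFlowFarneback(
            prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow.ravel())            # flatten (H, W, 2) -> 2*H*W
    X = np.asarray(flows)                      # high-dimensional flow dataset

    # Dimensionality reduction: keep only the most distinctive component.
    if use_kernel_pca:
        reducer = KernelPCA(n_components=1, kernel="rbf")
    else:
        reducer = PCA(n_components=1)
    signal = reducer.fit_transform(X).ravel()
    return signal                              # one sample per projection pair
```

The returned signal can then be compared against a ground-truth respiratory trace, for example by measuring the phase shift between their extrema, as done in the evaluation on the XCAT phantom datasets.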
A method is developed to build patient-specific motion models based on 4DCBCT images taken at treatment time and to use them to generate 3D time-varying images (referred to as 3D fluoroscopic images). Motion models are built by applying Principal Component Analysis (PCA) to the displacement vector fields (DVFs) estimated by deformable image registration of each 4DCBCT phase relative to a reference phase. The resulting PCA coefficients are optimized iteratively by comparing 2D projections captured at treatment time with projections estimated using the motion model. The optimized coefficients are used to generate 3D fluoroscopic images. The method is evaluated using anthropomorphic physical and digital phantoms reproducing real patient trajectories. For the physical phantom datasets, the average tumor localization error (TLE) (95th percentile) over two datasets was 0.95 (2.2) mm. For the digital phantoms, assuming superior 4DCT image quality and no anatomic or positioning disparities between 4DCT and treatment time, the average TLE and image intensity error (IIE) over six datasets were smaller using 4DCT-based motion models. When simulating positioning disparities and tumor baseline shifts at treatment time relative to the planning 4DCT, the average TLE (95th percentile) and IIE were 4.2 (5.4) mm and 0.15 using 4DCT-based models, versus 1.2 (2.2) mm and 0.10 using 4DCBCT-based ones, respectively. 4DCBCT-based models were thus shown to perform better when there are positioning and tumor baseline shift uncertainties at treatment time. Generating 3D fluoroscopic images from 4DCBCT-based motion models can therefore capture both inter- and intra-fraction anatomical changes during treatment.
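The following Python sketch illustrates the workflow described above under simplifying assumptions: a PCA motion model is built from precomputed DVFs, and the PCA coefficients are optimized so that a projection of the deformed reference volume matches a 2D projection measured at treatment time. The parallel-ray sum used as the forward projector, the warping scheme, and the Powell optimizer are placeholders for the actual cone-beam geometry and iterative optimization used in the method.

```python
# Hedged sketch of a PCA-based motion model and coefficient optimization,
# assuming DVFs were already computed by deformable registration of each
# 4DCBCT phase to a reference phase.
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import map_coordinates

def build_pca_motion_model(dvfs, n_components=2):
    """dvfs: (n_phases, 3, Z, Y, X) displacement vector fields."""
    X = dvfs.reshape(dvfs.shape[0], -1)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]             # mean DVF and eigenvectors

def warp(reference, dvf):
    """Deform the reference volume with a DVF (linear interpolation)."""
    grid = np.indices(reference.shape).astype(float)
    return map_coordinates(reference, grid + dvf, order=1, mode="nearest")

def forward_project(volume):
    """Crude stand-in for the cone-beam projector: sum along one axis."""
    return volume.sum(axis=1)

def estimate_fluoroscopic_image(reference, mean, components, measured_proj):
    """Optimize PCA coefficients so the projected deformed reference volume
    matches the 2D projection measured at treatment time, then return the
    deformed volume as the 3D fluoroscopic image estimate."""
    def cost(w):
        dvf = (mean + w @ components).reshape((3,) + reference.shape)
        est_proj = forward_project(warp(reference, dvf))
        return np.sum((est_proj - measured_proj) ** 2)
    res = minimize(cost, x0=np.zeros(components.shape[0]), method="Powell")
    dvf = (mean + res.x @ components).reshape((3,) + reference.shape)
    return warp(reference, dvf)
```

In this simplified form, the tumor localization error could be evaluated by comparing tumor positions in the estimated and ground-truth volumes, analogous to the TLE and IIE metrics reported above.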