Poster + Paper
Fast 3D imaging via deep learning for deep inspiration breath-hold lung radiotherapy
4 April 2022
Yang Lei, Zhen Tian, Tonghe Wang, Marian Axente, Justin Roper, Kristin Higgins, Jeffrey D. Bradley, Tian Liu, Xiaofeng Yang
Abstract
Deep inspiration breath hold (DIBH) is a common method for managing respiratory motion in lung radiotherapy (RT) and has been shown to significantly reduce cardiovascular and pulmonary toxicity. Cone-beam CT (CBCT) is used to capture 3D images of the patient in the treatment position, and multiple CBCT scans may be needed to fine-tune the daily patient setup. Furthermore, a single CBCT acquisition (typically ~1 min) often requires multiple breath holds. Because the positions of the tumor and healthy tissue differ between consecutive DIBHs, the inconsistent anatomy across the 2D projection images degrades the quality of the reconstructed CBCT. Moreover, repeated breath holds during the initial setup increase the likelihood that the patient becomes fatigued and less able to reproduce a consistent DIBH during treatment delivery, when it matters most.

To address this important clinical issue, we designed a proof-of-concept study of a novel deep learning-based method that derives a 3D volumetric image from two perpendicular 2D projection images (a kV-MV pair), thereby reducing the number of DIBHs needed for imaging. The proposed method, implemented with a feature-matching network, derives feature maps from the 2D projections and re-aligns them to their projection angles in a Cartesian coordinate system. The 2D feature maps are rendered in 3D space via depth learning by the feature-matching network, and the 3D volume is then derived from the resulting 3D feature map.

We conducted a simulation study using 10 patient cases (110 CT images). Each patient had undergone a 4D CT scan, split into 10 phase bins for motion evaluation, and a DIBH CT scan during simulation, and later received DIBH lung RT at our institution. Ray tracing through each phase-binned CT was used to simulate a 2D kV projection at gantry angle 0° and an MV projection at gantry angle 90°.
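The projection simulation above can be illustrated with a minimal sketch. Note this uses a simplified parallel-beam line integral rather than the full cone-beam ray tracing described in the study, and the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def simulate_orthogonal_projections(ct_volume):
    """Simulate a kV projection at gantry angle 0 degrees and an MV
    projection at 90 degrees by line-integrating attenuation along two
    orthogonal axes of the CT volume.

    ct_volume: NumPy array shaped (z, y, x) of attenuation values.
    This parallel-beam sum is a stand-in for the cone-beam ray tracing
    used in the actual study.
    """
    kv_proj = ct_volume.sum(axis=1)  # integrate along y -> (z, x) image
    mv_proj = ct_volume.sum(axis=2)  # integrate along x -> (z, y) image
    return kv_proj, mv_proj
```

Summing along orthogonal axes yields the two perpendicular 2D views that serve as network inputs; a clinical implementation would instead trace divergent rays from the kV and MV source geometries.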
The orthogonal 2D projections from the 10 phases (200 projections in total) were used to train the network in a patient-specific manner, while the DIBH CT was held out for testing. Within the body and the tumor ROI, respectively, our method achieved a mean absolute error (MAE) of 93.1 HU and 92.5 HU, a peak signal-to-noise ratio (PSNR) of 21.7 dB and 15.6 dB, and a structural similarity index metric (SSIM) of 0.87 and 0.74. These results demonstrate the feasibility and efficacy of the proposed method for 3D imaging from two orthogonal kV and MV 2D projections, offering a potential solution for fast 3D imaging during daily treatment setup of breath-hold lung RT to ensure treatment accuracy and effectiveness.
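The reported MAE and PSNR can be computed as below; this is a minimal NumPy sketch with illustrative function names (SSIM is omitted because it normally requires a windowed implementation such as scikit-image's `structural_similarity`).

```python
import numpy as np

def mae_hu(pred, ref):
    # Mean absolute error in Hounsfield units between the predicted
    # volume and the reference DIBH CT
    return float(np.mean(np.abs(pred - ref)))

def psnr_db(pred, ref, data_range=None):
    # Peak signal-to-noise ratio in dB; data_range defaults to the
    # intensity span of the reference image
    if data_range is None:
        data_range = float(ref.max() - ref.min())
    mse = np.mean((pred - ref) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))
```

Restricting both functions to a masked region (e.g., the body contour or a tumor ROI) reproduces the per-region evaluation used in the study.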
© (2022) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Yang Lei, Zhen Tian, Tonghe Wang, Marian Axente, Justin Roper, Kristin Higgins, Jeffrey D. Bradley, Tian Liu, and Xiaofeng Yang "Fast 3D imaging via deep learning for deep inspiration breath-hold lung radiotherapy", Proc. SPIE 12034, Medical Imaging 2022: Image-Guided Procedures, Robotic Interventions, and Modeling, 120342N (4 April 2022); https://doi.org/10.1117/12.2611813
KEYWORDS: 3D image processing, Computed tomography, Tumors, Lung, Stereoscopy, 4D CT imaging, Radiotherapy
