We aimed to investigate the feasibility of predicting pleural invasion or adhesion of lung cancers with dynamic chest radiography (DCR), using a four-dimensional (4D) extended cardiac-torso (XCAT) computational phantom. An XCAT phantom of an adult man (50th percentile in height and weight) with forced breathing and a normal heart rate was generated. To simulate lung cancers with and without pleural invasion, 30-mm diameter tumor spheres were inserted into the right lower lung lobe of the virtual patient. Subsequently, the virtual patient was imaged using an X-ray simulator in the posteroanterior and oblique directions, and bone suppression (BS) images were then created. The measurement points (tumor, rib, and diaphragm) were automatically tracked on the projection images by template matching. We calculated five quantitative parameters related to the movement distance and direction of the targeted tumor and evaluated the ability of these DCR parameters to distinguish between patients with and without pleural invasion. Precise tracking of the targeted tumor was achieved on the BS images without any interruption by rib shadows. The movement distance was an effective parameter for evaluating tumor invasion; for the other parameters, however, similar results were obtained for lung cancers with and without pleural invasion, owing to the lack of three-dimensional information in the projection images. The oblique views were useful for evaluating the space between the chest wall and the moving tumor. DCR could help distinguish between patients with and without pleural invasion based on the two-dimensional movement distance in both oblique and posteroanterior projection views.
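The landmark tracking described above relies on template matching across successive projection frames. A minimal sketch of normalized cross-correlation template matching is shown below; it is illustrative only (the synthetic frame, the landmark pattern, and the function name are assumptions, not the authors' implementation):

```python
import numpy as np

def match_template(image, template):
    """Return the (row, col) of the best normalized cross-correlation
    match of `template` in `image`. Illustrative sketch of the kind of
    template matching used to track landmarks (tumor, rib, diaphragm)
    across DCR frames; not the study's actual code."""
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Synthetic example: a small cross-shaped "tumor" placed at (12, 20)
pattern = np.array([[0, 1, 0], [1, 2, 1], [0, 1, 0]], dtype=float)
frame = np.zeros((40, 40))
frame[12:15, 20:23] = pattern
print(match_template(frame, pattern))  # -> (12, 20)
```

Tracking the matched position frame by frame yields the per-frame landmark coordinates from which two-dimensional movement distances and directions can then be computed.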
The purpose of this study was to develop a deep-learning-based lung segmentation method for dynamic chest radiography and to assess its clinical utility for pulmonary function assessment. Maximum inhale and exhale images were selected from the dynamic chest radiographs of 214 cases, including 150 images acquired during respiration. In total, 534 images (2 to 4 images per case) with annotations were prepared for this study. Three hundred images were fed into a fully convolutional neural network (FCNN) architecture to train a deep learning model for lung segmentation, and 234 images were used for testing. To reduce misrecognition of the lung, post-processing methods based on time-series information were applied to the resulting images. The change rate of the lung area was calculated throughout all frames, and its clinical utility was assessed in patients with pulmonary diseases. The Sørensen-Dice coefficients between the segmentation results and the gold standard were 0.94 in the inhale phase and 0.95 in the exhale phase, respectively. There were some false recognitions (214/234), but 163 were eliminated by our post-processing. The measurement of the lung area and its respiratory change was useful for the evaluation of lung conditions; prolonged expiration in obstructive pulmonary diseases could be detected as a reduced change rate of the lung area in the exhale phase. The semantic segmentation deep learning approach allows for the sequential lung segmentation of dynamic chest radiographs with high accuracy (94%) and is useful for the evaluation of pulmonary function.
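The segmentation accuracy above is reported as the Sørensen-Dice coefficient, which measures overlap between a predicted binary lung mask and the gold-standard annotation as 2|A∩B| / (|A| + |B|). A minimal sketch of that metric (the masks and function name are illustrative assumptions, not data from the study):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Sorensen-Dice overlap between two binary masks:
    2 * |A intersect B| / (|A| + |B|). Returns 1.0 for two empty masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, gt).sum() / denom

# Toy 10x10 masks: two 6x6 squares offset by one pixel
pred = np.zeros((10, 10), dtype=int)
pred[2:8, 2:8] = 1        # 36 pixels
gt = np.zeros((10, 10), dtype=int)
gt[3:9, 3:9] = 1          # 36 pixels, 25-pixel overlap
print(round(dice_coefficient(pred, gt), 3))  # -> 0.694
```

Computing this coefficient per frame, alongside the segmented lung area itself, supports the frame-by-frame evaluation of the change rate of the lung area described above.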