Cone-Beam Computed Tomography (CBCT) usually suffers from motion-blurring artifacts when scanning the thoracic region. Consequently, it may lead to inaccuracy in localizing the treatment target and verifying the delivered dose in radiation therapy. Although 4D-CBCT reconstruction can alleviate motion-blurring artifacts, it introduces severe streaking artifacts because of the under-sampled projections used for reconstruction. Aiming to improve the overall quality of 4D-CBCT images, we explored the performance of a deep learning-based technique on 4D-CBCT images. Inspired by the high correlation among 4D-CBCT images, we propose a spatial-temporal plus prior-image-based CNN, a cascade of a spatial CNN (N-net) and a temporal CNN. The spatial CNN follows an encoder-decoder architecture: in the encoder stage, a prior-image channel is paired with an artifact-degraded channel for feature representation, and in the decoder stage the feature maps are fused for image restoration. Next, three consecutive phases of images, each predicted individually by N-net, are stacked together for latent image restoration via the temporal CNN. In this way, the temporal CNN learns the correlation among these images and constructs a residual map covering streaking artifacts and noise for further artifact reduction. Experimental results on both simulated and patient data indicate that the proposed method not only reduces streaking artifacts but also restores the original anatomical features while avoiding the introduction of erroneous tomographic information.
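The cascaded spatial-temporal design described above can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' exact architecture: the class names, channel counts, layer depths, and the choice to refine the middle phase via residual subtraction are all assumptions made for clarity.

```python
import torch
import torch.nn as nn

class SpatialCNN(nn.Module):
    """Hypothetical sketch of the spatial (N-net) stage: a two-channel
    encoder (prior image + artifact-degraded image) whose feature maps
    are fused in a decoder for image restoration."""
    def __init__(self, ch=16):
        super().__init__()
        # Encoder branch for the prior image (channel counts are illustrative).
        self.enc_prior = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
        # Encoder branch for the artifact-degraded image.
        self.enc_degraded = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
        # Decoder fuses the concatenated feature maps and upsamples back.
        self.dec = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 1, 2, stride=2))

    def forward(self, prior, degraded):
        fused = torch.cat([self.enc_prior(prior),
                           self.enc_degraded(degraded)], dim=1)
        return self.dec(fused)

class TemporalCNN(nn.Module):
    """Hypothetical sketch of the temporal stage: three consecutive
    phase images are stacked as channels, and the network predicts a
    residual map of streaking artifacts and noise."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, phases):          # phases: (B, 3, H, W)
        residual = self.net(phases)     # estimated artifact + noise map
        # Assumed refinement step: subtract the residual from the middle phase.
        return phases[:, 1:2] - residual
```

A typical usage would run N-net on each of three neighboring phases and feed the stacked outputs to the temporal network, which is the cascade the abstract describes.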