This paper proposes a method for segmenting the infected area in clinical chest CT volumes of COVID-19 (Coronavirus Disease 2019) patients. COVID-19 spread globally from 2019 to 2020, causing a worldwide health crisis. Estimating the severity of COVID-19 from the infected area segmented in a patient's clinical computed tomography (CT) volume is highly desirable. Given the lung field extracted from a COVID-19 clinical CT volume as input, we aim at an automated approach that segments the infected area. Since labeling infected areas for supervised segmentation requires considerable labor, we propose a segmentation method that needs no labels of the infected area. Our method builds on a baseline method that combines representation learning and clustering. However, the baseline method tends to mis-segment anatomical structures with high HU (Hounsfield unit) intensity, such as blood vessels, as infected area. To solve this problem, we propose a novel pre-processing method that transforms high-intensity anatomical structures into low-intensity ones, preventing them from being mis-segmented as infected area. Given the lung field extracted from a CT volume, our method segments it into normal tissue, GGO (ground-glass opacity), and consolidation. The method consists of three steps: 1) pulmonary blood vessel segmentation, 2) inpainting of pulmonary blood vessels based on the vessel segmentation result, and 3) segmentation of the infected area. Experimental results showed that, compared to the baseline method, our method improves segmentation accuracy, especially around tubular structures such as blood vessels, raising the normalized mutual information score from 0.280 (the baseline method) to 0.394.
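The vessel-suppression pre-processing described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the HU threshold, the naive median-fill "inpainting", and the function name `suppress_vessels` are all assumptions introduced for illustration.

```python
import numpy as np

def suppress_vessels(ct, lung_mask, vessel_hu=-200.0):
    """Sketch of the pre-processing idea: detect high-intensity
    structures inside the lung (approximated here by a simple HU
    threshold) and replace them with a low, lung-like intensity so
    that a downstream clustering step does not confuse vessels with
    infected tissue. The threshold and fill rule are illustrative."""
    # crude stand-in for step 1 (vessel segmentation): HU threshold
    vessels = (ct > vessel_hu) & lung_mask
    # crude stand-in for step 2 (inpainting): fill vessel voxels with
    # the median intensity of the remaining lung tissue
    inpainted = ct.copy()
    normal = ct[lung_mask & ~vessels]
    fill = np.median(normal) if normal.size else vessel_hu
    inpainted[vessels] = fill
    return inpainted, vessels
```

The returned `inpainted` volume would then be fed to step 3, the unsupervised segmentation into normal tissue, GGO, and consolidation.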
This paper introduces a multi-modality loss function for GAN-based super-resolution that maintains image structure and intensity when training on an unpaired dataset of clinical CT and micro-CT (μCT) volumes. Precise non-invasive diagnosis of lung cancer mainly relies on 3D multidetector computed tomography (CT) data. Meanwhile, μCT can image resected lung specimens at a resolution of 50 μm or finer; however, μCT scanning cannot be applied to living patients. To obtain highly detailed information, such as the cancer invasion area, from pre-operative clinical CT volumes of lung cancer patients, super-resolution (SR) of clinical CT volumes to the μCT level is one possible substitute. While most SR methods require paired low- and high-resolution images for training, precisely paired clinical CT and μCT volumes are infeasible to obtain. We therefore propose unpaired SR approaches for clinical CT using μCT images, based on unpaired image-translation methods such as CycleGAN and UNIT. Since clinical CT and μCT differ greatly in structure and intensity, directly applying GAN-based unpaired image translation to super-resolution tends to generate arbitrary images. To solve this problem, we propose a new loss function, called the multi-modality loss function, that maintains the similarity between input images and the corresponding output images in the super-resolution task. Experimental results demonstrated that the proposed loss function enabled CycleGAN and UNIT to super-resolve clinical CT images of lung cancer patients to μCT-level resolution, whereas the original CycleGAN and UNIT failed.
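One plausible way to realize such an input-output similarity constraint can be sketched as follows. This is a hedged sketch, not the paper's actual loss: the average-pooling back-projection, the per-image normalization used to bridge the clinical-CT/μCT intensity gap, and the name `multimodality_loss` are all assumptions introduced for illustration.

```python
import numpy as np

def multimodality_loss(lr_input, sr_output, scale=4, eps=1e-8):
    """Illustrative similarity term for unpaired SR: average-pool the
    super-resolved output back to the clinical-CT grid, normalize both
    images to zero mean / unit variance so the modality intensity gap
    does not dominate, and take an L1 distance. The exact formulation
    in the paper may differ; this shows only the general idea."""
    h, w = lr_input.shape
    # project the SR output back to the low-resolution grid
    pooled = sr_output.reshape(h, scale, w, scale).mean(axis=(1, 3))

    def z(x):
        # per-image standardization to compare structure, not raw HU
        return (x - x.mean()) / (x.std() + eps)

    return float(np.mean(np.abs(z(pooled) - z(lr_input))))
```

In training, a term like this would be added to the usual adversarial (and, for CycleGAN, cycle-consistency) losses so that the generator cannot drift toward arbitrary μCT-like textures unrelated to the input.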