This paper presents a method for visualizing intestine regions (the small and large intestine) and their stenosed parts caused by ileus from CT volumes. Since it is difficult for non-expert clinicians to find stenosed parts, the intestine and its stenosed parts should be visualized intuitively. However, intestine regions in ileus cases are quite hard to segment: the inside of the intestine is filled with liquids whose intensities are similar to those of the intestinal wall on 3D CT volumes. The proposed method segments intestine regions with a 3D FCN (3D U-Net) trained by a weak-annotation approach. Weak annotation makes it possible to train the 3D U-Net from a small number of manually traced label images, freeing us from preparing many annotation labels for the long and winding intestine. Each intestine segment is then volume-rendered and colored according to the distance from its endpoint. Stenosed parts (disjoint points of an intestine segment) can be easily identified in such a visualization. In our experiments, stenosed parts were intuitively visualized as endpoints of segmented regions, colored red or blue.
Proc. SPIE 11314, Medical Imaging 2020: Computer-Aided Diagnosis
KEYWORDS: Visual analytics, Data modeling, Visualization, Medical research, Feature extraction, Image filtering, Image classification, Convolution, Solid modeling, RGB color model
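To make the distance-based coloring step described in the abstract above concrete, here is a minimal sketch, assuming a binary intestine mask and a known endpoint voxel. The function names (`geodesic_distance`, `distance_to_rgb`) are illustrative, not the authors' implementation, and a pure-Python BFS is slow on full CT volumes; the sketch only shows the idea.

```python
import numpy as np
from collections import deque

def geodesic_distance(mask: np.ndarray, endpoint: tuple) -> np.ndarray:
    """Breadth-first geodesic distance (in voxels) from `endpoint`,
    restricted to the foreground of a binary 3D mask."""
    dist = np.full(mask.shape, -1, dtype=np.int32)  # -1 marks background/unvisited
    dist[endpoint] = 0
    queue = deque([endpoint])
    offsets = [o for o in np.ndindex(3, 3, 3) if o != (1, 1, 1)]  # 26-neighbourhood
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz - 1, y + dy - 1, x + dx - 1)
            if all(0 <= n[i] < mask.shape[i] for i in range(3)) \
                    and mask[n] and dist[n] < 0:
                dist[n] = dist[z, y, x] + 1
                queue.append(n)
    return dist

def distance_to_rgb(dist: np.ndarray) -> np.ndarray:
    """Blue-to-red transfer function over the normalized distance,
    usable as a volume-rendering colour map."""
    d = np.clip(dist, 0, None).astype(np.float32)
    d /= max(float(d.max()), 1.0)
    rgb = np.zeros(dist.shape + (3,), dtype=np.float32)
    rgb[..., 0] = d        # far from the endpoint -> red
    rgb[..., 2] = 1.0 - d  # near the endpoint -> blue
    rgb[dist < 0] = 0.0    # background stays unrendered
    return rgb
```

Under this scheme, a voxel far from the chosen endpoint renders red and a nearby one blue, so the disjoint endpoints of a segment (stenosis candidates) stand out as strongly colored tips.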
The purpose of this paper is to present a method for visualising decision-reasoning regions in computer-aided pathological pattern diagnosis of endocytoscopic images. An endocytoscope enables direct observation of cells and their nuclei on the colon wall at up to 500-times ultramagnification. For this new modality, a computer-aided pathological diagnosis system is strongly required to support non-expert physicians. To develop such a CAD system, we adopt a convolutional neural network (CNN) as the classifier of endocytoscopic images. In addition to this classification function, we develop, based on an analysis of the CNN weights, a filter function that visualises decision-reasoning regions on classified images. This visualisation function helps novice endocytoscopists develop their understanding of pathological patterns in endocytoscopic images for accurate endocytoscopic diagnosis. In numerical experiments, our CNN model achieved 90% classification accuracy. Furthermore, the results show that the decision-reasoning regions suggested by our filter function contain pit patterns that are characteristic in real endocytoscopic diagnosis.
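The abstract does not detail the exact weight-analysis filter; a class-activation-map (CAM)-style computation is one common way to derive decision-reasoning regions from CNN weights, sketched below under that assumption (PyTorch, illustrative names), not as the authors' filter.

```python
import torch
import torch.nn.functional as F

def cam_heatmap(feature_maps: torch.Tensor,
                fc_weight: torch.Tensor,
                class_idx: int) -> torch.Tensor:
    """feature_maps: (C, H, W) activations of the last conv layer
    (e.g. captured with a forward hook); fc_weight: (num_classes, C)
    weights of the final linear layer."""
    w = fc_weight[class_idx]                         # (C,)
    cam = torch.einsum('c,chw->hw', w, feature_maps)
    cam = F.relu(cam)                                # keep positive evidence only
    cam = cam - cam.min()
    cam = cam / cam.max().clamp(min=1e-8)            # normalize to [0, 1]
    return cam  # upsample to image size and overlay as a heatmap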
Micro-CT is a nondestructive scanning device capable of capturing three-dimensional structures at the µm level. With the spread of this device in medical fields, it is expected to bring further understanding of human anatomy through the analysis of three-dimensional microstructure in volumes of in vivo specimens captured by micro-CT. In microstructure analysis of the lung, methods for extracting surface structures, including the interlobular septa and the visceral pleura, have not been commonly studied. In this paper, we introduce a method to extract sheet structures such as the interlobular septa and the visceral pleura from micro-CT volumes. The proposed method consists of two steps: a Hessian-analysis-based method for sheet structure extraction, and a Radial Structure Tensor combined with roundness evaluation for hollow-tube structure extraction. We applied the proposed method to complex phantom data and a medical lung micro-CT volume. The experiments confirmed extraction of the interlobular septa from the medical volume.
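The Hessian-analysis step can be sketched as follows, using one standard sheetness measure built from second-order Gaussian derivatives; the exact enhancement function and parameters in the paper may differ, and the names and the `alpha` weight here are assumptions.

```python
import numpy as np
from scipy import ndimage

def sheetness(volume: np.ndarray, sigma: float = 2.0,
              alpha: float = 0.5) -> np.ndarray:
    """Hessian-eigenvalue sheet filter: a bright sheet has one strongly
    negative eigenvalue (across the sheet) and two near-zero ones."""
    v = volume.astype(np.float32)
    # Second-order Gaussian derivatives give the Hessian components.
    H = np.empty(v.shape + (3, 3), dtype=np.float32)
    for i in range(3):
        for j in range(i, 3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1
            H[..., i, j] = H[..., j, i] = ndimage.gaussian_filter(
                v, sigma, order=order)
    eig = np.linalg.eigvalsh(H)  # ascending per voxel: l1 <= l2 <= l3
    l1, l2, l3 = eig[..., 0], eig[..., 1], eig[..., 2]
    # Sheet response: |l1| large while l2, l3 stay small; l1 < 0 for bright sheets.
    s = np.abs(l1) * np.exp(-(l2**2 + l3**2)
                            / (2.0 * (alpha * np.abs(l1) + 1e-8)**2))
    s[l1 >= 0] = 0.0
    return s
```

In practice this filter is run at several `sigma` scales and the per-voxel maximum is kept, so that septa of varying thickness all respond.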
Measurement of polyp size is an essential task in colon cancer screening, since polyp-size information plays a critical role in decision-making during colonoscopy. However, estimating a polyp size from a single colonoscopic view without a measurement device is quite difficult even for expert physicians. To overcome this difficulty, automated size-estimation techniques are desirable in clinical scenes. This paper presents a polyp-size classification method that works from a single colonoscopic image. The proposed method estimates depth information from a single colonoscopic image with a trained model and utilises the estimated information for classification. The depth-estimation model is obtained by deep learning on colonoscopic videos. Experimental results show binary and trinary polyp-size classification with 79% and 74% accuracy, respectively, from a single still image of a colonoscopic movie.
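The abstract does not spell out how the estimated depth is turned into a size class. A common construction, sketched below with a pinhole camera model, converts the polyp's pixel extent to millimetres using the predicted depth; the function names, the focal length parameter, and the 5 mm / 10 mm cut-offs (taken from common clinical size bins) are assumptions, not the paper's.

```python
def pixel_extent_to_mm(pixel_extent: float, depth_mm: float,
                       focal_px: float) -> float:
    """Pinhole camera model: physical size = pixel extent * depth / focal length."""
    return pixel_extent * depth_mm / focal_px

def classify_polyp_size(size_mm: float, thresholds=(5.0, 10.0)) -> int:
    """Trinary classification into assumed clinical bins:
    diminutive (<5 mm), small (5-10 mm), large (>=10 mm)."""
    if size_mm < thresholds[0]:
        return 0   # diminutive
    if size_mm < thresholds[1]:
        return 1   # small
    return 2       # large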
This paper presents a new classification method for endocytoscopic images. Endocytoscopy is a new endoscopic modality that enables both conventional endoscopic observation and ultramagnified observation at the cellular level. These ultramagnified views (endocytoscopic images) make it possible to perform pathological diagnosis from endoscopic views of polyps alone during colonoscopy. However, endocytoscopic image diagnosis requires extensive experience from physicians. An automated pathological diagnosis system is therefore required to prevent the overlooking of neoplastic lesions in endocytoscopy. For this purpose, we propose a new automated classification method that distinguishes neoplastic from non-neoplastic endocytoscopic images. This method consists of two classification steps. In the first step, we classify an input image with a support vector machine, and forward the image to the second step if the confidence of this classification is low. In the second step, we classify the forwarded image with a convolutional neural network, and reject the input image if the confidence of this classification is also low. We experimentally evaluate the classification performance of the proposed method, using about 16,000 colorectal endocytoscopic images for training and 4,000 for testing. The results show that the proposed method achieves a high sensitivity of 93.4% with a small rejection rate of 9.3%, even on difficult test data.
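The two-step cascade itself is described precisely enough to sketch; the confidence thresholds and the scikit-learn-style API below are illustrative assumptions, not the paper's values.

```python
import numpy as np

REJECT = -1

def cascade_classify(features, image, svm, cnn_predict_proba,
                     svm_thresh=0.8, cnn_thresh=0.8):
    """svm: a classifier exposing predict_proba (scikit-learn style);
    cnn_predict_proba: callable mapping an image to a class-probability vector."""
    p_svm = svm.predict_proba(features.reshape(1, -1))[0]
    if p_svm.max() >= svm_thresh:        # first stage is confident
        return int(np.argmax(p_svm))
    p_cnn = cnn_predict_proba(image)     # forward to the second stage
    if p_cnn.max() >= cnn_thresh:
        return int(np.argmax(p_cnn))
    return REJECT                        # both stages unsure: defer to a physician
```

The rejection branch is what keeps sensitivity high: images neither stage is confident about are handed back to the physician rather than force-labeled.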