Proc. SPIE. 11314, Medical Imaging 2020: Computer-Aided Diagnosis
KEYWORDS: Visual analytics, Data modeling, Visualization, Medical research, Feature extraction, Image filtering, Image classification, Convolution, Solid modeling, RGB color model
The purpose of this paper is to present a method for visualising decision-reasoning regions in computer-aided pathological pattern diagnosis of endocytoscopic images. The endocytoscope enables direct observation of cells and their nuclei on the colon wall at up to 500-times ultramagnification. For this new modality, a computer-aided diagnosis (CAD) system is strongly required to support non-expert physicians. To develop such a CAD system, we adopt a convolutional neural network (CNN) as the classifier of endocytoscopic images. In addition to this classification function, we develop, based on an analysis of the CNN weights, a filter function that visualises decision-reasoning regions on classified images. This visualisation function helps novice endocytoscopists develop their understanding of pathological patterns in endocytoscopic images for accurate endocytoscopic diagnosis. In numerical experiments, our CNN model achieved 90% classification accuracy. Furthermore, experimental results show that the decision-reasoning regions suggested by our filter function contain pit patterns that are characteristic in real endocytoscopic diagnosis.
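The abstract does not specify the exact form of the filter function, but a weight-based visualisation of decision-reasoning regions is commonly computed as a class-activation-map-style weighted sum of the last convolutional feature maps. The sketch below assumes that setup; the function name, array shapes, and normalisation are illustrative, not taken from the paper.

```python
import numpy as np

def decision_region_map(feature_maps, class_weights):
    """CAM-style sketch: weight each channel of the final conv layer by its
    connection to the target class, sum, and normalise to a [0, 1] heatmap.

    feature_maps : (C, H, W) activations from the last convolutional layer
    class_weights: (C,) weights linking each channel to the predicted class
    """
    # weighted sum over the channel axis -> (H, W) evidence map
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))
    cam = np.maximum(cam, 0.0)      # keep only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()       # normalise for overlay on the image
    return cam

# toy example: 4 channels of 8x8 activations
fmaps = np.random.rand(4, 8, 8)
w = np.array([0.5, -0.2, 0.9, 0.1])
heatmap = decision_region_map(fmaps, w)
print(heatmap.shape)  # (8, 8)
```

The normalised heatmap can then be resized to the input resolution and blended over the endocytoscopic image to highlight the regions that drove the classification.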
Measurement of polyp size is an essential task in colon cancer screening, since polyp-size information plays a critical role in clinical decisions during colonoscopy. However, estimating the size of a polyp from a single colonoscopic view without a measurement device is quite difficult, even for expert physicians. To overcome this difficulty, automated size-estimation techniques are desirable in clinical practice. This paper presents a polyp-size classification method that uses a single colonoscopic image. Our proposed method estimates depth information from the image with a trained model and utilises the estimated information for the classification. In our method, the depth-estimation model is obtained by deep learning on colonoscopic videos. Experimental results show binary and ternary polyp-size classification with 79% and 74% accuracy, respectively, from a single still image of a colonoscopic movie.
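One simple way depth information supports size classification is pinhole-camera back-projection: the physical diameter is the apparent pixel diameter scaled by depth. The sketch below assumes that geometry; the focal length, the 5 mm / 10 mm class thresholds, and all function names are illustrative assumptions, not details from the paper.

```python
def estimate_polyp_size_mm(pixel_diameter_px, depth_mm, focal_length_px):
    """Pinhole back-projection: physical size = apparent size * depth / focal length.
    All parameter names and units are illustrative assumptions."""
    return pixel_diameter_px * depth_mm / focal_length_px

def classify_size(size_mm, thresholds=(5.0, 10.0)):
    """Ternary classes using common clinical cut-offs (assumed, not from the paper):
    diminutive (<5 mm), small (5-10 mm), large (>=10 mm)."""
    small, large = thresholds
    if size_mm < small:
        return "diminutive"
    if size_mm < large:
        return "small"
    return "large"

# a polyp spanning 120 px at an estimated 30 mm depth, focal length 600 px
size = estimate_polyp_size_mm(120, 30.0, 600.0)   # 6.0 mm
print(classify_size(size))  # "small"
```

The binary variant simply drops one threshold (e.g. a single 10 mm cut-off separating small from large polyps).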
This paper presents a new classification method for endocytoscopic images. The endocytoscope is a new type of endoscope that enables both conventional endoscopic observation and ultramagnified observation at the cell level. These ultramagnified views (endocytoscopic images) make it possible to perform pathological diagnosis from endoscopic views of polyps alone during colonoscopy. However, endocytoscopic image diagnosis demands substantial experience from physicians. An automated pathological diagnosis system is required to prevent the overlooking of neoplastic lesions in endocytoscopy. For this purpose, we propose a new automated endocytoscopic image classification method that distinguishes neoplastic from non-neoplastic endocytoscopic images. This method consists of two classification steps. In the first step, we classify an input image with a support vector machine (SVM). If the confidence of this first classification is low, we forward the image to the second step, where we classify it with a convolutional neural network (CNN). If the confidence of the second classification is also low, we reject the input image. We experimentally evaluated the classification performance of the proposed method using about 16,000 colorectal endocytoscopic images as training data and 4,000 as test data. The results show that the proposed method achieves a high sensitivity of 93.4% with a small rejection rate of 9.3%, even on difficult test data.
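The two-step cascade can be sketched as a pair of confidence-gated classifiers. The control flow below follows the abstract (SVM first, CNN fallback, rejection if both are uncertain); the confidence thresholds and the stand-in classifiers are illustrative assumptions, since the abstract does not give concrete values.

```python
def cascade_classify(image, svm_predict, cnn_predict, t_svm=0.9, t_cnn=0.9):
    """Two-step cascade with rejection, following the abstract's control flow.

    svm_predict / cnn_predict: callables returning (label, confidence).
    t_svm / t_cnn: confidence thresholds (illustrative, not from the paper).
    """
    label, conf = svm_predict(image)
    if conf >= t_svm:
        return label                 # step 1: SVM is confident
    label, conf = cnn_predict(image)
    if conf >= t_cnn:
        return label                 # step 2: CNN is confident
    return "reject"                  # both uncertain: defer to the physician

# toy stand-ins for the trained classifiers (not real models)
svm = lambda x: ("neoplastic", 0.95) if x > 0.5 else ("non-neoplastic", 0.6)
cnn = lambda x: ("non-neoplastic", 0.92)

print(cascade_classify(0.7, svm, cnn))  # "neoplastic" (SVM confident)
print(cascade_classify(0.2, svm, cnn))  # "non-neoplastic" (SVM unsure, CNN confident)
```

Rejected images are excluded from the automated decision, which is how the method trades a 9.3% rejection rate for 93.4% sensitivity on the remainder.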