This paper describes a bronchial orifice (BO) segmentation method for real bronchoscopic video frames that uses depth images. The BO is an anatomical landmark of the bronchus that is critical in clinical applications such as bronchus scene description and navigation path generation. Previous work used the image appearance and gradation of real bronchoscopic images to segment the orifice region, which performed poorly in complex scenes containing bubbles or illumination changes. To obtain better BO segmentation even in such complex scenes, we propose a BO segmentation method that uses the distance between the bronchoscope camera and the bronchus lumen, represented as a depth image. Since depth images cannot be acquired directly due to device limitations, we use an image-to-image domain translation network, the cycle generative adversarial network (CycleGAN), to estimate depth images from real bronchoscopic images. BO regions are taken to be the regions whose distances exceed a distance threshold, which we determine from the depth images' projection profiles. Experimental results showed that the proposed method can find BO regions in real bronchoscopic videos in real time. We manually labeled BO regions as ground truth to evaluate the proposed method; the average Dice score was 77.0%.
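As a concrete illustration of the thresholding step described above, the following Python sketch segments BO regions from an estimated depth image and scores the result against a manual label. The profile-based threshold rule and the margin parameter are assumptions for illustration only, not the paper's exact procedure.

    import numpy as np

    def segment_bo(depth, margin=0.8):
        # Projection profiles: mean depth along image rows and columns.
        row_profile = depth.mean(axis=1)
        col_profile = depth.mean(axis=0)
        # Hypothetical threshold rule: a fixed fraction of the profile peak.
        threshold = margin * max(row_profile.max(), col_profile.max())
        # BO regions are pixels farther from the camera than the threshold.
        return depth > threshold

    def dice(pred, gt):
        # Dice score between predicted and ground-truth binary masks.
        inter = np.logical_and(pred, gt).sum()
        return 2.0 * inter / (pred.sum() + gt.sum())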
We present an improved patient-specific bronchoscopic navigation scheme that uses visual SLAM for bronchoscope tracking. Bronchoscopic navigation systems assist physicians during bronchoscopic examinations. Conventional navigation systems obtain the camera pose of the bronchoscope from the image similarity between real bronchoscopic (RB) and virtual bronchoscopic (VB) images, or from the pose information of an additional sensor. We propose to use visual SLAM for bronchoscope tracking. The tracking procedure of visual SLAM is adapted to bronchoscopic scenes by considering the inter-frame displacement to filter the 2D-3D matches used for pose optimization. The tracking result is registered to CT images to find the relationship between the RB and CT coordinate systems. Virtual bronchoscopic views corresponding to the real bronchoscopic views are generated using the registration result and the camera pose. Experimental results showed that our proposed method tracks more frames with higher average accuracy than the previous method, and that the generated virtual bronchoscopic views are highly similar to the real bronchoscopic views.
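The displacement-based match filtering might look like the following minimal Python sketch. The array shapes and the pixel threshold max_disp are hypothetical; the paper's exact filtering criterion may differ.

    import numpy as np

    def filter_matches(pts_prev, pts_curr, map_pts, max_disp=30.0):
        # pts_prev, pts_curr: (N, 2) positions of the same keypoints in
        # consecutive frames; map_pts: (N, 3) associated 3D map points.
        disp = np.linalg.norm(pts_curr - pts_prev, axis=1)
        keep = disp < max_disp  # reject implausibly large inter-frame motion
        # Only the surviving 2D-3D matches feed the pose optimization.
        return pts_curr[keep], map_pts[keep]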
In this paper, we describe an automated hand-eye calibration for a laparoscope-holding robot in robot-assisted surgery. In minimally invasive surgery, a laparoscope-holding robot can provide more stable laparoscopic images than a human laparoscope assistant. We study a laparoscope-holding robot controlled by anatomical structure information during laparoscopic surgery. To operate a laparoscope-holding robot guided by images, a vision system for the robot is required. We compute the position and orientation relationship between the laparoscope camera and the Tool Center Point (TCP) of the robot arm to build this vision system. We use Tsai's method for hand-eye calibration to estimate the homogeneous transformation matrix between the TCP and the laparoscope camera. We attached a laparoscope to an industrial robot arm; the arm is moved to different positions and captures calibration board images. Hand-eye calibration is performed using the recorded TCP positions and calibration board images, yielding the homogeneous transformation matrix between the laparoscope camera coordinate system and the robot TCP coordinate system. The experimental results showed that the proposed method can compute the homogeneous transformation matrix between the TCP of a laparoscope-holding robot and a laparoscope camera.
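Tsai's method is available in OpenCV's calibrateHandEye, which a minimal Python sketch of this calibration step might use as shown below. The input lists (TCP poses in the robot base frame and board poses estimated from the laparoscope images, e.g., via cv2.solvePnP) are assumed to be collected as described above; this is an illustrative sketch, not the authors' implementation.

    import cv2

    def hand_eye_tsai(R_tcp2base, t_tcp2base, R_board2cam, t_board2cam):
        # R_tcp2base, t_tcp2base: TCP rotations/translations in the robot
        # base frame, one per station; R_board2cam, t_board2cam: calibration
        # board poses seen by the laparoscope camera at the same stations.
        # Returns the rotation and translation of the camera frame with
        # respect to the TCP frame, estimated with Tsai's method.
        return cv2.calibrateHandEye(
            R_gripper2base=R_tcp2base,
            t_gripper2base=t_tcp2base,
            R_target2cam=R_board2cam,
            t_target2cam=t_board2cam,
            method=cv2.CALIB_HAND_EYE_TSAI,
        )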
We present a new scheme for bronchoscopic navigation that exploits visual SLAM for bronchoscope tracking. Bronchoscopic navigation systems guide physicians by providing 3D spatial information about the bronchoscope during the examination. Existing systems mainly use CT-video registration or sensors for bronchoscope tracking. CT-video based tracking estimates the bronchoscope pose by registering real bronchoscopic images to virtual images generated from computed tomography (CT) volumes, which is time-consuming. Sensor-based tracking computes the bronchoscope pose from sensor measurements, which are easily influenced by examination tools. We improve bronchoscope tracking by using visual simultaneous localization and mapping (VSLAM), which overcomes these shortcomings. VSLAM estimates the camera pose and reconstructs the structure surrounding the camera (called the map). We use adjacent frames to increase the number of points used for tracking. The tracking performance of VSLAM was evaluated on phantom and in-vivo videos. The reconstruction performance was evaluated by the root mean square (RMS) distance between the aligned reconstructed points and the bronchus segmented from the pre-operative CT volume. Experimental results showed that the number of successfully tracked frames in the proposed method increased by more than 700 frames compared with the original ORB-SLAM across six cases. The average RMS in the phantom case, between the bronchus estimated by SLAM and the segmented bronchus shape, was 2.55 mm.
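The RMS evaluation can be sketched as follows in Python, assuming both point sets are already aligned in the CT coordinate system (the alignment step itself is not shown); the nearest-neighbor formulation is an assumption about how the point-to-surface distance is computed.

    import numpy as np
    from scipy.spatial import cKDTree

    def rms_to_bronchus(reconstructed_pts, bronchus_pts):
        # reconstructed_pts: (N, 3) aligned SLAM map points;
        # bronchus_pts: (M, 3) surface points of the bronchus segmented
        # from the pre-operative CT volume, in the same coordinate system.
        dists, _ = cKDTree(bronchus_pts).query(reconstructed_pts)
        return float(np.sqrt(np.mean(dists ** 2)))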