The evaluation and trial of computer-assisted surgery systems is an important part of the development process. Since human and animal trials are difficult to perform and raise serious ethical concerns, artificial organs and phantoms have become a key component for testing clinical systems. For soft-tissue phantoms such as the liver, it is important to match the biomechanical properties of the organ as closely as possible. Organ phantoms are often created from silicone that is shaped in casting molds. Silicone is relatively cheap, and the method does not rely on expensive equipment. One big disadvantage of silicone phantoms, however, is their high rigidity. To this end, we propose a new method for generating silicone phantoms with a softer and mechanically more accurate structure. Since we cannot change the rigidity of silicone itself, we developed a new and easy method to weaken the structure of the silicone phantom. The key component is the repurposing of water-soluble support material from FDM 3D printing. We designed casting molds with an internal grid structure that reduces the rigidity of the cast. The molds are printed on an FDM (Fused Deposition Modeling) printer entirely from water-soluble PVA (polyvinyl alcohol) material. After the silicone has hardened, the mold with its internal structure is dissolved in water, leaving the silicone phantom pervaded by a grid of cavities. Our experiments have shown that we can reduce the rigidity of the model by up to 70% of its original value, controlled simply through the size of the internal grid structure.
Providing the surgeon with the right assistance at the right time during minimally-invasive surgery requires computer-assisted surgery systems to perceive and understand the current surgical scene. This can be achieved by analyzing the endoscopic image stream. However, endoscopic images often contain artifacts, such as specular highlights, which can hinder further processing steps, e.g., stereo reconstruction, image segmentation, and visual instrument tracking. Hence, correcting them is a necessary preprocessing step. In this paper, we propose a machine learning approach for automatic specular highlight removal from a single endoscopic image. We train a residual convolutional neural network (CNN) to localize and remove specular highlights in endoscopic images using weakly labeled data. The labels merely indicate whether an image does or does not contain a specular highlight. To train the CNN, we employ a generative adversarial network (GAN), which introduces an adversary to judge the performance of the CNN during training. We extend this approach by (1) adding a self-regularization loss to reduce image modification in non-specular areas and by (2) including a further network to automatically generate paired training data from which the CNN can learn. A comparative evaluation shows that our approach outperforms model-based methods for specular highlight removal in endoscopic images.
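The interplay between the adversarial loss and the self-regularization term can be illustrated with a small sketch. The function name, the weighting factor, and the loss form below are illustrative assumptions for the sketch, not the paper's actual implementation:

```python
import numpy as np

def generator_loss(input_img, output_img, disc_score, highlight_mask, lam=10.0):
    """Adversarial term plus self-regularization outside specular regions.

    disc_score     -- adversary's estimate that output_img is highlight-free
    highlight_mask -- 1 where a specular highlight is present, 0 elsewhere
    lam            -- assumed weight for the self-regularization term
    """
    adv = -np.log(disc_score + 1e-8)  # non-saturating adversarial loss
    non_spec = 1.0 - highlight_mask
    # penalize any change the network makes to non-specular pixels
    self_reg = np.abs(non_spec * (output_img - input_img)).sum() / max(non_spec.sum(), 1.0)
    return adv + lam * self_reg
```

If the network modifies only masked (specular) pixels, the second term stays at zero; any edit outside the mask increases the loss, which is the intuition behind restricting modification to specular areas.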
Proc. SPIE. 9784, Medical Imaging 2016: Image Processing
KEYWORDS: Endoscopy, 3D acquisition, Data modeling, Surgery, Imaging systems, Cameras, 3D modeling, Image registration, Endoscopes, Personal digital assistants, Stereoscopic cameras, Augmented reality, Filtering (signal processing)
The number of minimally invasive procedures is growing every year. These procedures are highly complex and very demanding for the surgeons. It is therefore important to provide intraoperative assistance to alleviate these difficulties. For most computer-assistance functions, such as visualizing target structures with augmented reality, a registration step is required to map preoperative data (e.g., CT images) to the ongoing intraoperative scene. Without additional hardware, the (stereo-)endoscope is the prime intraoperative data source, and with it, stereo reconstruction methods can be used to obtain 3D models of target structures. To link reconstructed parts from different frames (mosaicking), the endoscope movement has to be known. In this paper, we present a camera tracking method that uses dense depth and feature registration, combined in a Kalman filter scheme. It provides a robust position estimation that shows promising results in ex vivo and in silico experiments.
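The fusion scheme can be illustrated with a minimal linear Kalman filter that merges two noisy measurement streams (one standing in for dense depth registration, one for feature registration) into a single estimate. The scalar state model and the noise variances below are assumptions for the sketch, not the parameters used in the paper:

```python
import numpy as np

def kalman_fuse(z_depth, z_feature, x0=0.0, p0=1.0, r_depth=0.04, r_feature=0.09):
    """Fuse two measurement streams of one (scalar) camera coordinate.

    z_depth, z_feature -- equally long sequences of per-frame measurements
    r_depth, r_feature -- assumed measurement noise variances
    Returns the final state estimate and its variance.
    """
    x, p = x0, p0
    q = 1e-4  # process noise: the camera may move slightly between frames
    for zd, zf in zip(z_depth, z_feature):
        p += q                       # predict (near-static motion model)
        for z, r in ((zd, r_depth), (zf, r_feature)):
            k = p / (p + r)          # Kalman gain
            x = x + k * (z - x)      # update with one measurement
            p = (1 - k) * p
    return x, p
```

A full implementation would use a 6-DoF pose state; the sequential-update pattern per frame stays the same.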
Minimally invasive interventions offer multiple benefits for patients but also entail drawbacks for the surgeon. The goal of context-aware assistance systems is to alleviate some of these difficulties. Localizing and identifying anatomical structures, malignant tissue, and surgical instruments through endoscopic image analysis is paramount for an assistance system, making online measurements and augmented reality visualizations possible. Furthermore, such information can be used to assess the progress of an intervention, thereby allowing for context-aware assistance. In this work, we present an approach for such an analysis. First, a given laparoscopic image is divided into groups of connected pixels, so-called superpixels, using the SEEDS algorithm. The content of a given superpixel is then described using information regarding its color and texture. Using a Random Forest classifier, we determine the class label of each superpixel. We evaluated our approach on a publicly available dataset for laparoscopic instrument detection and achieved a DICE score of 0.69.
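The DICE score used in the evaluation measures the overlap between a predicted and a ground-truth segmentation mask; a direct implementation of the standard definition:

```python
import numpy as np

def dice_score(pred, gt):
    """DICE coefficient between two binary masks: 2 * |A and B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```

A score of 1.0 means identical masks, 0.0 means no overlap at all; the reported 0.69 sits between the two.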
The goal of computer-assisted surgery is to provide the surgeon with guidance during an intervention, e.g., using augmented reality. To display preoperative data, soft tissue deformations that occur during surgery have to be taken into consideration. Laparoscopic sensors, such as stereo endoscopes, can be used to create a three-dimensional reconstruction of stereo frames for registration. Due to the small field of view and the homogeneous structure of tissue, reconstructing just one frame, in general, will not provide enough detail to register preoperative data, since every frame only contains a part of an organ surface. A correct assignment to the preoperative model is possible only if the patch geometry can be unambiguously matched to a part of the preoperative surface. We propose and evaluate a system that combines multiple smaller reconstructions from different viewpoints to segment and reconstruct a large model of an organ. Using graphics processing unit-based methods, we achieved four frames per second. We evaluated the system with in silico, phantom, ex vivo, and in vivo (porcine) data, using different methods for estimating the camera pose (optical tracking, iterative closest point, and a combination). The results indicate that the proposed method is promising for on-the-fly organ reconstruction and registration.
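The iterative closest point method mentioned above alternates between matching closest points and solving for a rigid transform. The alignment step at its core (Kabsch/SVD, shown here with correspondences already established) can be sketched as:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    src, dst -- (N, 3) arrays of corresponding 3D points.
    This is the alignment step inside one ICP iteration; a full ICP
    would re-establish closest-point correspondences and repeat.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    h = (src - mu_s).T @ (dst - mu_d)       # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = mu_d - r @ mu_s
    return r, t
```

In practice the closest-point matching, not this solve, dominates the runtime, which is why the pipeline benefits from GPU acceleration.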
The goal of computer-assisted surgery is to provide the surgeon with guidance during an intervention using augmented reality (AR). To display preoperative data correctly, soft tissue deformations that occur during surgery have to be taken into consideration. Optical laparoscopic sensors, such as stereo endoscopes, can produce a 3D reconstruction of single stereo frames for registration. Due to the small field of view and the homogeneous structure of tissue, reconstructing just a single frame will, in general, not provide enough detail to unambiguously register and update preoperative data. In this paper, we propose and evaluate a system that combines multiple smaller reconstructions from different viewpoints to segment and reconstruct a large model of an organ. By using GPU-based methods, we achieve near real-time performance. We evaluated the system on an ex vivo porcine liver (4.21 mm ± 0.63 mm) and on two synthetic silicone livers (3.64 mm ± 0.31 mm and 1.89 mm ± 0.19 mm) using three different methods for estimating the camera pose (no tracking, optical tracking, and a combination).
One of the most complex and difficult tasks for surgeons during minimally invasive interventions is suturing. A prerequisite for assisting the suturing process is the tracking of the needle. The endoscopic images provide a rich source of information which can be used for needle tracking. In this paper, we present an image-based method for markerless needle tracking. The method uses a color-based and geometry-based segmentation to detect the needle. Once an initial needle detection is obtained, a region of interest enclosing the extracted needle contour is passed on to a reduced segmentation step. The method is evaluated with in vivo images from da Vinci interventions.
In diagnostics and therapy control of cardiovascular diseases, detailed knowledge about the patient-specific behavior of blood flow and pressure can be essential. The only method capable of measuring complete time-resolved three-dimensional vector fields of the blood flow velocities is velocity-encoded magnetic resonance imaging (MRI), often denoted as 4D flow MRI. Furthermore, relative pressure maps can be computed from this data source, as presented by different groups in recent years. Hence, analysis of blood flow and pressure using 4D flow MRI can be a valuable technique in the management of cardiovascular diseases. In order to perform these tasks, all necessary steps in the corresponding process chain can be carried out in our in-house developed software framework MEDIFRAME. In this article, we apply MEDIFRAME to a study of hemodynamics in the pulmonary arteries of five healthy volunteers. The study included measuring vector fields of blood flow velocities by phase-contrast MRI and subsequently computing relative blood pressure maps. We visualized blood flow by streamline depictions and computed characteristic values for the left and the right pulmonary artery (LPA and RPA). In all volunteers, we observed a lower amount of blood flow in the LPA compared to the RPA. Furthermore, we visualized blood pressure maps using volume rendering and generated graphs of pressure differences between the LPA, the RPA, and the main pulmonary artery. In most volunteers, blood pressure was increased near the bifurcation and in the proximal LPA, leading to higher average pressure values in the LPA compared to the RPA.
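Relative pressure maps of this kind are commonly derived from the measured velocity field via the incompressible Navier–Stokes momentum equation, which expresses the pressure gradient in terms of quantities available from 4D flow MRI (the exact discretization used in MEDIFRAME is not specified in this abstract):

```latex
\nabla p = -\rho \left( \frac{\partial \vec{v}}{\partial t}
           + \left( \vec{v} \cdot \nabla \right) \vec{v} \right)
           + \mu \, \nabla^2 \vec{v}
```

Here \(\rho\) is the blood density, \(\mu\) its dynamic viscosity, and \(\vec{v}\) the measured velocity field; relative pressure is then obtained by spatially integrating \(\nabla p\), e.g., by solving the corresponding pressure Poisson equation.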
Intraoperative tracking of laparoscopic instruments is a prerequisite to realize further assistance functions. Since endoscopic images are always available, this sensor input can be used to localize the instruments without special devices or robot kinematics. In this paper, we present an image-based markerless 3D tracking of different da Vinci instruments in near real-time without an explicit model. The method is based on different visual cues to segment the instrument tip, calculates a tip point and uses a multiple object particle filter for tracking. The accuracy and robustness are evaluated with in vivo data.
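A particle filter of the kind mentioned can be illustrated for a single 2D tip point; the random-walk motion model and the Gaussian measurement model below are simplified assumptions for the sketch:

```python
import numpy as np

def particle_filter_step(particles, weights, z, rng, motion_std=2.0, meas_std=3.0):
    """One predict/update/resample cycle for tracking a 2D tip point.

    particles -- (N, 2) candidate tip positions, weights -- (N,)
    z         -- measured tip position from the segmentation step
    rng       -- numpy random Generator
    """
    n = len(particles)
    # predict: diffuse particles with a random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # update: weight particles by a Gaussian measurement likelihood
    d2 = ((particles - z) ** 2).sum(axis=1)
    weights = weights * np.exp(-d2 / (2.0 * meas_std ** 2))
    weights = weights / weights.sum()
    # resample: draw particles proportionally to their weights
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)
```

The tracked position is taken as the particle mean; multiple-object tracking runs one such filter (or a joint state) per instrument.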
Minimally invasive surgery is a highly complex medical discipline with several difficulties for the surgeon. To alleviate these difficulties, augmented reality can be used for intraoperative assistance. For visualization, the endoscope pose must be known, which can be acquired with a SLAM (Simultaneous Localization and Mapping) approach using the endoscopic images. In this paper, we focus on feature tracking for SLAM in minimally invasive surgery. Robust feature tracking and minimization of false correspondences is crucial for localizing the endoscope. As sensory input we use a stereo endoscope and evaluate different feature types in a developed SLAM framework. The accuracy of the endoscope pose estimation is validated with synthetic and ex vivo data. Furthermore, we test the approach with in vivo image sequences from da Vinci interventions.
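A common way to minimize false correspondences is the ratio test: a feature match is kept only if its best descriptor distance is clearly smaller than the second-best. A brute-force sketch (the 0.7 threshold is a typical assumption, not necessarily the value used in this work):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.7):
    """Return index pairs (i, j) of plausible descriptor matches.

    desc_a, desc_b -- (N, D) and (M, D) feature descriptor arrays, M >= 2
    A match is kept only if best_dist < ratio * second_best_dist,
    discarding ambiguous correspondences.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]   # best and second-best candidate
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches
```

Surviving matches would then feed the pose estimation; real systems replace the brute-force distance loop with an approximate nearest-neighbor index.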
In order to provide real-time intraoperative guidance, computer assisted surgery (CAS) systems often rely on
computationally expensive algorithms. The real-time constraint is especially challenging if several components such as
intraoperative image processing, soft tissue registration or context-aware visualization are combined in a single system.
In this paper, we present a lightweight approach to distribute the workload over several workstations based on the
OpenIGTLink protocol. We use XML-based message passing for remote procedure calls and native types for transferring
data such as images, meshes or point coordinates. Two different but typical scenarios are considered in order to evaluate
the performance of the new system. First, we analyze a real-time soft tissue registration algorithm based on a finite
element (FE) model. Here, we use the proposed approach to distribute the computational workload between a primary
workstation that handles sensor data processing and visualization and a dedicated workstation that runs the real-time FE
algorithm. We show that the additional overhead that is introduced by the technique is small compared to the total
execution time. Furthermore, the approach is used to speed up a context-aware, augmented reality-based navigation
system for dental implant surgery. In this scenario, the additional delay for running the computationally expensive
reasoning server on a separate workstation is less than a millisecond. The results show that the presented approach is a
promising strategy to speed up real-time CAS systems.
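The XML-based remote procedure calls can be sketched as follows; the element and attribute names here are illustrative assumptions, not the actual message schema of the presented system, and the transport over OpenIGTLink is omitted:

```python
import xml.etree.ElementTree as ET

def build_rpc_call(method, params):
    """Serialize a remote procedure call as an XML string."""
    root = ET.Element("rpc", attrib={"method": method})
    for name, value in params.items():
        p = ET.SubElement(root, "param", attrib={"name": name})
        p.text = str(value)
    return ET.tostring(root, encoding="unicode")

def parse_rpc_call(xml_string):
    """Recover the method name and parameters from a received message."""
    root = ET.fromstring(xml_string)
    params = {p.get("name"): p.text for p in root.findall("param")}
    return root.get("method"), params
```

In the distributed setting, such a string would travel as the payload of an OpenIGTLink message, while bulk data (images, meshes, point coordinates) uses the protocol's native binary types to avoid serialization overhead.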
Minimally invasive surgery is medically complex and can heavily benefit from computer assistance. One way to help the
surgeon is to integrate preoperative planning data into the surgical workflow. This information can be represented as a
customized preoperative model of the surgical site. To use it intraoperatively, it has to be updated during the intervention
due to the constantly changing environment. Hence, intraoperative sensor data has to be acquired and registered with the
preoperative model. Haptic information, which could complement the visual sensor data, is not yet established. In
addition, biomechanical modeling of the surgical site can help in reflecting changes that cannot be captured by the
visual sensors alone.
We present a setting where a force sensor is integrated into a laparoscopic instrument. In a test scenario using a silicone
liver phantom, we register the measured forces with a reconstructed surface model from stereo endoscopic images and a
finite element model. The endoscope, the instrument and the liver phantom are tracked with a Polaris optical tracking
system. By fusing this information, we can transfer the deformation onto the finite element model. The purpose of this
setting is to demonstrate the principles needed and the methods developed for intraoperative sensor data fusion. One
emphasis lies on the calibration of the force sensor with the instrument and first experiments with soft tissue. We also
present our solution and first results concerning the integration of the force sensor, as well as the accuracy of the fusion
of force measurements, surface reconstruction and biomechanical modeling.
Minimally invasive surgery is a medically complex discipline that can heavily benefit from computer assistance. One
way to assist the surgeon is to blend useful information about the intervention into the surgical view using Augmented
Reality. This information can be obtained during preoperative planning and integrated into a patient-tailored model of
the intervention. Due to soft tissue deformation, intraoperative sensor data such as endoscopic images has to be acquired
and non-rigidly registered with the preoperative model to adapt it to local changes.
Here, we focus on a procedure that reconstructs the organ surface from stereo endoscopic images with millimeter
accuracy in real-time. It deals with stereo camera calibration, pixel-based correspondence analysis, 3D reconstruction
and point cloud meshing. Accuracy, robustness and speed are evaluated with images from a test setting as well as
intraoperative images. We also present a workflow where the reconstructed surface model is registered with a
preoperative model using an optical tracking system. As a preliminary result, we show an initial overlay between an
intraoperative and a preoperative surface model that leads to a successful rigid registration between the two models.
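After the pixel-based correspondence analysis, each pixel's disparity is converted to depth via triangulation using the calibrated focal length and stereo baseline; a minimal sketch (the camera parameters in the usage below are assumed values, not the actual calibration):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_mm):
    """Convert a disparity map (pixels) to a depth map (mm): Z = f * B / d.

    focal_px    -- focal length in pixels (from stereo camera calibration)
    baseline_mm -- distance between the two camera centers in mm
    Invalid (zero or negative) disparities are mapped to NaN.
    """
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full_like(disparity, np.nan)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_mm / disparity[valid]
    return depth
```

The resulting per-pixel depths form the point cloud that is subsequently meshed and registered with the preoperative model.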
One of the main challenges related to computer-assisted laparoscopic surgery is the accurate registration of
pre-operative planning images with the patient's anatomy. One popular approach for achieving this involves intraoperative
3D reconstruction of the target organ's surface with methods based on multiple view geometry. The
latter, however, require robust and fast algorithms for establishing correspondences between multiple images of
the same scene. Recently, the first endoscope based on the Time-of-Flight (ToF) camera technique was introduced.
It generates dense range images with high update rates by continuously measuring the run-time of intensity
modulated light. While this approach yielded promising results in initial experiments, the endoscopic ToF
camera has not yet been evaluated in the context of related work. The aim of this paper was therefore to
compare its performance with different state-of-the-art surface reconstruction methods on identical objects. For
this purpose, surface data from a set of porcine organs as well as organ phantoms was acquired with four
different cameras: a novel ToF endoscope, a standard ToF camera, a stereoscope, and a High
Definition Television (HDTV) endoscope. The resulting reconstructed partial organ surfaces were then compared
to corresponding ground truth shapes extracted from computed tomography (CT) data using a set of local and
global distance metrics. The evaluation suggests that the ToF technique has high potential as a means for intraoperative
endoscopic surface registration.