As surgical robots become progressively smaller and their actuation systems simpler, the opportunity arises to re-evaluate how they are integrated into operating room workflows. Over the past few years, several research groups have shown that robots can be made small and light enough to serve as hand-held tools, in contrast to the prevailing commercial paradigm of large, multi-arm, floor-mounted systems that must be teleoperated remotely. The hand-held paradigm allows robots to fit far more seamlessly into existing clinical workflows, and these new robots must therefore be paired with similarly compact user interfaces. It also opens a new area of user interface research: how can the surgeon control the position and orientation of the overall system while simultaneously controlling small robotic manipulators that maneuver dexterously at the tip? In this paper, we compare an onboard user interface mounted directly on the robotic platform against a traditional offboard user interface positioned away from the robot. With the offboard interface, the surgeon positions the robot, a support arm holds it in place, and the surgeon operates the manipulators from the offboard console, moving back and forth between the robot and the console as often as desired. Three experiments were conducted, and the results show that the onboard interface enables statistically significantly faster performance in a point-touching task performed in a virtual environment.
KEYWORDS: 3D modeling, Data modeling, 3D image reconstruction, Surgery, Imaging systems, 3D image processing, Luminescence, Image segmentation, Endoscopes, Kidney, Robotic surgery
Over the past several years, researchers have made significant progress toward providing image guidance for the da Vinci system, utilizing data sources such as robot kinematic data, endoscope image data, and preoperative medical images. One data source that could provide additional subsurface information for image guidance is the da Vinci's FireFly camera system. FireFly is a fluorescence imaging feature for the da Vinci system that uses injected indocyanine green dye and special endoscope filters to illuminate subsurface anatomical features as the surgeon operates. FireFly is now standard in many surgical procedures with the da Vinci robot; however, it remains challenging to understand the spatial relationships between preoperative CT images and intraoperative fluorescence images. Here, we extend our image guidance system to incorporate FireFly information, so that the surgeon can view FireFly data in the image guidance display while operating with the da Vinci robot. We present a method for reconstructing 3D models of the FireFly fluorescence data from endoscope images and mapping the models into an image guidance display that also includes segmented, registered preoperative CT images. We analyze the accuracy of our reconstruction and mapping method and present a proof-of-concept application in which we reconstruct a fluorescent subsurface blood vessel and map it into our image guidance display. Our method could provide surgeons with additional context for the FireFly fluorescence imaging data, or provide additional data for computing or verifying the registration between the robot and the preoperative images.
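The abstract does not spell out the reconstruction and mapping pipeline, but the general pattern it describes (segment fluorescence in the stereo endoscope images, triangulate to 3D in the camera frame, then transform into the CT frame via a known registration) can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the green-channel segmentation rule, the single-centroid triangulation, and all names such as segment_fluorescence and T_ct_from_cam are assumptions introduced here for illustration.

```python
import numpy as np
import cv2

def segment_fluorescence(img_bgr, threshold=80):
    """Mask pixels where the green channel dominates.
    Assumption: the fluorescence overlay is rendered green, as in
    typical ICG displays; this is not the paper's stated method."""
    b, g, r = cv2.split(img_bgr.astype(np.int16))
    mask = (g - np.maximum(b, r)) > threshold
    return mask.astype(np.uint8)

def triangulate_centroid(mask_left, mask_right, P_left, P_right):
    """Triangulate the centroid of the fluorescent region from the
    left/right endoscope views, given 3x4 stereo projection matrices
    obtained from camera calibration."""
    def centroid(mask):
        ys, xs = np.nonzero(mask)
        return np.array([[xs.mean()], [ys.mean()]], dtype=np.float64)
    x_l, x_r = centroid(mask_left), centroid(mask_right)
    X_h = cv2.triangulatePoints(P_left, P_right, x_l, x_r)  # 4x1 homogeneous
    return (X_h[:3] / X_h[3]).ravel()  # 3D point in the camera frame

def map_to_ct_frame(p_cam, T_ct_from_cam):
    """Map a camera-frame point into the CT frame using a 4x4 rigid
    registration transform (hypothetical name)."""
    p_h = np.append(p_cam, 1.0)
    return (T_ct_from_cam @ p_h)[:3]
```

In practice, a full pipeline would triangulate many matched fluorescent points (or fit a surface or vessel centerline) rather than a single centroid, and T_ct_from_cam would come from the robot kinematics and preoperative CT registration that the image guidance system already maintains.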