Haptic devices allow touch-based information transfer between humans and their environment. In minimally invasive surgery, a human teleoperator benefits from both visual and haptic feedback regarding the interaction forces between instruments and tissues. In this talk, I will discuss mechanisms for stable and effective haptic feedback, as well as how surgeons and autonomous systems can use visual feedback in lieu of haptic feedback. For haptic feedback, we focus on skin deformation feedback, which provides compelling information about instrument-tissue interactions with smaller actuators and larger stability margins compared to traditional kinesthetic feedback. For visual feedback, we evaluate the effect of training on human teleoperators’ ability to visually estimate forces through a telesurgical robot. In addition, we design and characterize multimodal deep learning-based methods to estimate interaction forces during tissue manipulation for both automated performance evaluation and delivery of haptics-based training stimuli. Finally, we describe the next generation of soft, flexible surgical instruments and the opportunities and challenges they present for seeing and feeling in robot-assisted surgery.
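To make the multimodal force-estimation idea concrete, the sketch below shows one plausible late-fusion architecture that combines a vision branch (scene images) with a kinematics branch (robot joint states) to regress instrument-tissue interaction forces. This is an illustrative assumption only, not the speakers' actual model; the input dimensions, network sizes, and variable names are hypothetical.

```python
# Illustrative sketch (assumed architecture, not the authors' method): a multimodal
# network fusing an RGB image of the tissue region with robot kinematics
# (e.g., joint positions and velocities) to regress a 3-axis interaction force.
import torch
import torch.nn as nn

class MultimodalForceEstimator(nn.Module):
    def __init__(self, kinematics_dim=14, force_dim=3):
        super().__init__()
        # Vision branch: small CNN over 64x64 RGB crops of the surgical scene.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),            # -> (B, 32)
        )
        # Kinematics branch: MLP over a flat vector of joint states.
        self.kinematics = nn.Sequential(
            nn.Linear(kinematics_dim, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        # Late fusion by concatenation, then regression to (Fx, Fy, Fz).
        self.head = nn.Sequential(
            nn.Linear(32 + 32, 64), nn.ReLU(),
            nn.Linear(64, force_dim),
        )

    def forward(self, image, kinematics):
        z = torch.cat([self.vision(image), self.kinematics(kinematics)], dim=1)
        return self.head(z)

# Example usage with random tensors standing in for real surgical data.
model = MultimodalForceEstimator()
img = torch.randn(8, 3, 64, 64)   # batch of scene image crops
kin = torch.randn(8, 14)          # batch of kinematic feature vectors
forces = model(img, kin)          # predicted per-sample force vectors
```

A late-fusion design like this is only one option; recurrent or transformer-based fusion over time windows is equally plausible for capturing tissue dynamics.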
Haptic devices allow touch-based information transfer between humans and intelligent systems, enabling communication in a salient but private manner that frees other sensory channels. For such devices to become ubiquitous, their physical and computational aspects must be intuitive and unobtrusive. The amount of information that can be transmitted through touch is limited in large part by the location, distribution, and sensitivity of human mechanoreceptors. Not surprisingly, many haptic devices are designed to be held or worn at the highly sensitive fingertips, yet stimulation using a device attached to the fingertips precludes natural use of the hands. Thus, we explore the design of a wide array of haptic feedback mechanisms, ranging from devices that can be actively touched by the fingertips to multi-modal haptic actuation mounted on the arm. We demonstrate how these devices are effective in virtual reality, human-machine communication, and human-human communication.