Significance: Hyperspectral imaging (HSI) can support intraoperative perfusion assessment, the identification of tissue structures, and the detection of cancerous lesions. However, the practical use of HSI for minimally invasive surgery is currently limited, for example, by long acquisition times, the lack of live video, or bulky setups.
Aim: An HSI laparoscope is described and evaluated to address the requirements for clinical use and high-resolution spectral imaging.
Approach: Reflectance measurements with reference objects and resected human tissue from 500 to 1000 nm are performed to show the consistency with an approved medical HSI device for open surgery. Varying object distances are investigated, and the signal-to-noise ratio (SNR) is determined for different light sources.
Results: The handheld design enables real-time processing and visualization of HSI data during acquisition within 4.6 s. A color video is provided simultaneously and can be augmented with spectral information from push-broom imaging. The reflectance data from the HSI system for open surgery at 50 cm and the HSI laparoscope are consistent for object distances up to 10 cm. A standard rigid laparoscope in combination with a customized LED light source resulted in a mean SNR of 30 to 43 dB (500 to 950 nm).
Conclusions: Compact and rapid HSI with high spatial and spectral resolution is feasible in clinical practice. Our work may support future studies on minimally invasive HSI to reduce intra- and postoperative complications.
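The SNR values reported above are typically derived per spectral band from repeated acquisitions of a static reference target, as the ratio of the temporal mean to the temporal standard deviation of the measured signal. The following sketch is illustrative only and is not the described system's processing pipeline; the array layout and the spatial averaging are assumptions:

```python
import numpy as np

def per_band_snr_db(frames: np.ndarray) -> np.ndarray:
    """Estimate the per-band SNR in dB from repeated hyperspectral frames.

    frames: array of shape (repeats, height, width, bands) containing
            reflectance values of a static reference target.
    Returns an array of shape (bands,) with the SNR in dB per band.
    """
    # Temporal mean and standard deviation per pixel and band.
    mean = frames.mean(axis=0)
    std = frames.std(axis=0, ddof=1)
    # Avoid division by zero for perfectly constant pixels.
    snr = mean / np.maximum(std, 1e-12)
    # Average the linear SNR spatially, then convert to decibels.
    return 20.0 * np.log10(snr.mean(axis=(0, 1)))

# Example with synthetic data: 10 repeats, 64x64 pixels, 100 bands.
rng = np.random.default_rng(0)
frames = 0.5 + 0.01 * rng.standard_normal((10, 64, 64, 100))
print(per_band_snr_db(frames).round(1))  # roughly 34 dB per band
```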
3D medical images are important components of modern medicine, but their usefulness for the physician depends on their quality. Only high-quality images allow accurate and reproducible diagnosis and appropriate support during treatment. In a retrospective study, we analyzed 202 MRI images acquired for brain tumor surgery. An experienced neurosurgeon and an experienced neuroradiologist each rated every available image with respect to its role in the clinical workflow, its suitability for this specific role, various image quality characteristics, and imaging artifacts. Our results show that MRI data acquired for brain tumor surgery do not always fulfill the required quality standards and that there is significant disagreement between the surgeon and the radiologist, with the surgeon being more critical. Noise, resolution, and the coverage of anatomical structures were the most important criteria for the surgeon, while the radiologist was mainly disturbed by motion artifacts.
KEYWORDS: Video, Surgery, Instrument modeling, Endoscopy, Detection and tracking algorithms, Process modeling, Signal processing, Cameras, Data acquisition, Algorithm development
Robust identification of the instrument currently used by the surgeon is crucial for the automatic modeling and analysis of surgical procedures. Various approaches for intra-operative surgical instrument identification have been presented, mostly based on radio-frequency identification (RFID) or endoscopic video analysis. A novel approach is to identify the instruments on the instrument table of the scrub nurse with a combination of video and weight information. In a previous article, we successfully followed this approach and applied it to multiple instances of an ear, nose and throat (ENT) procedure and the surgical tray used therein. In this article, we present a metric for the suitability of the instruments of a surgical tray for identification by video and weight analysis and apply it to twelve trays from four different surgical domains (abdominal surgery, neurosurgery, orthopedics, and urology). The trays were digitized at the central sterile services department of the hospital. The results illustrate that surgical trays differ in their suitability for the approach. In general, additional weight information can contribute significantly to the successful identification of surgical instruments. Additionally, for ten different surgical instruments, ten exemplars of each instrument were tested for their weight differences. The samples indicate considerable weight variability among instruments of identical brand and model number. The results provide a new metric for approaches aiming at intra-operative surgical instrument detection and imply consequences for algorithms that exploit video and weight information for identification purposes.
The treatment process of tumor patients is supported by different stand-alone electronic patient record (ePR) and clinical decision support (CDS) systems. We developed a concept for the integration of a specialized ePR for head and neck tumor treatment and a DICOM-RT-based CDS system for radiation therapy in order to improve the clinical workflow and therapy outcome. A communication interface will be realized for the exchange of information that is only available in the respective other system. This information can then be used for further assistance and clinical decision support functions. In the first specific scenario, radiation therapy related information such as radiation dose or tumor size is transmitted from the CDS to the ePR to extend the information base. This information can then be used for the automatic creation of clinical documents or for retrospective clinical trial studies. The second specific use case is the transmission of follow-up information from the ePR to the CDS system. The CDS system uses the current patient's anatomy and planned radiation dose distribution to select other patients that already received radiation therapy. Afterwards, these patients are grouped according to therapy outcome so that the physician can compare radiation parameters and therapy results to choose the best possible therapy for the patient. In conclusion, this research project shows that centralized information availability in tumor therapy is important for improving the patient treatment process and for developing sophisticated decision support functions.
For transcatheter-based minimally invasive procedures in structural heart disease, ultrasound and X-ray are the two enabling imaging modalities. A live fusion of both real-time modalities can potentially improve the workflow and catheter navigation by combining the excellent instrument imaging of X-ray with the high-quality soft tissue imaging of ultrasound. A recently published approach to fuse X-ray fluoroscopy with trans-esophageal echo (TEE) registers the ultrasound probe to X-ray images by a 2D-3D registration method, which inherently provides a registration of ultrasound images to X-ray images. In this paper, we significantly accelerate the 2D-3D registration method in this context. The main novelty is to generate the projection images (DRRs) of the 3D object not via volume ray-casting but via fast rendering of triangular meshes. This is possible because, in the setting of TEE/X-ray fusion, the 3D geometry of the ultrasound probe is known in advance and its main components can be described by triangular meshes. We show that the new approach can achieve a speedup factor of up to 65 and does not affect the registration accuracy when used in conjunction with the gradient correlation similarity measure. The improvement is independent of the underlying registration optimizer. Based on these results, a TEE/X-ray fusion could be performed with a higher frame rate and a shorter time lag, towards real-time registration performance. The approach could potentially accelerate other applications of 2D-3D registration, e.g., the registration of implant models with X-ray images.
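The gradient correlation similarity measure mentioned above compares image gradients rather than raw intensities; a common textbook formulation averages the normalized cross-correlation of the corresponding gradient images of the DRR and the X-ray image. The sketch below follows that generic formulation and is not the authors' implementation:

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation of two images of equal shape."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def gradient_correlation(drr: np.ndarray, xray: np.ndarray) -> float:
    """Gradient correlation between a rendered DRR and an X-ray image.

    Averages the NCC of the row-direction and column-direction gradient images.
    """
    g0_d, g1_d = np.gradient(drr)   # gradients along rows and columns
    g0_x, g1_x = np.gradient(xray)
    return 0.5 * (ncc(g0_d, g0_x) + ncc(g1_d, g1_x))

# Toy usage: a DRR compared against a noisy copy of itself.
rng = np.random.default_rng(1)
drr = rng.random((128, 128))
xray = drr + 0.05 * rng.standard_normal(drr.shape)
print(gradient_correlation(drr, xray))
```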
KEYWORDS: Process modeling, Scanning probe microscopy, Human-machine interfaces, Surgery, Data modeling, Systems modeling, Sensors, Interfaces, Data acquisition, Analytical research
Surgical Process Modeling (SPM) is a powerful method for acquiring data about the evolution of surgical procedures.
Surgical Process Models are used in a variety of use cases including evaluation studies, requirements analysis and
procedure optimization, surgical education, and workflow management scheme design.
This work proposes the use of adaptive, situation-aware user interfaces for observation support software for SPM. We developed a method that supports the observer's modeling task by means of an ontological knowledge base. The knowledge base drives the graphical user interface for the observer and restricts the terminology search space depending on the current situation.
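As an illustration of how a knowledge base can restrict the terminology search space depending on the current situation, the following sketch uses a hard-coded, hypothetical phase-to-concept mapping in place of the actual ontology; all phase and term names are invented:

```python
# Hypothetical mapping from the current surgical situation (phase) to the
# terminology concepts that are plausible in that situation. A real system
# would query an ontological knowledge base instead of a hard-coded dict.
ALLOWED_TERMS = {
    "approach":  {"incision", "retraction", "coagulation"},
    "resection": {"cutting", "coagulation", "suction", "irrigation"},
    "closure":   {"suturing", "stapling", "dressing"},
}

def restrict_terminology(current_phase: str, all_terms: list[str]) -> list[str]:
    """Return only the terms the observer should see for the current phase."""
    allowed = ALLOWED_TERMS.get(current_phase, set(all_terms))
    return [t for t in all_terms if t in allowed]

terms = ["incision", "suturing", "coagulation", "suction", "stapling"]
print(restrict_terminology("resection", terms))  # ['coagulation', 'suction']
```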
The evaluation study shows that the workload of the observer was decreased significantly by using adaptive user interfaces. 54 SPM observation protocols were analyzed using the NASA Task Load Index, showing that the adaptive user interface significantly disburdens the observer in the workload criteria effort, mental demand, and temporal demand, helping the observer to concentrate on the essential task of modeling the surgical process.
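For reference, the weighted NASA Task Load Index aggregates six subscale ratings (mental, physical, and temporal demand, performance, effort, and frustration) by multiplying each 0-100 rating with a weight obtained from 15 pairwise comparisons and dividing the sum by 15. The sketch below shows that standard computation; it is not the study's analysis code, and the example ratings and weights are invented:

```python
def nasa_tlx_weighted(ratings: dict[str, float], weights: dict[str, int]) -> float:
    """Weighted NASA-TLX score.

    ratings: subscale ratings on a 0-100 scale.
    weights: number of times each subscale was chosen in the 15 pairwise
             comparisons (the weights must sum to 15).
    """
    assert sum(weights.values()) == 15, "pairwise-comparison weights must sum to 15"
    return sum(ratings[k] * weights[k] for k in ratings) / 15.0

ratings = {"mental": 70, "physical": 20, "temporal": 60,
           "performance": 30, "effort": 65, "frustration": 40}
weights = {"mental": 4, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}
print(nasa_tlx_weighted(ratings, weights))  # 56.0
```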
Workflow analysis can be used to record the steps taken during clinical interventions with the goal of identifying bottlenecks and improving procedural efficiency. In this study, we recorded the workflow for uterine fibroid embolization (UFE) procedures in the interventional radiology suite at Georgetown University Hospital in Washington, DC, USA. We employed a custom client/server software architecture developed by the Innovation Center for Computer Assisted Surgery (ICCAS) at the University of Leipzig, Germany. This software runs in a Java environment and enables an observer to record the actions taken by the physician and surgical team during these interventions. The recorded data are stored as an XML document, which can then be further processed. We recorded data from 30 patients and found a mean intervention time of 01:49:46 (±16:04) hh:mm:ss. The critical intervention step, the embolization, had a mean time of 00:15:42 (±05:49), which was only about 15% of the total intervention time.
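Since the recordings are stored as XML documents for further processing, per-step durations such as the embolization time can be derived by parsing timestamps. The following sketch assumes a deliberately simplified, hypothetical schema; the actual ICCAS recording format is richer:

```python
import xml.etree.ElementTree as ET
from datetime import datetime

# Hypothetical minimal schema for one recorded intervention.
XML = """<recording>
  <step name="embolization" start="2010-05-01T10:12:00" stop="2010-05-01T10:27:42"/>
  <step name="closure"      start="2010-05-01T10:28:00" stop="2010-05-01T10:35:10"/>
</recording>"""

def step_durations(xml_text: str) -> dict[str, float]:
    """Duration in seconds of each recorded work step."""
    root = ET.fromstring(xml_text)
    durations = {}
    for step in root.iter("step"):
        t0 = datetime.fromisoformat(step.get("start"))
        t1 = datetime.fromisoformat(step.get("stop"))
        durations[step.get("name")] = (t1 - t0).total_seconds()
    return durations

print(step_durations(XML))  # {'embolization': 942.0, 'closure': 430.0}
```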
The generation, storage, transfer, and representation of image data in radiology are standardized by DICOM. To cover the needs of image-guided surgery, or computer-assisted surgery in general, one needs to handle patient information beyond image data. A large number of objects must be defined in DICOM to address the needs of surgery. We propose an analysis process based on Surgical Workflows that helps to identify these objects together with use cases and requirements motivating their specification. As a first result, we confirmed the need for specifying the representation and transfer of geometric models. The analysis of Surgical Workflows has shown that geometric models are widely used to represent planned procedure steps, surgical tools, anatomical structures, or prostheses in the context of surgical planning, image-guided surgery, augmented reality, and simulation. So far, such models have been stored and transferred in several file formats devoid of contextual information. The standardization of data types including contextual information, together with specifications for the handling of geometric models, allows broader usage of such models. This paper explains the specification process leading to Geometry Mesh Service Object Pair classes. This process can serve as a template for the definition of further DICOM classes.
Workflow analysis has the potential to dramatically improve the efficiency and clinical outcomes of medical procedures.
In this study, we recorded the workflow for nerve block and facet block procedures in the interventional radiology suite
at Georgetown University Hospital in Washington, DC, USA. We employed a custom client/server software architecture
developed by the Innovation Center for Computer Assisted Surgery (ICCAS) at the University of Leipzig, Germany.
This software runs in an internet browser and allows the user to record the actions taken by the physician during a procedure. The data recorded during the procedure are stored as an XML document, which can then be further processed. We have successfully gathered data on a number of cases using a tablet PC, and these preliminary results show the feasibility of using this software in an interventional radiology setting. We are currently accruing additional cases; once more data have been collected, we will analyze the workflow of these procedures to look for inefficiencies and potential improvements.
Surgical Workflows are used for the methodical and scientific analysis of surgical interventions. The approach described here is a step towards developing surgical assist systems based on Surgical Workflows and integrated control systems for the operating room of the future. This paper describes concepts and technologies for the acquisition of Surgical Workflows by monitoring surgical interventions, and for their presentation. Establishing systems which support the Surgical Workflow in operating rooms requires a multi-staged development process beginning with the description of these workflows. A formalized description of surgical interventions is needed to create a Surgical Workflow. This description can be used to analyze and evaluate surgical interventions in detail. We discuss the subdivision of surgical interventions into work steps at different levels of granularity and propose a recording scheme for the acquisition of manual surgical work steps from running interventions. To support the recording process during the intervention, we introduce a new software architecture. The core of the architecture is our Surgical Workflow editor, which is designed to handle the manifold, complex, and concurrent relations during an intervention. Furthermore, a method for the automatic generation of graphs is presented that displays the recorded surgical work steps of the interventions. Finally, we conclude with considerations on extending our recording scheme to close the gap to S-PACS systems.
The approach was used to record 83 surgical interventions of 6 intervention types from 3 different surgical disciplines: ENT surgery, neurosurgery, and interventional radiology. The interventions were recorded at the University Hospital Leipzig, Germany, and at the Georgetown University Hospital, Washington, D.C., USA.
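The automatic generation of graphs from recorded work steps can be thought of as building a directed graph whose nodes are work steps and whose edges count observed transitions between consecutive steps. The sketch below illustrates this idea with plain Python data structures and invented step names; it is not the Surgical Workflow editor's implementation:

```python
from collections import defaultdict

def transition_graph(recordings: list[list[str]]) -> dict[tuple[str, str], int]:
    """Count observed transitions between consecutive work steps
    across several recorded interventions."""
    edges = defaultdict(int)
    for steps in recordings:
        for a, b in zip(steps, steps[1:]):
            edges[(a, b)] += 1
    return dict(edges)

# Two hypothetical recordings of the same intervention type.
recordings = [
    ["incision", "dissection", "coagulation", "dissection", "closure"],
    ["incision", "dissection", "closure"],
]
print(transition_graph(recordings))
```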