Modern laparoscopes are equipped with visible-light optical cameras that help surgeons navigate human anatomy. However, because surgical procedures require precision, surgeons would benefit from auxiliary imaging technologies to perform operations more reliably. To actualize this improvement, two cameras [near-infrared (NIR) and red-green-blue (RGB)] can be integrated into one housing module while maintaining centerpoint alignment and optimal image focus. We have designed a prototype that satisfies these requirements and features cameras that can be individually translated in the x-, y-, and z-directions. Tri-directional translation and tilt-angle fine-tuning allow the cameras to conform to the lens focal distance and ensure they capture the same visual field. To demonstrate the usefulness of this housing design, we characterize the optical alignment, fields of view, and depth of focus, and describe a custom-fabricated snapshot imager for associated medical applications in real-time, intraoperative settings. In detail, the housing module consists of a casing module for each camera and a central cube that serves as an interface between the light-collection optics at the front of the cube and the two optical cameras. A dichroic filter positioned at 45 degrees within the cube transmits near-infrared wavelengths to the NIR camera at the back and reflects visible light to the RGB camera at the bottom. To improve image focus, the casing modules can be moved in and out of the cube and fine-tuned by varying the relative mounting-screw tensions. Slots and spacers allow for calibration between the cameras and ensure they share the same centerpoint.
During thyroid surgery, parathyroid glands may be accidentally extracted because their shapes and colors are similar to those of the surrounding tissues (lymph nodes, fat, and thyroid tissue). To avoid damaging or resecting these vulnerable glands, we aim to help surgeons better identify the parathyroids with real-time bounding boxes on a screen available in operating rooms. Parathyroids are autofluorescent when excited with near-infrared (NIR) light; therefore, videos recorded simultaneously in NIR and RGB color formats can be used to train a deep learning model for robust object detection and localization without the need for expert annotation. The use of NIR images facilitates the generation of the ground-truth dataset. We collected videos from 16 patients during total thyroidectomy. The videos were first decomposed into a series of images sampled every 10 frames. An intensity threshold was then applied to the NIR images, producing images in which the parathyroid can be easily selected, and ground-truth bounding boxes were generated from them. Our ground-truth database contained over 600 images, of which 540 contained parathyroid glands and 66 did not. We ran Faster R-CNN twice: first to perform localization using only the images containing parathyroids, and second to perform classification using the entire dataset. For the first task, we achieved an average intersection over union of 85%; for the second, we obtained a precision of 98% and a recall of 100%. Given the limited dataset, these results are very encouraging.
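The ground-truth generation and evaluation steps can be illustrated with a short sketch. The code below is a minimal example, not the authors' exact pipeline: it assumes OpenCV-readable single-channel NIR frames and an illustrative intensity threshold, derives a bounding box from the thresholded autofluorescence signal, and computes the intersection over union used to evaluate localization.

import cv2
import numpy as np

def bbox_from_nir(nir_frame, threshold=200):
    """Derive a ground-truth bounding box from a single-channel NIR frame.

    The threshold value is illustrative; the actual cutoff is not stated here.
    """
    _, mask = cv2.threshold(nir_frame, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None  # frame contains no parathyroid autofluorescence
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    return (x, y, x + w, y + h)

def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)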
Many liver resection surgeries are performed with fluorescence-guided approaches using near-infrared (NIR) fluorescent dyes, such as indocyanine green (ICG), in an effort to prevent postoperative bile leakage. However, these existing fluorophores have limitations when assessing bile leakage and other bile duct injuries. We have reported a novel NIR fluorescent dye, BL-760, that we believe would be preferable to ICG and other NIR dyes during hepatectomies. We describe our hepatic resection surgery study conducted in a swine model and compare the advantages and disadvantages of common fluorescent dyes and the novel BL-760 dye.
Purpose: Intraoperative evaluation of bowel perfusion currently depends on subjective assessment, and quantitative, objective methods for assessing bowel viability in intestinal anastomosis are scarce. To address this clinical need, a conditional adversarial network is used to analyze data from laser speckle contrast imaging (LSCI) paired with a visible-light camera to identify regions of abnormal tissue perfusion. Approach: Our vision platform was based on a dual-modality bench-top imaging system with red-green-blue (RGB) and dye-free LSCI channels. Swine model studies were conducted to collect data on bowel mesenteric vascular structures with normal or abnormal microvascular perfusion, forming the control and experimental groups. A deep-learning model based on a conditional generative adversarial network (cGAN) was then used to perform dual-modality image alignment and to learn the distribution of the normal datasets during training. Abnormal datasets were subsequently fed into the predictive model for testing, and ischemic bowel regions were detected by monitoring erroneous reconstruction from the latent space. The main advantage is that the approach is unsupervised and does not require subjective manual annotation. Compared with the conventional qualitative LSCI technique, it provides well-defined segmentation results for different levels of ischemia. Results: Our model accurately segmented ischemic intestine images, with a Dice coefficient of 90.77% and an accuracy of 93.06% across 2560 RGB/LSCI image pairs. The ground truth was labeled by multiple independent estimations, combining the surgeons' annotations with fastest gradient descent in suspicious areas of the vascular images. The total processing time was 0.05 s for an image size of 256 × 256. Conclusions: The proposed cGAN provides pixel-wise, dye-free quantitative analysis of intestinal perfusion and is an ideal supplement to the traditional LSCI technique. It has the potential to help surgeons increase the accuracy of intraoperative diagnosis and improve clinical outcomes in mesenteric ischemia and other gastrointestinal surgeries.
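As a concrete illustration of the reconstruction-error idea and of the Dice metric reported above, the sketch below (array shapes, normalization, and the error threshold are assumptions, not the published cGAN code) flags pixels where a generator trained only on normally perfused tissue fails to reproduce the LSCI channel.

import numpy as np

def anomaly_mask(lsci, reconstruction, threshold=0.2):
    """Segment poorly reconstructed regions as candidate ischemia.

    lsci and reconstruction are assumed to be [0, 1] float arrays of equal shape;
    the threshold on absolute reconstruction error is illustrative.
    """
    error = np.abs(lsci.astype(np.float32) - reconstruction.astype(np.float32))
    return error > threshold

def dice(pred, truth, eps=1e-7):
    """Dice coefficient between two boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)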
Mesenteric ischemia or infarction involves a wide spectrum of disease and is known as a complex disorder with a high mortality rate. Bowel ischemia is caused by insufficient blood flow to the intestine, and surgical intervention is the definitive treatment to remove non-viable tissue and restore blood flow to viable tissue. Current clinical practice relies primarily on an individual surgeon's visual inspection and clinical experience, which can be subjective and unreproducible. A more consistent and objective method is therefore required to improve surgical performance and clinical outcomes. In this work, we present a new optical method combined with unsupervised learning using conditional variational autoencoders to enable quantitative and objective assessment of tissue perfusion. We integrated multimodal optical imaging technologies, color RGB and non-invasive, dye-free laser speckle contrast imaging (LSCI), into a handheld device, observed normal small bowel tissue to train a generative autoencoder deep neural network pipeline, and finally tested small bowel ischemia detection in preclinical rodent studies.
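For readers unfamiliar with LSCI, the speckle contrast itself is simply the ratio of the local standard deviation to the local mean of the raw speckle image over a small sliding window; lower contrast corresponds to faster flow. The sketch below shows a straightforward implementation; the window size and the added epsilon are assumptions rather than the parameters used in this work.

import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw_speckle, window=7):
    """Local speckle contrast K = sigma / mean over a sliding window.

    The 7x7 window is a common choice, not necessarily the one used here.
    """
    img = raw_speckle.astype(np.float32)
    mean = uniform_filter(img, size=window)
    mean_sq = uniform_filter(img * img, size=window)
    var = np.clip(mean_sq - mean * mean, 0, None)
    return np.sqrt(var) / (mean + 1e-7)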
KEYWORDS: RGB color model, Data modeling, Performance modeling, Surgery, Near infrared, Computer-aided diagnosis, Computer aided diagnosis and therapy, Imaging systems, Visual process modeling
Parathyroid glands (PGs), small endocrine glands in the neck, control calcium levels in the body and are crucial to maintaining homeostasis. Accidental removal of or direct damage to healthy parathyroid glands during thyroid surgery may occur because of their small size and similar appearance to surrounding anatomical structures, potentially leading to postoperative hypocalcemia. Precise and rapid detection of normal parathyroid glands in real time during surgery can therefore improve the surgical outcome. In this study, we introduce a deep learning system (YOLOv5) based on dual RGB/NIR imaging for computer-aided detection (CADe) of PGs with high accuracy. The model can detect parathyroid glands in real time and reports a confidence level for each detection, which can help surgeons make decisions. We tested the CADe system using the co-registered RGB/NIR camera and ex vivo thyroid tissue specimens. The average precision was significantly higher for models trained on dual RGB/NIR data (0.99) than on NIR (0.94) or RGB (0.96) data alone at a high confidence threshold (0.7). The proposed CADe may increase parathyroid detection rates clinically.
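A minimal inference sketch using the open-source YOLOv5 interface is shown below. The weight file name and the assumption that the RGB and NIR frames are already co-registered and fused into a single input image are ours; the paper's actual fusion scheme is not specified here.

import torch

# Load custom-trained weights through the public ultralytics/yolov5 hub entry point.
# 'pg_dual_rgb_nir.pt' is a hypothetical weight file name.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='pg_dual_rgb_nir.pt')
model.conf = 0.7  # confidence threshold matching the evaluation above

results = model('frame_0001.png')       # a co-registered RGB/NIR frame (assumed pre-fused)
detections = results.pandas().xyxy[0]   # x1, y1, x2, y2, confidence, class per detection
print(detections)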
In thyroid surgeries, it is often difficult to visually distinguish parathyroid glands (PTGs) from surrounding anatomical structures such as lymph nodes, fat, and thyroid tissue. There is a clear need to provide head and neck surgeons with intraoperative surgical guidance to safely identify PTGs and assess their viability in order to prevent the risk of hypocalcemia. This study aims to develop a portable handheld imager that eliminates the need for a complex setup for intraoperative imaging and increases surgeons' efficiency and performance during thyroid surgeries. The performance of the device prototype was evaluated via in vivo testing in preclinical studies.
Primary liver cancers, including intrahepatic bile duct cancer, pose a significant global burden of illness, with increasing incidence and mortality in the US and around the world. Surgery remains the most effective form of treatment. However, surgical complication rates for medium- to high-complexity hepatectomies persist in the 30-40% range, even in highly skilled hands and at high-volume centers. The critical challenges appear to be attributable to navigating liver parenchymal dissection, where the size of the resection surface, the associated blood loss and missed bile leaks from the liver parenchyma, and the prolonged operative time during dissection pose significant obstacles. In this work, we present a new laparoscopic real-time liver flow display of subsurface liver structures (e.g., intrahepatic artery, portal vein, and bile duct) that creates a 'Surgical Map' to guide liver parenchymal dissection in hepatobiliary surgery. The intelligent real-time display of intrahepatic critical structures and functional physiology can make hepatic dissection safer and more efficient for any liver surgery. We integrated multimodal optical imaging technologies into a single laparoscopic vision tool, created a continuously evolving quantitative surgical map based on a Bayesian framework, and finally validated the usefulness of the Surgical Map through preclinical porcine studies.
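The "continuously evolving" character of such a map can be illustrated with a per-pixel Bayesian update. The sketch below is a simplified log-odds formulation under assumed detection likelihoods; it is not the authors' published model, only a generic example of recursive Bayesian map refinement.

import numpy as np

def update_structure_map(log_odds, evidence, p_hit=0.8, p_miss=0.3):
    """Recursive per-pixel Bayesian update of the probability that a critical
    structure (e.g., intrahepatic vessel or bile duct) underlies each pixel.

    log_odds : running log-odds map accumulated over previous frames
    evidence : boolean map of per-frame detections from the multimodal imager
    p_hit / p_miss : assumed detection likelihoods (illustrative values)
    """
    l_hit = np.log(p_hit / (1.0 - p_hit))
    l_miss = np.log(p_miss / (1.0 - p_miss))
    log_odds = log_odds + np.where(evidence, l_hit, l_miss)
    prob = 1.0 / (1.0 + np.exp(-log_odds))  # convert back to probability for display
    return log_odds, prob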
Intestinal anastomosis is a surgical procedure that restores bowel continuity after surgical resection to treat intestinal malignancy, inflammation, or obstruction. Despite the routine nature of intestinal anastomosis procedures, the rate of complications is high. Standard visual inspection cannot distinguish the tissue subsurface and small changes in spectral characteristics of the tissue, so existing tissue anastomosis techniques that rely on human vision to guide suturing could lead to problems such as bleeding and leakage from suturing sites. We present a proof-of-concept study using a portable multispectral imaging (MSI) platform for tissue characterization and preoperative surgical planning in intestinal anastomosis. The platform is composed of a fiber ring light-guided MSI system coupled with polarizers and image analysis software. The system is tested on ex vivo porcine intestine tissue, and we demonstrate the feasibility of identifying optimal regions for suture placement.
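One simple way multispectral data can be turned into a suture-planning map is a band-ratio segmentation that flags vessel-rich regions to avoid. The band indices and threshold below are placeholders, since the wavelengths and analysis steps of the platform are not detailed here; in practice the bands would be chosen where hemoglobin absorption differs strongly.

import numpy as np

def vessel_avoidance_map(msi_cube, band_a=5, band_b=12, ratio_threshold=1.3):
    """Flag likely blood-vessel pixels in a multispectral cube (H x W x bands).

    band_a, band_b, and ratio_threshold are hypothetical parameters.
    Returns True where suture placement is considered safer.
    """
    a = msi_cube[..., band_a].astype(np.float32)
    b = msi_cube[..., band_b].astype(np.float32) + 1e-7
    vessel_mask = (a / b) > ratio_threshold
    return ~vessel_mask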
In this study, we demonstrate an automated data acquisition and analysis platform for both long-term motion tracking and functional brain imaging in freely moving mice. Our system uses a fiber-bundle-based fluorescence microscope for 24-hour imaging of cellular activities within the brain while also monitoring the corresponding animal behaviors with a NIR camera. Synchronized software and automated analysis allow quantification of all animal behaviors and the corresponding brain activities over extended periods of time. Our platform can be used to interrogate brain activity in different behavioral states and is also well suited for longitudinal studies of cellular activities in freely moving animals.
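A minimal sketch of the behavior-quantification step is shown below, assuming grayscale NIR frames and an arbitrary change threshold; the platform's actual tracking algorithm is not described here, so this is only a generic frame-differencing activity measure.

import numpy as np

def motion_index(prev_frame, curr_frame, pixel_threshold=15):
    """Fraction of pixels that changed between consecutive NIR frames.

    A simple activity measure for long recordings; the threshold is illustrative.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float((diff > pixel_threshold).mean())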
Fiber-optic-based optical imaging is an emerging technique for studying brain activity in live animals. Here, we introduce a novel fluorescence fiber-optic microendoscopy approach to minimally invasively detect neural activity in a live mouse brain. The system uses a flexible endoscopic probe composed of a multi-core coherent fiber bundle terminated with an objective lens with an approximately 1500-micron working distance. The fiber-optic neural interface is mounted on a 4-mm² cranial window, enabling visualization of glial calcium transients from the same brain region for weeks. We evaluated the system performance through in vivo imaging of GCaMP3 fluorescence in transgenic head-restrained mice during locomotion.
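Calcium transients such as the GCaMP3 signals above are conventionally reported as ΔF/F. The short sketch below assumes a percentile-based baseline, which is a common convention rather than the baseline definition used in this work.

import numpy as np

def delta_f_over_f(trace, baseline_percentile=10):
    """Compute dF/F for a fluorescence time series from one fiber-bundle ROI.

    F0 is taken as a low percentile of the whole trace (an assumed convention).
    """
    f0 = np.percentile(trace, baseline_percentile)
    return (trace - f0) / (f0 + 1e-7)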
A novel imaging system that recommends potential suture placements for anastomosis to surgeons is developed. This is achieved with a multispectral imaging system coupled with polarizers and image analysis software. We performed preliminary imaging of ex vivo porcine intestine to evaluate the system. Vulnerable tissue regions, including blood vessels, were successfully identified and segmented, and the thickness of different tissue areas was visualized. Strategies for choosing optimal suture placement points are discussed. Preliminary data suggest that our imaging platform and analysis algorithm may be useful for avoiding blood vessels and identifying optimal regions for suture placement, enabling safer operations in potentially reduced time.
Accurate optical characterization of different tissue types is an important tool for potentially guiding surgeons and enabling automated robotic surgery. Multispectral imaging and analysis have been used in the literature to detect spectral variations in tissue reflectance that may not be visible to the naked eye. Using this technique, hidden structures can be visualized and analyzed for effective tissue classification. Here, we investigated the feasibility of automated tissue classification using multispectral tissue analysis. Broadband reflectance spectra (200-1050 nm) were collected from nine different ex vivo porcine tissue types using an optical fiber-probe-based spectrometer system. We created a mathematical model to train on and distinguish different tissue types based on analysis of the observed spectra using total principal component regression (TPCR). Compared to other reported methods, our technique is computationally inexpensive and suitable for real-time implementation. Each of the 92 spectra was cross-referenced against the nine tissue types. Preliminary results show a mean detection rate of 91.3%, with detection rates of 100% and 70.0% (inner and outer kidney), 100% and 100% (inner and outer liver), 100% (outer stomach), and 90.9%, 100%, 70.0%, and 85.7% (four different inner stomach areas, respectively). We conclude that automated tissue differentiation using our multispectral tissue analysis method is feasible in multiple ex vivo tissue specimens. Although measurements were performed on ex vivo tissues, these results suggest that real-time, in vivo tissue identification during surgery may be possible.
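A simplified stand-in for the classification step is sketched below: ordinary principal component regression with scikit-learn rather than the full total-PCR formulation, with hypothetical array shapes, to show how reflectance spectra can be mapped to per-tissue-class scores and then assigned to the highest-scoring class.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, label_binarize

def train_pcr_classifier(spectra, labels, n_components=10):
    """Ordinary principal component regression as a simplified stand-in for TPCR.

    spectra : (n_samples, n_wavelengths) reflectance measurements (200-1050 nm)
    labels  : integer tissue-type codes, e.g., 0..8 for the nine tissue types
    """
    targets = label_binarize(labels, classes=sorted(set(labels)))  # one column per tissue type
    model = make_pipeline(StandardScaler(), PCA(n_components=n_components), LinearRegression())
    model.fit(spectra, targets)
    return model

def classify(model, spectrum):
    """Assign the tissue type whose regression score is highest."""
    scores = model.predict(spectrum.reshape(1, -1))
    return int(np.argmax(scores))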
A micro-sized implantable ball-lens-based fiber-optic probe design is described for continuous monitoring of brain activity in freely behaving mice. The prototype uses a 500-micron ball lens and a highly flexible 350-micron-diameter fiber bundle, enclosed in a 21G stainless steel sheath. Brain tissue samples of several types and thicknesses, expressing fluorescent probes such as GFP and the GCaMP3 calcium indicator, are used to evaluate the performance of the imaging probe. The measured working distance is approximately 400 μm, which is long enough to detect neural activity from cortical and cerebellar tissues of the mouse brain.
Coherent fiber bundles with high core density give microscopy both flexibility and high resolution. Despite these advantages, fiber bundles inevitably have uncovered regions between adjacent cores, which produce a structural artifact known as the pixelation effect. Many image processing techniques have been introduced to remove this pixelation artifact, such as frequency-domain filtering and Gaussian filtering. However, these methods are fundamentally limited because they use information from adjacent pixels to fill in the uncovered areas and therefore cannot avoid a blurring effect. To overcome this problem, we introduce a spatial compound imaging method. The method uses multiple frames taken with small positional offsets, so that each frame contains information that is absent in the others. The total amount of information increases as more images are added, and the resolution of the final image is expected to improve. At the same time, the duplicated parts among these images can be averaged to improve the SNR. These improvements require a sophisticated registration algorithm. The pixelation artifact is troublesome again in the registration process because its structural pattern is a strong feature shared across all images. However, we solve this problem by using a reference image to divide the sample images into effective and ineffective regions and using only the effective regions for registration. We evaluated our method with a USAF target and found that both SNR and resolution increased substantially.
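The compounding step can be sketched as follows, assuming the per-frame shifts have already been estimated by registration and that the fiber-core locations are encoded in a boolean "effective region" mask; the authors' actual registration algorithm and interpolation details differ from this simplified integer-shift example.

import numpy as np

def compound_frames(frames, shifts, core_mask):
    """Spatially compound fiber-bundle frames acquired with small positional offsets.

    frames    : list of 2D arrays (pixelated fiber-bundle images)
    shifts    : list of integer (dy, dx) offsets already estimated by registration
    core_mask : boolean array, True on fiber cores (the effective region)
    Only core pixels contribute, so cladding gaps are filled by other frames
    instead of being interpolated from neighboring pixels.
    """
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    weight = np.zeros(frames[0].shape, dtype=np.float64)
    for frame, (dy, dx) in zip(frames, shifts):
        aligned = np.roll(frame.astype(np.float64), shift=(-dy, -dx), axis=(0, 1))
        mask = np.roll(core_mask, shift=(-dy, -dx), axis=(0, 1)).astype(np.float64)
        acc += aligned * mask
        weight += mask
    return acc / np.maximum(weight, 1)  # averaged where frames overlap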
KEYWORDS: Imaging systems, Luminescence, Reflectivity, Signal detection, Fiber optics, Green fluorescent protein, Cancer, Optical filters, Cervical cancer, In vivo imaging
The most common optical method to validate intracellular gene delivery in cancer is to detect tagged fluorescence signals from the cells. However, fluorescence detection is usually performed in vitro because of the limitations of standard microscopes. Here, we propose a highly sensitive dual-modality fiber-optic imager (DFOI) that enables in vivo fluorescence imaging. Our system uses a coherent fiber-bundle-based imager capable of simultaneously performing both confocal reflectance and fluorescence microscopy. Non-viral vectors targeting human cervical cancer cells (HeLa) were used to evaluate the performance. Preliminary results demonstrate that the DFOI is promising for in vivo evaluation of intracellular gene delivery.
The authors describe the development of an ultrafast three-dimensional (3D) optical coherence tomography (OCT) imaging system that provides real-time intraoperative video images of the surgical site to assist surgeons during microsurgical procedures. The system is based on a full-range, complex-conjugate-free Fourier-domain OCT (FD-OCT) and was built on a CPU-GPU heterogeneous computing architecture capable of video-rate OCT image processing. The system displays at a maximum speed of 10 volumes/s for an image volume size of 160 × 80 × 1024 (X × Y × Z) pixels. We have used this system to visualize and guide two prototypical microsurgical maneuvers: microvascular anastomosis of the rat femoral artery and ultramicrovascular isolation of the retinal arterioles of the bovine retina. Our preliminary experiments using 3D-OCT-guided microvascular anastomosis showed optimal visualization of the rat femoral artery (diameter < 0.8 mm), instruments, and suture material. Real-time intraoperative guidance helped facilitate precise suture placement owing to optimized views of the vessel wall during anastomosis. Using the bovine retina as a model system, we performed "ultramicrovascular" feasibility studies by guiding handheld surgical micro-instruments to isolate retinal arterioles (diameter ∼0.1 mm). Isolation of the microvessels was confirmed by successfully passing a suture beneath the vessel in the 3D imaging environment.