Image overlay solution based on threshold detection for a compact near infrared fluorescence goggle system
Abstract
Near infrared (NIR) fluorescence imaging has shown great potential for various clinical procedures, including intraoperative image guidance. However, existing NIR fluorescence imaging systems either have a large footprint or are handheld, which limits their usage in intraoperative applications. We present a compact NIR fluorescence imaging system (NFIS) with an image overlay solution based on threshold detection, which can be easily integrated with a goggle display system for intraoperative guidance. The proposed NFIS achieves compactness, light weight, hands-free operation, high-precision superimposition, and a real-time frame rate. In addition, the miniature and ultra-lightweight light-emitting diode tracking pod is easy to incorporate with NIR fluorescence imaging. Based on experimental evaluation, the proposed NFIS solution has a lower detection limit of 25 nM of indocyanine green at 27 fps and realizes a highly precise image overlay of NIR and visible images of mice in vivo. The overlay error is limited within a 2-mm scale at a 65-cm working distance, which is highly reliable for clinical study and surgical use.

1. Introduction

Identification and differentiation of tumor tissue from surrounding healthy tissue is a major challenge during oncologic surgery today.1 Surgeons have to rely on palpation and visual information to identify tissue that needs to be resected (such as cancerous tissue and lymph nodes) and tissue that needs to be preserved (muscles, nerves, blood vessels, etc.). The low visual and structural contrast between cancerous and healthy tissues can lead to two negative scenarios after surgery. First, the patient might be diagnosed with positive margins once the tumor tissue is resected. This diagnosis indicates that not all cancerous tissue has been removed from the patient; consequently, secondary surgery will have to be performed to remove the rest of the cancerous tissue. About 20% to 70% of breast cancer patients will undergo a second surgery due to positive margins of the resected tissue.2,3 The second issue is damage to healthy tissue, such as nerves, muscles, blood vessels, and vital organs, which can lead to additional surgeries to repair the damage inflicted on healthy organs during the primary surgery. Both of these scenarios, that is, positive tissue margins and damage to healthy tissue, take an unnecessary toll on the patient’s health and inflict additional financial costs.

Image-guided surgery (IGS) aims to provide structural and functional information to the surgeon in clinical settings.4 This additional information allows the surgeon to quickly and accurately identify the margins of the tumor tissue as well as the healthy tissue that should absolutely be preserved. Optical imaging techniques have played a major role in IGS. Among these techniques, fluorescence imaging has been especially important in providing vital information to surgeons. Exogenous and endogenous fluorophores can produce high contrast between healthy and tumor tissues once excited at the appropriate wavelength. Near-infrared (NIR) fluorescence is of particular interest in IGS because of the following advantages: low auto-fluorescence, low tissue scattering, low tissue absorption in the NIR region, and an imaging wavelength outside the visible spectrum.5 The low auto-fluorescence in the NIR allows for background-free imaging, leading to high contrast between tumor and healthy tissues. The low scattering and low absorption of NIR light allow for deeper imaging and the localization of tumor tissue at depths of several millimeters to a centimeter, depending on the tissue surrounding the tumor. There are two FDA-approved NIR fluorescence dyes, methylene blue (MB) and indocyanine green (ICG),6,7 which help to identify tumor tissue due to passive accumulation. Since these fluorescent dyes emit light outside the visible spectrum, NIR fluorescence imaging does not impede or hamper the existing surgical flow or surgical light sources.

However, NIR fluorescence requires specialized imaging sensors since the human eye cannot perceive these wavelengths. The fluorescence imaging system should be able to capture both visible spectrum and NIR information that are temporally and spatially coregistered. This simultaneously provides both anatomical information (color image) and tumor localization (NIR image) to the surgeon for more accurate resection. Since silicon photodetectors [complementary metal oxide semiconductor (CMOS)8 and charge-coupled device (CCD)] are sensitive to both the visible and NIR regions, the same photodetectors can be used to acquire both image modalities, which simplifies the design of the entire imaging system.

NIR fluorescence imaging systems (NFISs) have witnessed rapid growth in the last decade in both research and commercial settings. NIR imaging systems, such as FLARE,9,10 SPY,11 PDE,12 and others,13 are currently being used in several clinical settings on a daily basis. These systems combine two CCD imaging sensors, one for visible spectrum imaging and the other for NIR imaging, with optimized dichroic beam splitters and spectral filters.5 Visual information from both imaging sensors is processed and displayed on a remote monitor, which creates discomfort for the surgeon, who must view this information while simultaneously performing resection. The footprint of these devices is typically large, and the high cost associated with these systems limits their dissemination. Minimally invasive surgical procedures, such as endoscopy14 and laparoscopy,15–17 have also benefited from NIR fluorescence. Optimized NIR rigid probes have been used to simultaneously acquire fluorescence and visible spectrum information and to present this information on an overlay monitor. These studies have been limited to animal models, with promising leads for clinical studies in the near future.

One of the main challenges for wide acceptance of these imaging systems is how information is presented to the health operator. A creative solution was proposed by Liu et al.,18,19 in which a head-mounted device, such as the goggle display devices typically used in the gaming industry, was complemented with camera devices mounted on the physician’s head to acquire both NIR and visible information. The camera utilized time-multiplexed spectral filters to acquire NIR and visible spectrum images at different times, which caused misregistration between the visual information from the two spectrums due to movements of either the surgeon or the patient. The large weight of the entire system also caused severe discomfort to the surgeon. Another imaging system was proposed by Shao et al.,20 in which multiple cameras placed in the operating room were used to create a single synthetic image combining both NIR and visible spectrum information. The low accuracy of the coregistration between the visible and NIR images was a major issue of the system and prevented accurate resection of the tumor tissue in animal models.

In this paper, we present a compact and ergonomic fluorescence imaging system with a goggle display for simultaneous imaging of NIR and visible spectrum information and for real-time display of the fused information to the health professional. The imaging portion of the system is composed of two CMOS imaging sensors placed on a custom printed circuit board (PCB). Spectral filters are placed on both imaging sensors such that one sensor is optimized for visible spectrum imaging and the other for NIR spectrum imaging. The signal processing portion of the system is performed on either a field-programmable gate array (FPGA) or a PC, where a synthetic visible-NIR image is computed. The display portion of the system is composed of a video see-through goggle system. The entire system also utilizes a custom-built NIR-visible spectrum tracking pod that allows for automatic coregistration of both images. The system sensitivity and coregistration accuracy are presented, as well as data from a small animal model. Finally, we discuss the advantages and limitations of the presented imaging system.

2. System Setup and Threshold Detection Algorithm

2.1. Overview of the Entire Imaging System

Figure 1 depicts the entire imaging system, which is composed of four distinct modules: (1) an image capture module, (2) an image processing module, (3) an image display module, and (4) a light source for simultaneous visible spectrum and NIR fluorescence imaging.

Fig. 1

Dual-spectrum imaging system with a goggle display. (a) The imaging system is composed of two complementary metal oxide semiconductor (CMOS) imaging sensors, long-pass and short-pass optical filters, an FPGA data acquisition and image processing board, and an HDMI goggle system; (b) a custom-built NIR and visible white light-emitting diode (LED) illumination module; and (c) an NIR-visible spectrum LED tracking pod.


The image-capture module is composed of two CMOS imaging sensors (Aptina MT9V034) housed on a custom image capture PCB. The custom PCB ensures that the two imaging sensors are placed parallel and right next to each other, keeping the distance between them minimal. Placing the sensors close to each other ensures that the disparity between the two imaging sensors is small, and hence most of the imagers’ field of view can be used to capture visual information. One of the CMOS imaging sensors has a color Bayer pattern, while the second CMOS imaging sensor is monochromatic. The Bayer pattern imager is used to produce a color image, and the monochromatic sensor is used to capture NIR information. A high quantum efficiency of 35% at 800 nm for the CMOS imaging sensor allows for efficient imaging of fluorescence signals in the NIR spectrum. Spectral filters are placed in front of the imaging sensors: a short-pass filter with a cutoff wavelength of 690 nm (Semrock FF01-694) is integrated with the Bayer pattern imager, while a long-pass filter with a cutoff wavelength of 785 nm (Semrock BLP01-785R) is integrated with the monochromatic imager. Two polymer-based objective lenses (Sunex DSL208 15.8 mm F2.0) are used to minimize the weight and size of the complete imaging system. The weight of the entire system is 50 g, and the size of the custom image capture PCB is 1.5 in. by 1 in. (see Fig. 1).

The two imaging sensors operate in a master-slave configuration. This mode of operation allows for data to be sent from the monochromatic/NIR imaging sensor to the color imaging sensor via a two-line low-voltage differential signaling (LVDS) serial bus. The color imaging sensor then combines the data from both sensors and produces a single 16-bit word that contains information from both sensors (the lower 8 bits of data correspond to the color imager and the upper 8 bits of data correspond to the monochromatic/NIR imager). The 16 bits of data are serialized and sent via a two-line LVDS bus to the image processing module, hence minimizing the number of wires utilized to transfer data from both sensors.
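To make the byte-packing scheme concrete, the following minimal Python sketch models it in software; this is an illustrative model only, since on the actual hardware the packing is performed by the color sensor itself and the result is serialized over LVDS.

```python
# Illustrative sketch (not the sensor firmware): packing and unpacking the
# 16-bit word described above, with the color pixel in the lower 8 bits and
# the monochromatic/NIR pixel in the upper 8 bits.
import numpy as np

def pack_pixels(color8: np.ndarray, nir8: np.ndarray) -> np.ndarray:
    """Combine two 8-bit images into one 16-bit word per pixel."""
    return (nir8.astype(np.uint16) << 8) | color8.astype(np.uint16)

def unpack_pixels(word16: np.ndarray):
    """Recover the color (lower byte) and NIR (upper byte) images."""
    color8 = (word16 & 0xFF).astype(np.uint8)
    nir8 = (word16 >> 8).astype(np.uint8)
    return color8, nir8
```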

The image processing module is composed of a custom image processing PCB, an FPGA, and an optional PC. The custom image processing PCB is designed (1) to connect with the two imaging sensors and provide easy scalability for additional imaging sensors, (2) to mate with an FPGA board, and (3) to connect to the goggle display via an HDMI chip housed on the same PCB. The connection between the imager board and the image processing board is accomplished via a custom eSATAp connector, which carries eight independent signals and three shielding wires between the two boards. We utilize the eight signals as follows: two lines for LVDS image data from both cameras, two lines for the external exposure control signals of the two imagers, two lines for sending and receiving control signals to and from the imagers (such as exposure time, region of interest, gain of the signal, and others) via a serial protocol, and two power lines (ground and 3.3 V). The shielding wires are grounded and are used to shield the LVDS data signals from external noise sources. The serial data from the two imaging sensors are deserialized in the FPGA (Opal Kelly 3050, Xilinx Spartan 3). The FPGA board can either transfer the deserialized data to a PC via a USB 2.0 data link for image processing or execute the necessary image processing on the FPGA to produce a single combined NIR-visible spectrum image.

The image coregistration is accomplished via a software unit and a custom light-emitting diode (LED) tracking pod. The LED tracking pod contains two bright and thin LEDs (Fig. 2). One of the LEDs is centered at 850 nm (Everlight HIR19-21C), and the second LED covers the entire visible spectrum (Rohm Semi., SMLP12WBC7W). The LEDs are placed next to each other. Due to their minimal form factors, the LEDs are virtually collocated when viewed at a 50-cm distance. This allows both the NIR and color cameras to observe a single point in the NIR and visible spectrums, respectively. Since extra bright LEDs are utilized in the tracking pod, the location of the LEDs in both images corresponds to pixels with saturated values. Note that the assumption here is that the rest of the image in both the visible and NIR spectrums is not saturated. The image processing algorithm, which can run either on the FPGA or the PC (described in the next section), computes the disparity between the NIR and visible spectrum images based on the tracking pods’ LED locations. When multiple tracking pods are utilized, an average disparity is computed and applied to the entire NIR image. A single combined image is then computed by highlighting the pixel locations in the color image where the NIR pixel values are above a user-defined threshold.
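The final overlay step can be sketched as follows; this is a minimal Python illustration under our own assumptions (the green highlight color and the default threshold value are placeholders), not the authors' FPGA or C++ implementation.

```python
# Illustrative sketch (assumed implementation): shift the NIR image by the
# estimated global disparity and highlight, on the color image, every pixel
# whose NIR value exceeds a user-defined threshold.
import numpy as np

def overlay_nir(color_rgb: np.ndarray, nir: np.ndarray,
                dx: int, dy: int, threshold: int = 38) -> np.ndarray:
    """Return a copy of the color image with supra-threshold NIR pixels marked."""
    h, w = nir.shape
    shifted = np.zeros_like(nir)
    # Apply the global (translation-only) disparity to the NIR frame.
    shifted[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        nir[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    overlay = color_rgb.copy()
    overlay[shifted > threshold] = [0, 255, 0]  # placeholder: highlight in green
    return overlay
```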

Fig. 2

Block diagram of the NIR-visible imaging system. NIR-visible tracking pods are used to compute an average disparity between the two images. This disparity information is applied to the entire image to create a combined image in which the location of fluorescent data is highlighted on the color image data.


Since the FPGA is a compact device and can be easily carried in a pocket, the surgeon is free to move around the operating room, and the entire device does not impede the surgical flow. If the algorithm is executed on the PC, more optimized and advanced algorithms can be developed with significant time savings because of the ease of implementation in the C++ programming language. Converting the same algorithm from C++ to a hardware description language, such as Verilog or VHDL, is a very time-consuming task. Furthermore, images and videos captured from the imaging sensors can be recorded on the PC, which is not possible with the FPGA implementation at this point. The shortcoming of using a PC for image processing is that the surgeon would be tethered to a computer, which would restrict his or her movement in the operating room.

The image display module is composed of an HDMI decoder chip housed on the custom image processing PCB and a goggle display device. The HDMI decoder chip receives display data from the FPGA and provides high-definition video data at 1080p to the goggle display unit. The timing state machine is optimized to display information on any device capable of handling HDMI inputs, such as LCD monitors, TVs, and projectors. We use a lightweight and ergonomic goggle display unit to present the final image to the health professional.

The illumination unit21 provides both NIR excitation and visible spectrum illumination. The NIR excitation light source is constructed from 16 high-power 760-nm NIR LEDs (Roithner LaserTechnik H2A1-H760) covered with a spectral bandpass filter centered at 769 ± 20 nm (Edmund Optics 84–121). The spot size of the NIR light source is 20 cm in diameter with an average optical power of 5 mW/cm2. The NIR light source is placed 70 cm from the illuminated subject. The white light source is composed of a visible spectrum LED panel with an NIR blocking filter. The NIR blocking filter eliminates background signals when capturing the fluorescence image. Two separate NIR-visible light sources are used to provide uniform illumination to the animal in the experiment. The NIR sources are placed on either side of the imaging system, as depicted in Fig. 2.

2.2. Disparity Algorithm with Light-Emitting Diode Tracking Pods

Accurate computation of the disparity information between the NIR and color images is of paramount importance when generating a combined NIR-color image. Misregistration between these two images can create an incorrect assessment of the tumor location and can lead to both positive margins and damage to healthy tissue. In order to achieve high-precision overlay images, the locations of the LED tracking pods in both images are used to compute the disparity information for the entire image. The complete disparity algorithm is depicted in Fig. 3.

Fig. 3

Flowchart of the disparity image processing algorithm using NIR-visible tracking pods. The algorithm is implemented on both FPGA and PC for real-time display of combined NIR-color images to the surgeon.


The first step in this algorithm is calibrating both the NIR and color cameras. The main purpose of this step is to map the NIR and color pixel coordinates from the image planes (M) to a camera reference coordinate system (m) while taking into account the optical, geometrical, and digital characteristics of the camera. This is achieved by multiplying the pixel coordinates (x, y, 1) by a 3×3 matrix A, as shown in Eq. (1).

Matrix A represents the intrinsic parameters of the camera, where (u0, v0) are the pixel coordinates of the principal point, α and β are the scale factors for the coordinate system, and γ is the skew of the two orthogonal image axes. To compute the intrinsic camera parameters, images of a black-and-white checkerboard pattern are acquired in both the NIR and visible spectrums from different viewing perspectives.

Eq. (1)

$$\tilde{m} = A\,\tilde{M}, \qquad \text{where} \qquad A = \begin{bmatrix} \alpha & \gamma & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{bmatrix}.$$
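For readers reproducing this calibration step, a minimal sketch using OpenCV's standard checkerboard routines is shown below; the workflow is assumed rather than the authors' exact tooling, and the 9 × 6 inner-corner pattern and image file names are placeholders.

```python
# Hedged sketch of intrinsic calibration with a checkerboard, assuming OpenCV.
# The board size (9x6 inner corners) and the file list are placeholders.
import cv2
import numpy as np

pattern = (9, 6)  # inner corners per row/column (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in ["view0.png", "view1.png", "view2.png"]:  # checkerboard views
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# A is the 3x3 intrinsic matrix of Eq. (1): scale factors, skew, principal point.
ret, A, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```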

The next step in the image processing algorithm is to identify the location of the tracking pods in both the NIR and visible spectrum images. Since extra bright LEDs are used for the tracking pods, the pixels corresponding to the tracking pod LEDs will be saturated or close to saturation. Therefore, both the NIR and color images are thresholded at a value corresponding to 95% of the pixel’s dynamic range. For example, since the maximum digital value of a pixel is 255, the threshold value is set to 242. It is important that the rest of the image is not saturated to prevent erroneous flagging of the tracking pod locations.

After the NIR and visible spectrum images are thresholded, the locations of all pixels with values above the threshold are stored in separate lists for the two spectrums. The locations of the pixels in the visible spectrum list are compared to the locations of pixels in the NIR list by performing a local search. The local search is limited to a pixel neighborhood of ±15 pixels. A pixel location is marked as a valid point if a saturated pixel is found in both images; otherwise, the pixel location is removed from the list. Finally, after all the points are searched, the disparity between the NIR and color images is computed as the weighted disparity average shown in Eq. (2). Therefore,

Eq. (2)

$$\begin{cases} d_x = (\alpha_1 d_{x1} + \cdots + \alpha_n d_{xn})/n \\ d_y = (\alpha_1 d_{y1} + \cdots + \alpha_n d_{yn})/n, \end{cases}$$
where n is the number of valid points, α_i is the weighting coefficient given by the distance ratio from valid point i to the target pixel, d_xi is the horizontal disparity of valid pixel i between the NIR and color images, and d_yi is the corresponding vertical disparity.
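A compact sketch of the thresholding, local matching, and averaging steps is given below; it is a Python illustration under our own assumptions rather than the authors' C++/FPGA code, and uniform weights stand in for the distance-ratio coefficients α_i of Eq. (2).

```python
# Hedged sketch (not the authors' code): locate saturated tracking-pod pixels in
# both images, match them with a local +/-15-pixel search, and average the
# per-point offsets into one global disparity.
import numpy as np

def global_disparity(nir: np.ndarray, color_gray: np.ndarray,
                     threshold: int = 242, window: int = 15):
    nir_pts = np.argwhere(nir >= threshold)   # (row, col) of saturated NIR pixels
    vis_mask = color_gray >= threshold        # saturated visible-spectrum pixels
    dxs, dys = [], []
    for r, c in nir_pts:
        r0, r1 = max(r - window, 0), min(r + window + 1, vis_mask.shape[0])
        c0, c1 = max(c - window, 0), min(c + window + 1, vis_mask.shape[1])
        local = np.argwhere(vis_mask[r0:r1, c0:c1])
        if local.size:                        # valid point: match found nearby
            rr, cc = local[0] + [r0, c0]
            dys.append(rr - r)
            dxs.append(cc - c)
    if not dxs:
        return 0, 0                           # no tracking pod detected
    return int(np.mean(dxs)), int(np.mean(dys))
```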

Since the NIR and visible spectrum imaging sensors are placed next to each other on the custom image capture PCB, translation predominantly accounts for the disparity between the images generated by the two sensors. Also, computing an average translation disparity between the two images is easily implemented on both the FPGA and the PC for real-time (27 fps) imaging. The disparity computation can be extended to include an estimation of both translation and rotation for a better and more accurate overlay between the two images, at the cost of higher computational complexity.

The disparity between the NIR and visible spectrum images is a function of depth. Since the two cameras view the same scene in different spectrums, stereo vision algorithms that estimate depth, and therefore disparity, cannot be used for this application. The LED tracking pods allow the same point in space to be viewed in both the color and NIR images, and hence allow the disparity between the two images to be estimated. The disparity information computed from the tracking pods has the highest accuracy at the depth where the tracking pods are located. Since the tracking pods are placed next to the subject being imaged, part of the subject will be closer and part of it will be farther away than the tracking pods, depending on the subject’s three-dimensional structure. Hence, the disparity will be different across the imaging plane and will introduce error in the overlay image when a single (global) disparity metric is employed.

The disparity error estimation is illustrated in Fig. 4. In this figure, the square depicts the location of the tracking pods, which is accurately determined via the image processing algorithm described in the previous section. A global disparity estimate is used for all pixels in the image based on the location of these LED pods. The circle depicts a part of the scene that is farther from the imaging camera than the tracking pods, and the triangle depicts a part of the scene that is closer to the imaging camera. These three points in space, at depths D − ΔD, D, and D + ΔD, are projected to three different points on the imaging plane with different disparities [2(l + x1), 2l, and 2(l − x2)]. By applying the triangular similarity theorem, we derive a closed-form expression for the error estimate, presented in Eqs. (3) and (4). In Eq. (3), D is the distance between the sensor and the LED tracking pods, F is the focal length of the lens, d is the distance between the NIR and visible sensors, and l is the distance between the targeted pixel and the sensor center. Using Eq. (3), the relationship between the disparity error estimate and the target depth is shown in Eq. (4), where ΔD is the depth difference from the initial position D, and Δl_NIR and Δl_visible are the corresponding distance changes on the NIR and visible sensor pixel arrays, respectively. Under a normal working distance, ΔD, d, and F are much smaller than D. Hence, the error estimate e simplifies to a first-order linear relation with the depth difference ΔD.

Eq. (3)

$$\frac{d/2 + l}{D} = \frac{l}{F} \;\;\Longrightarrow\;\; l = \frac{d}{2}\cdot F \cdot \frac{1}{D - F},$$

Eq. (4)

$$e = \Delta l_{\mathrm{NIR}} + \Delta l_{\mathrm{visible}} = \frac{d \cdot F \cdot |\Delta D|}{(D - F)\,(D + |\Delta D| - F)} \approx \left[\frac{d \cdot F}{(D - F)^{2}}\right]\cdot |\Delta D|.$$
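As a quick numerical illustration of Eq. (4), the following sketch evaluates the exact and first-order error expressions; the focal length corresponds to the 15.8-mm objective lens noted earlier, while the sensor baseline d is a hypothetical placeholder rather than the system's actual value.

```python
# Hedged numeric check of Eq. (4). The sensor baseline d is a hypothetical
# placeholder; F = 15.8 mm matches the objective lens listed in Sec. 2.1.
def disparity_error(D, dD, d=0.02, F=0.0158):
    """Exact and first-order image-plane error e (all lengths in meters)."""
    exact = d * F * abs(dD) / ((D - F) * (D + abs(dD) - F))
    linear = d * F / (D - F) ** 2 * abs(dD)
    return exact, linear

# Example: working distance D = 0.65 m, depth offset of 6 cm.
print(disparity_error(D=0.65, dD=0.06))
```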

Fig. 4

Disparity error estimate illustration.


3. Results

3.1. Disparity Error Estimation

The disparity error between the NIR and color images is evaluated using a single tracking pod. The tracking pod is initially placed at either 45 or 65 cm to emulate two different working distances between the surgeon and the subject. The disparity at these working distances is computed and applied to all pixels in the image.

To emulate the fact that portions of the scene are closer to or farther from the camera than the working distance, the tracking pod is moved ±6 cm from the working distance. Since the LEDs on the tracking pod are minimal in size, they effectively form a single point in space that emits both white light and NIR light. The corresponding points of the LED tracking pod are determined in both images at the different depths, and the disparity is computed. A disparity error measurement is obtained by subtracting the global disparity at the working distance (45 or 65 cm) from the disparity of the tracking pod at various positions near the working distance (±6 cm). The results are presented in Fig. 5.

Fig. 5

Disparity error measurement.


The measurements in Fig. 5 indicate that for a working distance of 65 cm, a maximum disparity error of 2 mm between the two images is registered at a distance of 6 cm away from the tracking pods. In other words, if the surgeon initially views the tissue at 65 cm, the NIR and visible images are perfectly co-registered. If the surgeon then leans toward the subject by 6 cm (i.e., the distance between the surgeon and the tissue becomes 59 cm), the NIR and visible spectrum images will have a 2-mm co-registration error. The co-registration error increases to 3 mm for a working distance of 45 cm at a distance of 6 cm away from the tracking pods.

3.2. Sensitivity

The sensitivity of the proposed imaging system was tested by recording the NIR fluorescence signal responses for different concentrations of ICG and of LS301,18,22,23 dissolved in 100% dimethyl sulfoxide (DMSO). ICG has been widely used for NIR fluorescence since its FDA approval in the late 1960s. LS301 is a tumor-targeted contrast agent developed at Washington University in St. Louis and is currently under preclinical development. The vials with different ICG and LS301 concentrations are placed 50 cm from the imaging sensors. A control vial with DMSO is also imaged and denoted as the background signal. Both the visible and NIR imaging sensors operate at 27 fps (i.e., a maximum exposure time of 30 ms). The exposure time for the color imaging sensor is typically around 1 ms due to the bright surgical LED light source. The NIR imaging sensor uses the maximum exposure time of 30 ms to acquire the fluorescence signal with the best signal-to-noise ratio. The diluted samples are illuminated with a 780-nm excitation light source, and two different optical powers (5 and 10 mW/cm2) are used for the evaluation. The mean and standard deviation of the signal-to-background ratio (SBR) of the imaging system are calculated from a 30 × 30 pixel region. The experiments are repeated with three different samples at the same ICG-DMSO and LS301-DMSO concentrations.
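The SBR computation from image regions can be sketched as follows; this is an assumed Python rendering of the described procedure, and the region-of-interest coordinates are placeholders.

```python
# Hedged sketch of the signal-to-background ratio (SBR) measurement: mean signal
# in a 30x30-pixel region over the sample vial divided by the mean of a
# same-size region over the DMSO background vial. ROI positions are placeholders.
import numpy as np

def roi_mean_std(img: np.ndarray, top: int, left: int, size: int = 30):
    roi = img[top:top + size, left:left + size].astype(np.float64)
    return roi.mean(), roi.std()

def sbr(nir_img: np.ndarray, sample_roi, background_roi):
    s_mean, _ = roi_mean_std(nir_img, *sample_roi)
    b_mean, _ = roi_mean_std(nir_img, *background_roi)
    return s_mean / b_mean
```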

The sensitivity results from the imaging system are presented in Fig. 6. Figure 6(a) shows the signal intensity on a log scale for ICG and LS301 concentrations ranging from 500 pM to 50 μM. Figure 6(b) presents the signal intensity on a linear scale for the same concentrations. A higher illumination power increases the signal response from the same fluorescence sample. When the ICG and LS301 concentrations are higher than a certain threshold (500 nM for ICG and 1 μM for LS301 under 10 mW/cm2 illumination; 1 μM for ICG and 10 μM for LS301 under 5 mW/cm2 illumination), the fluorescence signal intensity exceeds the dynamic range of the imaging sensor, and hence the imager output saturates. Figure 7(a) presents the SBR as a function of the ICG and LS301 concentrations on a log scale, and Fig. 7(b) presents the same information on a linear scale, where the vial with 100% DMSO is used as the negative background control. Defining the detectability of a system as SBR = 2,13 the imaging system can detect 25 nM of ICG and 30 nM of LS301 under 10 mW/cm2 illumination. Using an excitation illumination of 5 mW/cm2, the minimum detectable concentration is 40 nM for both ICG and LS301. On the other hand, as seen in Figs. 7(a) and 7(b), the higher excitation power leads to a lower SBR at higher concentrations, which occurs because the fluorescence signal is saturated while the background signal is higher. Therefore, selecting a proper optical power for the illumination module is always a tradeoff between keeping a high SBR and a high signal response.
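The detection limit can then be read off as the lowest tested concentration whose SBR meets the SBR = 2 criterion, as in the following illustrative snippet; the arrays shown are placeholders, not the measured data.

```python
# Illustrative sketch: find the lowest tested concentration whose measured SBR
# reaches the SBR >= 2 detectability criterion. Values are placeholders.
concentrations_nM = [5, 10, 25, 50, 100]        # tested concentrations (placeholder)
sbr_values        = [1.2, 1.6, 2.1, 3.4, 5.0]   # corresponding SBR (placeholder)

detection_limit = min(c for c, s in zip(concentrations_nM, sbr_values) if s >= 2)
print(f"Detection limit: {detection_limit} nM")
```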

Fig. 6

Sensitivity test using ICG-DMSO and LS301-DMSO: (a) logarithmic scale and (b) linear scale.


Fig. 7

Signal-to-background ratio (SBR); green line shows the SBR=2 threshold: (a) logarithmic scale and (b) linear scale.


Figures 6 and 7 also indicate that ICG achieves a higher signal response and SBR than LS301. The reason is that LS301 is a cypate-based contrast agent, which has a lower quantum efficiency than ICG. On the other hand, as a tumor-targeted contrast agent, LS301 clears out from most animal organs 24 h after injection and remains only in the cancerous tissue. This effect leads to a much higher fluorescence contrast between cancerous and healthy tissue. Hence, LS301 is used in our in vivo study in mice to further validate the performance of the imaging system.

3.3. In Vivo Study in Mice

The NIR/visible spectrum imaging system was validated through in vivo studies using a subcutaneous breast cancer mouse model. Four- to six-week-old BALB/c mice were given subcutaneous flank injections of 100,000 4T1Luc murine breast cancer cells. When the tumors were at least 10 mm in size, the mice were injected with 10 μl of 60 μM tumor-targeted NIR fluorescence contrast agent LS301 via the tail vein. Images were taken 24 h postinjection of LS301. During the in vivo study, the image-capture module was set up at a 50-cm working distance, and the illumination module was placed at a 50-cm distance. All images were captured at 27 fps. During the imaging experiments, the mouse was kept anesthetized using a cocktail of ketamine/xylazine or through inhalation of isoflurane. Following the imaging experiment, the mouse was sacrificed through cervical dislocation, and its organs were harvested, imaged, and preserved for histologic evaluation.

In Fig. 8, the mouse and the LED tracking pod are placed in the same scene. Figure 8(a) presents the NIR image mapped with a false color map, demonstrating that the system can detect in vivo fluorescence signals from the tumor areas. A threshold value of 38 was applied to eliminate the background signal. Figure 8(b) presents the color image captured by the visible sensor. Without the disparity cancellation algorithm, the NIR and color images have a large disparity error, as shown in Fig. 8(c). Figure 8(d) depicts the final NIR-color image after the disparity algorithm is applied using the tracking LED pods. The tumor areas are clearly highlighted on the color image and accurately demarcated upon close visual evaluation of the final image.

Fig. 8

Mouse study test result: (a) near infrared (NIR) channel image, (b) visible channel image, (c) predisparity correction image, and (d) corrected image using a threshold detection algorithm.


Figure 9 shows the results for the same mouse when the skin is deflected to reveal the tumors. The tracking LED pod allows for an accurate estimate of the disparity between the NIR and color images, and the tumors are accurately highlighted in the final color image. Figures 8 and 9 contain the signals from the tracking LED pods, which appear as a red dot in the NIR image and a white dot in the visible image. The tracking signals were recorded simultaneously with the image of the mouse in order to determine the correct co-registration.

Fig. 9

Mouse study test result (open skin): (a) NIR channel image, (b) visible channel image, (c) predisparity correction image, and (d) corrected image using a threshold detection algorithm.


4. Discussion

In this paper, we present a compact NIR fluorescence imaging system with an ergonomic goggle display system. By using the proposed threshold detection image overlay solution and CMOS imaging sensors, the camera module of the imaging system achieves a small footprint and light weight while maintaining high sensitivity at a real-time frame rate. The proposed threshold detection algorithm and miniature LED tracking pods replace a complex and heavy dichroic optical setup, so that the image overlay between the NIR and color channels can be well maintained. This solution has two major advantages: the light weight of the entire imaging system and optical simplification. First, the optical portion of the imaging system contains only objective lenses and spectral filters, which yields a compact and lightweight goggle device. Second, optical simplification reduces the number of optical parts contributing to the optical display of the proposed NFIS, which leads to similar or even better optical performance. Additionally, compared with the CCD imaging sensors commonly used in existing NFISs, the CMOS imaging sensor leads to a much smaller camera module form factor, lower power consumption, and higher quantum efficiency in the NIR spectrum. The simplified peripheral circuitry further reduces the weight of the camera module.

To examine the performance of the proposed NFIS solution, we measured the sensitivity as a function of fluorophore concentration at a real-time frame rate and a standard working distance. The results show that the proposed NFIS running at 27 fps can detect concentrations as low as 25 nM ICG-DMSO. Therefore, the proposed NFIS achieves similar or even better sensitivity compared to other existing NFISs published in the literature,10 and its performance can be improved simply by applying better optical lenses. The in vivo study in mice also validates that the proposed NFIS solution is able to precisely highlight tumor margins while providing a good image overlay. The downside of our solution is the disparity error introduced by depth differences between the LED tracking pod and the target area. However, based on our error tolerance test, the proposed NFIS solution stays within a 2-mm offset when the depth difference is less than 6 cm at a 65-cm working distance. Therefore, for regular surface operations and typical animal studies, the offset is within the tolerance scale. Additionally, introducing this imaging equipment into the operating room requires sterility of all instruments. The LED tracking pod is enclosed in a shielded case composed of sterile acrylic. The image processing module will be encompassed in a sterile box, while the goggle and imaging sensor will not be covered in sterile acrylic due to degradation of the optical performance, i.e., blurring of the image from the acrylic. This instrument will need to be wiped with alcohol and handled by an assistant when placing it on the head of a surgeon. The instrument is currently undergoing clinical trials, and we have employed this procedure to ensure its sterility. As an alternative solution, the concept of a projection-based tracking pod used in surgical navigation systems24 will be explored in our future research.

Finally, in this work, we present our efforts to push NFIS development from a hardware-emphasized design toward a software-hardware co-design, which not only takes advantage of the computational power of modern computers but also simplifies the necessary optical setup. The experimental measurements show that although our proposed threshold detection algorithm does not require much computational complexity on the programming side, it achieves good image overlay results as well as a simplified optical setup. We believe that by exploring advanced image processing and computer vision algorithms (such as feature matching, correspondence computation, etc.), future image overlay algorithms will become tool-free and more robust for intraoperative applications. Additionally, innovative optics25 and other imaging modalities26 will also be studied and incorporated into the proposed system for better sensitivity and overlay accuracy.

5. Conclusion

In this paper, we present a compact NFIS with an image overlay solution based on threshold detection, which can be easily integrated with a goggle display system for intraoperative guidance. The proposed NFIS achieves NIR-visible image overlay with high precision and at a real-time frame rate. In addition, the miniature and ultra-lightweight LED tracking pod is easy to incorporate into an NIR fluorescence imaging study. Based on experimental evaluation in an in vivo mouse study, the proposed NFIS solution achieves an ICG detection limit of 25 nM at 27 fps and realizes a highly precise overlay of the NIR and visible images. The overlay error is limited to a 2-mm scale at a 65-cm working distance, which is highly reliable for clinical study and surgical use.

Acknowledgments

The research is funded by NIH R01 CA171651, R01 EB00811, and P50 CA094056.

References

1. R. Weissleder and M. J. Pittet, “Imaging in the era of molecular oncology,” Nature 452(7187), 580–589 (2008). http://dx.doi.org/10.1038/nature06917

2. L. Jacobs, “Positive margins: the challenge continues for breast surgeons,” Ann. Surg. Oncol. 15(5), 1271–1272 (2008). http://dx.doi.org/10.1245/s10434-007-9766-0

3. M. Badruddoja, “Ductal carcinoma in situ of the breast: a surgical perspective,” Int. J. Surg. Oncol. 2012, e761364 (2012). http://dx.doi.org/10.1155/2012/761364

4. A. L. Vahrmeijer et al., “Image-guided cancer surgery using near-infrared fluorescence,” Nat. Rev. Clin. Oncol. 10(9), 507–518 (2013). http://dx.doi.org/10.1038/nrclinonc.2013.123

5. S. Gioux, H. S. Choi, and J. V. Frangioni, “Image-guided surgery using invisible near-infrared light: fundamentals of clinical translation,” Mol. Imaging 9(5), 237–255 (2010).

6. M. V. Marshall et al., “Near-infrared fluorescence imaging in humans with indocyanine green: a review and update,” Open Surg. Oncol. J. 2(2), 12–25 (2010). http://dx.doi.org/10.2174/1876504101002010012

7. S. Luo et al., “A review of NIR dyes in cancer targeting and imaging,” Biomaterials 32(29), 7127–7138 (2011). http://dx.doi.org/10.1016/j.biomaterials.2011.06.024

8. E. R. Fossum, “CMOS image sensors: electronic camera on a chip,” in Proc. Int. Electron Devices Meeting (IEDM 95), 17–25 (1995). https://doi.org/10.1109/IEDM.1995.497174

9. S. L. Troyan et al., “The FLARE intraoperative near-infrared fluorescence imaging system: a first-in-human clinical trial in breast cancer sentinel lymph node mapping,” Ann. Surg. Oncol. 16(10), 2943–2952 (2009). http://dx.doi.org/10.1245/s10434-009-0594-2

10. J. S. D. Mieog et al., “Toward optimization of imaging system and lymphatic tracer for near-infrared fluorescent sentinel lymph node mapping in breast cancer,” Ann. Surg. Oncol. 18(9), 2483–2491 (2011). http://dx.doi.org/10.1245/s10434-011-1566-x

11. M. Takahashi et al., “SPY: an innovative intra-operative imaging system to evaluate graft patency during off-pump coronary artery bypass grafting,” Interact. Cardiovasc. Thorac. Surg. 3(3), 479–483 (2004). http://dx.doi.org/10.1016/j.icvts.2004.01.018

12. N. Tagaya et al., “Intraoperative identification of sentinel lymph nodes by near-infrared fluorescence imaging in patients with breast cancer,” Am. J. Surg. 195(6), 850–853 (2008). http://dx.doi.org/10.1016/j.amjsurg.2007.02.032

13. S. B. Mondal et al., “Real-time fluorescence image-guided oncologic surgery,” 171–211 (2014).

14. V. Venugopal et al., “Design and characterization of an optimized simultaneous color and near-infrared fluorescence rigid endoscopic imaging system,” J. Biomed. Opt. 18(12), 126018 (2013). http://dx.doi.org/10.1117/1.JBO.18.12.126018

15. J. Glatz et al., “Concurrent video-rate color and near-infrared fluorescence laparoscopy,” J. Biomed. Opt. 18(10), 101302 (2013). http://dx.doi.org/10.1117/1.JBO.18.10.101302

16. X. Wang et al., “Compact instrument for fluorescence image-guided surgery,” J. Biomed. Opt. 15(2), 020509 (2010). http://dx.doi.org/10.1117/1.3378128

17. S. Gioux et al., “FluoSTIC: miniaturized fluorescence image-guided surgery system,” J. Biomed. Opt. 17(10), 106014 (2012). http://dx.doi.org/10.1117/1.JBO.17.10.106014

18. Y. Liu et al., “Hands-free, wireless goggles for near-infrared fluorescence and real-time image-guided surgery,” Surgery 149(5), 689–698 (2011). http://dx.doi.org/10.1016/j.surg.2011.02.007

19. Y. Liu et al., “Near-infrared fluorescence goggle system with complementary metal-oxide-semiconductor imaging sensor and see-through display,” J. Biomed. Opt. 18(10), 101303 (2013). http://dx.doi.org/10.1117/1.JBO.18.10.101303

20. P. Shao et al., “Designing a wearable navigation system for image-guided cancer resection surgery,” Ann. Biomed. Eng. 42(11), 2228–2237 (2014). http://dx.doi.org/10.1007/s10439-014-1062-0

21. N. Zhu et al., “Engineering light-emitting diode surgical light for near-infrared fluorescence image-guided surgical systems,” J. Biomed. Opt. 19(7), 076018 (2014). http://dx.doi.org/10.1117/1.JBO.19.7.076018

22. S. Achilefu et al., “Synthesis, in vitro receptor binding, and in vivo evaluation of fluorescein and carbocyanine peptide-based optical contrast agents,” J. Med. Chem. 45(10), 2003–2015 (2002). http://dx.doi.org/10.1021/jm010519l

23. M. Y. Berezin et al., “Near infrared dyes as lifetime solvatochromic probes for micropolarity measurements of biological systems,” Biophys. J. 93(8), 2892–2899 (2007). http://dx.doi.org/10.1529/biophysj.107.111609

24. K. A. Gavaghan et al., “A portable image overlay projection device for computer-aided open liver surgery,” IEEE Trans. Biomed. Eng. 58(6), 1855–1864 (2011). http://dx.doi.org/10.1109/TBME.2011.2126572

25. N. Zhu et al., “Dual-mode optical imaging system for fluorescence image-guided surgery,” Opt. Lett. 39(13), 3830–3832 (2014). http://dx.doi.org/10.1364/OL.39.003830

26. T. York et al., “Bioinspired polarization imaging sensors: from circuits and optics to signal processing algorithms and biomedical applications,” Proc. IEEE 102(10), 1450–1469 (2014). http://dx.doi.org/10.1109/JPROC.2014.2342537

Biography

Shengkui Gao received his BS degree in electrical engineering from Beihang University, Beijing, China, in 2008 and his MS degree in electrical engineering from the University of Southern California, Los Angeles, California, United States, in 2010. Currently, he is working toward his PhD degree in computer engineering at Washington University in St. Louis, Missouri, United States. His research interests include imaging sensor and system design, mixed-signal VLSI design, fluorescence-related image processing, and optics development.

Suman B. Mondal received his B. Tech. and M. Tech. degrees in biotechnology and biochemical engineering from the Indian Institute of Technology, Kharagpur, West Bengal, India. He is currently a PhD candidate in the Department of Biomedical Engineering at Washington University in St. Louis, Missouri, United States, pursuing his thesis research at the Optical Research Laboratory, under the mentorship of Dr. Samuel Achilefu. His research interests include image guided surgery, intraoperative imaging, and fluorescence imaging.

Nan Zhu received his MSc degree in physics of laser communication from Essex University, United Kingdom, in 2005 and his PhD degree in optical engineering from Beijing Institute of Technology, China, in 2010. He is now a postdoctoral research associate at the University of Arizona’s College of Optical Sciences, United States. His research interests include lens design, freeform optics, optomechanical design, biomedical optics, surgical guidance, and head-mounted displays.

Rongguang Liang received his BS degree in optical engineering, his MS degree in applied optics, and his PhD degree in optical sciences from Zhejiang University, Rose-Hulman Institute of Technology, and University of Arizona in 1989, 1998, and 2001, respectively. He spent 16 years in the optics industry before he moved to his current position as an associate professor in the College of Optical Sciences, University of Arizona, in 2011. His research interests include optical design, metrology, imaging technologies, and biomedical optics. He is an SPIE fellow.

Samuel Achilefu received his PhD degree in molecular and materials chemistry from the University of Nancy, France, and completed his postdoctoral training at Oxford University, United Kingdom. He is a professor of radiology, biomedical engineering, and biochemistry & molecular biophysics at Washington University in St. Louis, Missouri, United States. He is the director of the Optical Radiology Laboratory and of the Molecular Imaging Center, as well as co-leader of the Oncologic Imaging Program of Siteman Cancer Center.

Viktor Gruev received his MS and PhD degrees in electrical and computer engineering from The Johns Hopkins University, Baltimore, Maryland, United States, in May 2000 and September 2004, respectively. After finishing his doctoral studies, he was a postdoctoral researcher at the University of Pennsylvania, Philadelphia, Pennsylvania, United States. Currently, he is an associate professor in the Department of Computer Science and Engineering at Washington University in St. Louis, Missouri, United States. His research interests include imaging sensors, polarization imaging, bio-inspired circuits and optics, biomedical imaging, and micro/nano fabrication.

© 2015 Society of Photo-Optical Instrumentation Engineers (SPIE) 0091-3286/2015/$25.00
Shengkui Gao, Suman B. Mondal, Nan Zhu, RongGuang Liang, Samuel Achilefu, and Viktor Gruev "Image overlay solution based on threshold detection for a compact near infrared fluorescence goggle system," Journal of Biomedical Optics 20(1), 016018 (20 January 2015). https://doi.org/10.1117/1.JBO.20.1.016018
Published: 20 January 2015
Keywords: near infrared, imaging systems, luminescence, light emitting diodes, visible radiation, goggles, optical sensors
