KEYWORDS: 3D image processing, Cameras, 3D acquisition, Endoscopy, Endoscopes, Field programmable gate arrays, Stereoscopy, Structured light, 3D modeling, 3D image reconstruction
In this paper, we present a 3D surface imaging technique to re-engineer traditional endoscopy into a quantitatively capable instrument. In our design concept, we demonstrate that, by utilizing two fiber bundle channels and the structured-light 3D reconstruction principle, we can obtain 3D information on a GI tract surface in an almost real-time fashion. The two fiber bundles are used for pattern light projection and image capture. The implied significance of our design and experiment includes: (a) it is possible to convert a traditional 2D-video-based endoscope into a 3D endoscope through minimal modifications; and (b) the proposed 3D endoscope allows clinicians to obtain actual size information on any target of interest during procedures.
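As a rough illustration of the structured-light principle behind this design, the sketch below recovers depth from pattern correspondences using the standard triangulation relation for a rectified projector-camera pair; the focal length and baseline values are made-up placeholders, not parameters of the actual endoscope.

```python
import numpy as np

def triangulate_depth(disparity_px, focal_px, baseline_mm):
    """Depth from a structured-light correspondence via the stereo
    triangulation relation z = f * b / d (rectified geometry)."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_mm / disparity_px

# Toy numbers: 800 px focal length, 5 mm baseline between the two
# fiber-bundle channels (illustrative assumptions only).
depths = triangulate_depth([10.0, 20.0, 40.0], focal_px=800.0, baseline_mm=5.0)
print(depths)  # [400. 200. 100.] (mm)
```

In practice the disparity comes from decoding the projected pattern in the captured image; once the projector-camera pair is calibrated, every decoded pixel yields a metric depth, which is what enables actual-size measurements.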
Capsule endoscopy (CE) uses a miniature on-board camera in a pill to image the gastrointestinal (GI) tract. It has provided a non-invasive, non-ionizing way for gastroenterologists to diagnose GI tract diseases. However, CE has major drawbacks such as an ineffective forward-looking field of view (FOV), abundant data, and lengthy viewing and interpretation time, which significantly lower the chance of finding a GI disease through the video screening process. We present a concept that utilizes full spherical FOV imaging for easy visualization. Built on camera pose tracking and 3D algorithms under a spherical FOV, and delivered through immersive display or VR, the technology is shown to allow clinicians to visualize pathological structures of interest at their fingertips on a VR device. Initial test results on phantoms show that our design is feasible.
In this paper, we present a thermal-imaging-based app to augment traditional appearance-based wound growth monitoring. Accurate diagnosis and tracking of wound healing enable physicians to effectively assess, document, and individualize the treatment plan given to each wound patient. Currently, wounds are primarily examined by physicians through visual appearance and wound area. However, visual information alone cannot present a complete picture of a wound’s condition. In this paper, we use a smartphone-attached thermal imager and evaluate its effectiveness in augmenting visual-appearance-based wound diagnosis. Instead of only monitoring temperature changes on a wound, our app presents physicians with a comprehensive set of measurements including relative temperature, a wound healing thermal index, and wound blood flow. Through rat wound experiments and by monitoring the integrated thermal measurements over a three-week time frame, our app is able to reveal the underlying healing process through the blood flow. The implied significance of our app design and experiment includes: (a) it is possible to use a low-cost smartphone-attached thermal imager for added value in wound assessment, tracking, and treatment; and (b) the thermal mobile app can be used for remote wound healing assessment in a mobile-health-based solution.
In this paper, we present a novel and clinically valuable software platform for automatic ulcer detection in the gastrointestinal (GI) tract from Capsule Endoscopy (CE) videos. Typical CE videos run about 8 hours. They have to be reviewed manually by physicians to detect and locate diseases such as ulcers and bleeding. The process is time consuming, and the lengthy manual review makes missed findings likely. Working with our collaborators, we have focused on developing a software platform called GISentinel for fully automated GI tract ulcer detection and classification. The software includes three parts: frequency-based Log-Gabor filter extraction of regions of interest (ROI); a unique feature selection and validation method (e.g., illumination-invariant features, color-independent features, and symmetrical texture features); and cascade SVM classification for handling "ulcer vs. non-ulcer" cases. In our experiments, the software gave decent results: frame-wise, the ulcer detection rate is 69.65% (319/458); instance-wise, the ulcer detection rate is 82.35% (28/34), with a false alarm rate of 16.43% (34/207). This work is part of our innovative 2D/3D-based GI tract disease detection software platform. The final goal of this software is to intelligently find and classify major GI tract diseases, such as bleeding, ulcers, and polyps, from CE videos. This paper mainly describes the automatic ulcer detection functional module.
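The Log-Gabor ROI stage can be sketched as follows: a minimal single-scale radial log-Gabor filter applied in the frequency domain, with the center frequency and bandwidth chosen arbitrarily for illustration rather than taken from GISentinel.

```python
import numpy as np

def log_gabor_response(img, f0=0.1, sigma_ratio=0.55):
    """Radial log-Gabor filtering in the frequency domain; strong
    responses mark candidate regions of interest."""
    rows, cols = img.shape
    u = np.fft.fftfreq(cols)
    v = np.fft.fftfreq(rows)
    radius = np.sqrt(u[None, :] ** 2 + v[:, None] ** 2)
    radius[0, 0] = 1.0                 # avoid log(0) at the DC term
    lg = np.exp(-np.log(radius / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
    lg[0, 0] = 0.0                     # log-Gabor has zero DC response
    return np.abs(np.fft.ifft2(np.fft.fft2(img) * lg))

img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0                # toy bright patch as a stand-in lesion
resp = log_gabor_response(img)
mask = resp > 0.5 * resp.max()         # crude ROI threshold, for illustration
```

The zero-DC, log-frequency shape is what makes log-Gabor filters useful here: they respond to local texture and edge structure while ignoring slowly varying illumination, which suits the highly variable lighting inside the GI tract.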
In this paper, we present a novel and clinically valuable software platform for automatic bleeding detection in the gastrointestinal (GI) tract from Capsule Endoscopy (CE) videos. Typical CE videos for the GI tract run about 8 hours and are manually reviewed by physicians to locate diseases such as bleeding and polyps. As a result, the process is time consuming and prone to missed findings. While researchers have made efforts to automate this process, no clinically acceptable software is available on the marketplace today. Working with our collaborators, we have developed a clinically viable software platform called GISentinel for fully automated GI tract bleeding detection and classification. Major functional modules of the software include: an innovative graph-based NCut segmentation algorithm; a unique feature selection and validation method (e.g., illumination-invariant features, color-independent features, and symmetrical texture features); and cascade SVM classification for handling various GI tract scenes (e.g., normal tissue, food particles, bubbles, fluid, and specular reflection). Initial evaluation results have shown a zero bleeding-instance miss rate and a 4.03% false alarm rate. This work is part of our innovative 2D/3D-based GI tract disease detection software platform. While the overall software framework is designed for intelligent finding and classification of major GI tract diseases such as bleeding, ulcers, and polyps from CE videos, this paper focuses on the automatic bleeding detection functional module.
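A cascade SVM of the kind named above can be sketched with scikit-learn as a stand-in; the synthetic features, class weights, and two-stage split below are illustrative assumptions, not GISentinel's actual configuration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins for frame features: negatives near 0, positives near 2.
X = np.vstack([rng.normal(0.0, 0.5, (200, 4)),
               rng.normal(2.0, 0.5, (200, 4))])
y = np.array([0] * 200 + [1] * 200)

# Stage 1: fast linear SVM, weighted to keep nearly all positives (high recall).
stage1 = SVC(kernel="linear", class_weight={1: 5.0}).fit(X, y)
# Stage 2: slower RBF SVM that refines only the frames stage 1 lets through.
stage2 = SVC(kernel="rbf").fit(X, y)

def cascade_predict(x):
    """Flag a frame as positive only if both stages agree."""
    x = np.atleast_2d(x)
    out = np.zeros(len(x), dtype=int)
    passed = stage1.predict(x) == 1
    if passed.any():
        out[passed] = stage2.predict(x[passed])
    return out
```

The design rationale of such a cascade is throughput: the cheap first stage discards the bulk of normal-tissue frames in an 8-hour video, so the expensive classifier only runs on a small candidate set.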
In this paper, we present a novel real-time capsule endoscopy (CE) video visualization concept based on
panoramic imaging. Typical CE videos run about 8 hours and are manually reviewed by physicians to locate diseases
such as bleedings and polyps. To date, there is no commercially available tool capable of providing stabilized and
processed CE video that is easy to analyze in real time, which places a heavy burden on physicians’ disease-finding efforts. In
fact, since the CE camera sensor has a limited forward-looking view and a low image frame rate (typically 2 frames per
second), and captures very close-range images of the GI tract surface, it is no surprise that traditional visualization
methods based on tracking and registration often fail. This paper presents a novel concept for real-time CE video
stabilization and display. Instead of working directly on traditional forward-looking FOV (field of view) images, we
work on panoramic images to bypass many of the problems facing traditional imaging modalities. We present methods for
panoramic image generation based on optical lens principles that enable real-time data visualization, and discuss
non-rigid panoramic image registration methods.
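One common way to produce such panoramic strips from an omnidirectional lens image is a polar-to-rectangular unwarp; the sketch below is a generic nearest-neighbor version with made-up annulus radii, not the paper's lens-specific method.

```python
import numpy as np

def unwarp_panorama(img, r_in, r_out, n_theta=360):
    """Unwrap an annular (omnidirectional) image into a rectangular
    panoramic strip by sampling along radial lines (nearest neighbor)."""
    h, w = img.shape[:2]
    cy, cx = h / 2.0, w / 2.0
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    radii = np.arange(r_in, r_out)
    ys = (cy + radii[:, None] * np.sin(thetas)[None, :]).astype(int)
    xs = (cx + radii[:, None] * np.cos(thetas)[None, :]).astype(int)
    ys = np.clip(ys, 0, h - 1)
    xs = np.clip(xs, 0, w - 1)
    return img[ys, xs]                 # shape: (r_out - r_in, n_theta)

pano = unwarp_panorama(np.random.rand(128, 128), r_in=20, r_out=60)
```

Because the unwarp is a fixed lookup once the lens geometry is known, it can run at video rate, which is what makes the real-time display claim plausible.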
In this paper, we report a novel three-dimensional (3D) wound imaging system (hardware and software) under
development at Technest Inc. System design is aimed to perform accurate 3D measurement and modeling of a wound
and track its healing status over time. Accurate measurement and tracking of wound healing enables physicians to assess,
document, improve, and individualize the treatment plan given to each wound patient. In current wound care practices,
physicians often visually inspect or roughly measure the wound to evaluate the healing status. This is not an optimal
practice since human vision lacks precision and consistency. In addition, quantifying slow or subtle changes through
perception is very difficult. As a result, an instrument that quantifies both skin color and geometric shape variations
would be particularly useful in helping clinicians to assess healing status and judge the effect of hyperemia, hematoma,
local inflammation, secondary infection, and tissue necrosis. Once fully developed, our 3D imaging system will have
several unique advantages over traditional methods for monitoring wound care: (a) non-contact measurement; (b) fast
and easy to use; (c) up to 50-micron measurement accuracy; (d) 2D/3D quantitative measurements; (e) a handheld
device; and (f) reasonable cost (< $1,000).
KEYWORDS: 3D image processing, Imaging systems, 3D modeling, Stereoscopic cameras, Cameras, Optical filters, In vivo imaging, Diffuse optical tomography, 3D acquisition, Luminescence
A crucial parameter in Diffuse Optical Tomography (DOT) is the construction of an accurate forward model, which
greatly depends on tissue boundary. Since photon propagation is a three-dimensional volumetric problem, extraction and
subsequent modeling of three-dimensional boundaries is essential. Original experimental demonstration of the feasibility
of DOT to reconstruct absorbers, scatterers and fluorochromes used phantoms or tissues confined appropriately to
conform to easily modeled geometries such as a slab or a cylinder. In later years several methods have been developed
to model photon propagation through diffuse media with complex boundaries using numerical solutions of the diffusion
or transport equation (finite elements or differences) or more recently analytical methods based on the tangent-plane
method. While optical examinations performed simultaneously with anatomical imaging modalities such as MRI
provide well-defined boundaries, very limited progress has been made so far in extracting full-field (360-degree)
boundaries for in-vivo three-dimensional stand-alone DOT imaging. In this paper, we present a desktop multi-spectral
in-vivo 3D DOT system for small animal imaging. This system is augmented with Technest's full-field 3D cameras. The
built system has the capability of acquiring 3D object surface profiles in real time and registering 3D boundary with
diffuse tomography. Extensive experiments are performed on phantoms and small animals by our collaborators at the
Center for Molecular Imaging Research (CMIR) at Massachusetts General Hospital (MGH) and Harvard Medical School.
The data show successful DOT reconstructions with improved accuracy.
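For reference, the forward model discussed above is commonly the continuous-wave diffusion approximation to the radiative transport equation (a standard textbook form, not reproduced from this paper):

```latex
-\nabla \cdot \bigl( D(\mathbf{r}) \, \nabla \Phi(\mathbf{r}) \bigr)
  + \mu_a(\mathbf{r}) \, \Phi(\mathbf{r}) = S(\mathbf{r}),
\qquad
D = \frac{1}{3\,(\mu_a + \mu_s')},
```

where \(\Phi\) is the photon fluence, \(S\) the source term, \(\mu_a\) the absorption coefficient, and \(\mu_s'\) the reduced scattering coefficient. The tissue surface enters only through the boundary condition imposed on \(\Phi\), which is why accurate full-field 3D boundary extraction directly improves the reconstruction.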
We have successfully developed an innovative, miniaturized, and lightweight PTZ UCAV imager called
OmniBird for unmanned air vehicle takeoff and landing operations. OmniBird was developed through SBIR funding
from NAVAIR and is designed to fit within 8 cubic inches. The designed zoom capability allows it to acquire focused images of targets ranging
from 10 to 250 feet. The innovative panning mechanism also allows the system to have a field of view of +/- 100
degrees. Initial test results show that the integrated optics, camera sensor, and mechanics solution allow the OmniBird to
stay optically aligned and shock-proof under harsh environments.
Through SBIR funding from NAVAIR, we have successfully developed an innovative, miniaturized, and
lightweight PTZ UCAV imager called OmniBird for UCAV taxiing. The proposed OmniBird will be able to fit in a
small space. The designed zoom capability allows it to acquire focused images for targets ranging from 10 to 250 feet.
The innovative panning mechanism also allows the system to have a field of view of +/- 100 degrees within the provided
limited spacing (6 cubic inches). The integrated optics, camera sensor, and mechanics solution will allow the OmniBird
to stay optically aligned and shock-proof under harsh environments.
In this paper, we introduce Genex's innovative multiple-target tracking system (i.e., the SmartMTI algorithm and our miniature DSP/FPGA data processing hardware). SmartMTI is designed for intelligent surveillance on moving platforms such as UAVs (unmanned aerial vehicles), UGVs (unmanned ground vehicles), and manned moving platforms. It uses our state-machine MTI framework to seamlessly integrate our state-of-the-art motion detection and target tracking methods to create multiple-target following and inter-object 'awareness', thus allowing the system to robustly handle difficult situations such as targets under merging, occlusion, and disappearing conditions. Preliminary tests show that, once implemented on our miniaturized DSP/FPGA hardware, our system can detect and track multiple targets in real time with an extremely low miss-detection rate. The SmartMTI design effort leverages Genex's expertise and experience in real-time surveillance system design for the Army AMCOM SCORPION or "Glide Bomb" program, the NUWC CERBERUS program, the BMDO missile seeker program, the Air Force UAV auto-navigation and surveillance program, and the DARPA Future Combat System (FCS) program.
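The state-machine idea behind such an MTI framework can be sketched generically as a track life cycle; the states, gating distance, and greedy nearest-neighbor association below are illustrative assumptions, not the SmartMTI algorithm itself.

```python
import numpy as np

# Track life-cycle states, loosely mirroring a state-machine MTI design.
ACTIVE, OCCLUDED, LOST = "active", "occluded", "lost"

class Track:
    def __init__(self, tid, pos):
        self.tid = tid
        self.pos = np.asarray(pos, dtype=float)
        self.state = ACTIVE
        self.misses = 0

def associate(tracks, detections, gate=5.0, max_misses=3):
    """Greedy nearest-neighbor association; unmatched tracks decay
    ACTIVE -> OCCLUDED -> LOST as consecutive misses accumulate."""
    dets = [np.asarray(d, dtype=float) for d in detections]
    used = set()
    for t in tracks:
        if t.state == LOST:
            continue
        dists = [np.linalg.norm(t.pos - d) if i not in used else np.inf
                 for i, d in enumerate(dets)]
        i = int(np.argmin(dists)) if dists else -1
        if i >= 0 and dists[i] < gate:
            t.pos, t.state, t.misses = dets[i], ACTIVE, 0
            used.add(i)
        else:
            t.misses += 1
            t.state = OCCLUDED if t.misses < max_misses else LOST
    return tracks
```

Keeping an explicit OCCLUDED state, rather than dropping a track at the first miss, is what lets a tracker ride out merges and brief disappearances of the kind the abstract describes.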
Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition purposes. Due to the lack of low-cost, robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems of 2D while also addressing the limitations of other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color-encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. The described method provides much better data coverage and accuracy in areas with sharp features or fine details (such as the nose and eyes).
Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we have varied the lighting conditions and angle of image acquisition in the "field." These tests have shown that the matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.