Upper gastrointestinal endoscopies are primarily performed to observe pathologies of the esophagus, stomach, and duodenum. However, when the physician pushes an endoscope into the esophagus or stomach, the organs behave like a balloon being gradually inflated. Consequently, their shapes and the depth of field of the images change continually, preventing thorough examination of the location of inflammation or erosion and thereby delaying treatment. In this study, a 2.9-mm image-capturing module and a convoluted mechanism were incorporated into a tube resembling a standard 10-mm upper gastrointestinal endoscope. The scale-invariant feature transform (SIFT) algorithm was adopted to perform disease feature extraction on a koala doll. Following feature extraction, the smoothly varying affine stitching (SVAS) method was employed to resolve stitching distortion problems. Subsequently, the real-time stitching software developed in this study was embedded in an upper gastrointestinal endoscope to obtain a panoramic view of stomach inflammation in the captured images. The results showed that the 2.9-mm image-capturing module can provide approximately 50 verified images in one spin cycle, a viewing angle of 120° can be attained, and less than 10% distortion can be achieved in each image. These methods therefore solve problems encountered when using a standard 10-mm upper gastrointestinal endoscope with a single camera, such as image distortion and partial inflammation displays. The results also showed that the SIFT algorithm provides the highest correct matching rate, and that the SVAS method can resolve the parallax problems caused by stitching together images of different flat surfaces.
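A key step when stitching with SIFT features is deciding which descriptor pairs are reliable matches. As a minimal sketch (not the authors' implementation), the following applies Lowe's ratio test to two descriptor arrays; the function name and the 0.75 ratio are illustrative assumptions.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Match SIFT-style descriptors with Lowe's ratio test.

    desc_a, desc_b : (N, D) and (M, D) float arrays of descriptors.
    Returns a list of (i, j) index pairs that pass the test.
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from descriptor d to every descriptor in desc_b.
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Accept only if the best match is clearly better than the runner-up.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

Matches that survive the ratio test can then feed the affine estimation used by a stitching method such as SVAS.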
A 2D detector array is commonly used in hyperspectral imagers to acquire images in the spatial and spectral dimensions simultaneously. After long-term operation, parts of the detector array gradually malfunction, and these defective CCD elements cause vertical stripes in the images. Replacing the whole detector because of a few defects, however, is not cost-effective. In this article, we propose a two-part algorithm for hyperspectral image restoration: one part detects defective CCD elements from their radiance deviation, and the other restores the image by inter-band radiance interpolation using Lagrange polynomials.
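The inter-band restoration step can be sketched as follows: the radiance of a defective pixel at a target band is estimated by evaluating the Lagrange interpolating polynomial through the same pixel position in healthy neighboring bands. The choice and number of neighboring bands are assumptions for illustration.

```python
import numpy as np

def lagrange_restore(bands, values, target_band):
    """Estimate radiance at `target_band` from known (band, radiance) samples
    using the Lagrange interpolating polynomial.

    bands  : band indices (or wavelengths) of healthy neighboring bands
    values : radiance of the same pixel position in those bands
    """
    bands = np.asarray(bands, dtype=float)
    values = np.asarray(values, dtype=float)
    x = float(target_band)
    total = 0.0
    for j in range(len(bands)):
        # Lagrange basis polynomial L_j evaluated at the target band.
        others = np.delete(bands, j)
        basis = np.prod((x - others) / (bands[j] - others))
        total += values[j] * basis
    return total
```

With n neighboring bands, the estimate is exact whenever the pixel's spectrum is a polynomial of degree at most n - 1 over those bands.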
The detection of defective CCD elements must be conducted periodically to keep the detector health status up to date. HOPE images with simulated defective CCD elements at various levels of performance decay are used for validation. We found that the detection accuracy for images with homogeneous ground features is higher than for those with non-homogeneous features, and that defective CCD elements with a performance decay of only 10% can still be identified precisely.
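The detection part can be illustrated with a minimal sketch, assuming a homogeneous scene: a detector column is flagged when its mean radiance deviates from the cross-track median by more than a relative threshold. The threshold value and function name are illustrative assumptions, not the authors' exact criterion.

```python
import numpy as np

def detect_defective_columns(image, rel_threshold=0.05):
    """Flag detector columns whose mean radiance deviates from the
    cross-track median by more than `rel_threshold` (relative).

    image : 2D array (along-track lines x cross-track detector columns)
    Returns the indices of suspected defective columns.
    """
    col_mean = image.mean(axis=0)           # per-column average radiance
    ref = np.median(col_mean)               # robust cross-track reference
    rel_dev = np.abs(col_mean - ref) / ref  # relative deviation per column
    return np.flatnonzero(rel_dev > rel_threshold)
```

On a uniform scene, a column with 10% performance decay exceeds a 5% threshold and is flagged; on non-homogeneous scenes the per-column statistics mix scene variation with detector decay, which is consistent with the lower accuracy reported above.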
The restoration accuracy of pixel radiance obtained with the proposed algorithm is presented for various spectral bands. We also perform image reconstruction by interpolating spatially neighboring pixels and compare the radiance deviation of the restored pixels between the two methods. The proposed algorithm can handle images taken by a hyperspectral imager with defective CCD elements that adjoin in both the spatial and spectral dimensions, whereas interpolation of neighboring pixels cannot. By applying the proposed algorithm to hyperspectral images, the imager can continue operating as if it were healthy, even though the detector contains a few defects.
Because of aircraft vibration, pixel discontinuity (blank pixels) occurs frequently in ortho-images when a top-down approach and nearest-neighbor resampling are used for pushbroom images. In this paper we propose a scheme to handle this pixel discontinuity, which is induced by variations in the pitch and heading attitude of the airplane. The deviation of the pixel locations must first be analyzed to check whether the proposed scheme is applicable. We use a linear CCD imager installed on a stabilizer, which filters out the high-frequency vibration of the airplane, for image acquisition. The scheme is suitable because the angular rates of pitch and heading are statistically both within ±0.7°/s and the deviation of the pixel locations is less than 1.0 pixel.
The proposed scheme includes the following steps: (1) Derive the pixel locations of the ortho-images using a top-down approach, then allocate each pixel value to its four neighboring grid cells by inverse bilinear interpolation according to their weighting factors in the ortho-image. (2) After completing the ortho-rectification of the full images, perform a dynamic-range adjustment on the ortho-images according to the maximum pixel values of the raw images and ortho-images. After applying the proposed scheme, we find that the pixel discontinuity is removed and the image quality is improved substantially. The difference in pixel value between the raw images and ortho-images is also presented for regions with low, middle, and high radiance to evaluate the proposed scheme quantitatively.
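Step (1) above can be sketched as follows: each raw pixel's value is distributed to the four ortho-grid cells surrounding its projected position, weighted by bilinear factors, with the weights accumulated separately so that cells receiving several contributions can be normalized afterwards. The function name and the separate weight buffer are illustrative assumptions.

```python
import numpy as np

def allocate_bilinear(value, x, y, acc, wsum):
    """Distribute one raw-pixel value to its 4 neighboring ortho-grid
    cells using bilinear weights (the inverse bilinear interpolation
    step); weight sums are kept so that cells hit by several raw
    pixels can be normalized as acc / wsum afterwards.

    value : radiance of the raw pixel
    x, y  : its fractional position in the ortho-image grid
    acc   : 2D array accumulating weighted pixel values
    wsum  : 2D array accumulating the corresponding weights
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    for dy, wy in ((0, 1.0 - fy), (1, fy)):
        for dx, wx in ((0, 1.0 - fx), (1, fx)):
            w = wx * wy
            acc[y0 + dy, x0 + dx] += w * value
            wsum[y0 + dy, x0 + dx] += w
```

Because every raw pixel spreads its value over four cells, no ortho-grid cell between projected pixel locations is left blank, which is how the scheme removes the discontinuity.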