Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819401 (2011) https://doi.org/10.1117/12.905273
This PDF file contains the front matter associated with SPIE Proceedings Volume 8194, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819402 (2011) https://doi.org/10.1117/12.900558
We investigated a negative feedback method for adding functionality to a CMOS image sensor. The sensor uses this method to set an arbitrary intermediate voltage onto the photodiode capacitance while the pixel circuit is in operation. The negative feedback reset functions as a noise cancellation technique and can obtain intermediate image data during charge accumulation. As an application, dynamic range compression is achieved by individually selecting pixels and setting an intermediate voltage, or performing quasi-holding, for each pixel. Additionally, we achieved duplicated interlaced processing and were able to output frame-difference images without frame buffers. Experimental results obtained with a chip fabricated in a 0.25-μm CMOS process demonstrate that dynamic range compression and intra-frame motion detection are effective applications of negative feedback resetting.
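The buffer-free frame differencing described above happens on-chip; as a software analogue (a NumPy sketch, not the authors' circuit), the motion-detection output amounts to thresholding the per-pixel change between two readouts:

```python
import numpy as np

def frame_difference(prev_frame, curr_frame, threshold=10):
    """Flag pixels whose value changed by more than `threshold`
    between two readouts -- a software stand-in for the sensor's
    frame-difference output."""
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    return diff > threshold

# A static scene in which a small bright object appears
prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[2:4, 2:4] = 200          # object present only in the current frame
motion = frame_difference(prev, curr)
```

Only the four pixels covered by the object are flagged; the static background is suppressed.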
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819403 (2011) https://doi.org/10.1117/12.901070
We present a review of the solid-state photon counters developed in our laboratories and their applications in various space-related projects. These photon counters are avalanche photodiode structures operated in so-called Geiger mode: they are pulse-biased above their breakdown voltage. In this state, the first charge carrier generated by a photon or a noise event triggers an avalanche multiplication of carriers and hence a macroscopic current at the output contacts. An external circuit terminates the avalanche and re-establishes the bias for the next detection. We have fabricated these photon counters from common semiconductor materials: silicon, germanium, SiGe alloys, GaAs, GaP, and GaAsP. The most attractive for space applications is the silicon structure we have prepared using the K14 technology. It has several interesting features compared with similar structures prepared by other groups: it can detect both single- and multiple-photon signals while maintaining picosecond timing resolution and detection delay stability. The detector structures are highly radiation tolerant, which makes them extremely attractive for space applications. For satellite laser ranging we provide a detector package with a quantum efficiency reaching 40% at a wavelength of 532 nm; its timing resolution ranges from 20 ps down to 5 ps for detected signal strengths of one to 3000 photons per pulse. The detection delay is stable to within 10 ps over the entire dynamic range and for background photon fluxes up to 30 Mc/s. The implementation of these detectors in the satellite laser ranging network (along with appropriate laser and timing technologies) has resulted in millimeter-level ranging precision to space objects. For space missions to planets we have prepared photon-counting detectors for operation in laser altimeters and atmospheric lidar. Recently, the Laser Time Transfer experiments on board the Chinese navigation satellite Compass and the French satellite Jason-2 have also relied on our photon counters. Plans for future space applications will be presented.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819404 (2011) https://doi.org/10.1117/12.900970
Based on photon noise fluctuation theory and linear filter theory, a two-dimensional performance model for the human eye is established in this paper, termed "the photon detector and linear filter synthesis performance model" or the "wave-particle duality performance model". A two-dimensional threshold resolution angle and a two-dimensional universal apparent-distance detection equation for the human eye are derived over a wide range of luminance levels. The relationship between threshold detection theory for the human eye and the improved Johnson criteria is established, and new numbers of resolvable cycles across the target and background for detection, recognition, and identification are put forward. All of these are consistent with visual theory and the threshold characteristics of the human eye, as well as with much actually measured data.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819405 (2011) https://doi.org/10.1117/12.900122
A nonuniformity correction and radiometric calibration algorithm for infrared focal plane arrays (IRFPAs) is presented, combining two-point correction along the rim of a U-shaped blackbody. IRFPA formats keep growing; however, due to technical limitations and material defects in production, drift of the IRFPA response during operation is unavoidable. This drift leads to nonuniformity in thermal imaging systems, which has become an important factor limiting the efficiency of thermal imaging equipment in practical use. To address the problems of traditional radiometric calibration and correction methods, we propose a dynamic infrared calibration and correction technique using a U-shaped blackbody. With the blackbody at low and high temperatures, two-point correction is first applied to the perimeter detectors. Then, based on scene information and the shift between adjacent frames, a special algebraic algorithm transports the correction parameters from the perimeter detectors to the uncorrected interior ones. In this way, the correction parameters for the whole field of view (FOV) are calculated. Because the temperature of the U-shaped blackbody is controllable, dynamic infrared calibration can be performed after nonuniformity correction to compensate for drift of the original calibration table. A U-shaped blackbody was designed and an experimental platform built to evaluate the algorithm. The U-shaped perimeter blackbody is designed to scale out periodically so as to continuously update the correction parameters. The method proves able to achieve accurate two-point correction without covering the central FOV.
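The two-point correction applied to the perimeter detectors is the classic per-pixel gain/offset solve from two uniform blackbody references; a minimal sketch (variable names are illustrative, not the paper's notation):

```python
import numpy as np

def two_point_nuc(raw, resp_low, resp_high, t_low, t_high):
    """Two-point nonuniformity correction: per-pixel gain and offset
    are solved from the array's responses to two uniform blackbody
    references at temperatures t_low and t_high."""
    gain = (t_high - t_low) / (resp_high - resp_low)   # per-pixel gain
    offset = t_low - gain * resp_low                   # per-pixel offset
    return gain * raw + offset

# Simulate a 4x4 FPA with random per-pixel gain/offset nonuniformity
rng = np.random.default_rng(0)
g = rng.uniform(0.8, 1.2, (4, 4))
o = rng.uniform(-5.0, 5.0, (4, 4))
t_low, t_high = 20.0, 40.0
resp_low, resp_high = g * t_low + o, g * t_high + o

scene = 31.7   # true uniform scene temperature
corrected = two_point_nuc(g * scene + o, resp_low, resp_high, t_low, t_high)
```

After correction the simulated array reports the true scene value uniformly; the paper's contribution is propagating these parameters inward from the rim without covering the central FOV.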
Xiaofeng Li, Fan Yu, Kaijun Song, Yaohong Qian, Ji Tan, Wenbo Yan
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819406 (2011) https://doi.org/10.1117/12.899743
This paper describes a method of measuring the fluorescence of Na2KSb antimony-alkali compound films under glass and quartz windows, excited by lasers at wavelengths of 514.5 nm and 785 nm. By comparing the fluorescence intensity of the Na2KSb film under the glass and quartz windows when excited at 514.5 nm, it is found that excitation at 785 nm is more suitable for analyzing the Na2KSb film under both window types. Under excitation at 514.5 nm and 785 nm, the peak wavelengths of the fluorescence spectra of the Na2KSb film are measured as 898 nm and 872 nm, with corresponding transition energies of 1.38 eV and 1.42 eV, respectively. Analysis of the fluorescence spectra proves that photoluminescence is an effective tool for studying Na2KSb antimony-alkali compound films. Moreover, further improving the control precision of the Na2KSb composition, the synthesis process, and the material structure will further increase the sensitivity of Na2KSb multi-alkali photocathodes and consequently improve the performance of devices.
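The quoted transition energies follow directly from the Planck relation E = hc/λ; a one-line check:

```python
HC_EV_NM = 1239.84  # h*c in eV*nm (approximate)

def photon_energy_ev(wavelength_nm):
    """Convert a fluorescence peak wavelength (nm) to the
    corresponding transition energy (eV)."""
    return HC_EV_NM / wavelength_nm
```

A peak at 898 nm corresponds to about 1.38 eV and a peak at 872 nm to about 1.42 eV.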
Jingsheng Pan, Jingwen Lv, Tao Zheng, Yanhong Li, Wei Xu
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819407 (2011) https://doi.org/10.1117/12.894613
Extending the microchannel plate (MCP) to a bulk-conductive substrate is considered an effective approach to eliminating the ion feedback problem, and a vanadium iron lead phosphate glass had previously been identified that can be tailored to an appropriate volume conductivity suitable for MCP fabrication. In this paper, a newly reformulated vanadium iron lead alumina phosphate glass was used to fabricate a bulk-conductive glass MCP. The fabrication process is the same as that of conventional lead silicate glass MCPs, but without the hydrogen firing treatment. Although some experimental samples of 25 mm diameter, full-active-area MCPs with 10 μm pore diameter and 40:1 to 60:1 length-to-diameter ratio were successfully fabricated, and the samples demonstrated bulk conductivity and a certain secondary electron emission capability, the gain is very low and, in particular, the mechanical strength is insufficient. The physical and chemical properties of this vanadium iron lead alumina phosphate glass, its performance and behavior during the bulk-conductive glass MCP fabrication process, and the test results of the experimental samples are described in detail.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819408 (2011) https://doi.org/10.1117/12.895275
Image processing technology is used to process transmission electron microscope (TEM) images to improve their definition, emphasize the characteristics of crystal materials, and obtain useful crystal structure information. The implementation language is Matlab 7.0. To improve the clarity of the TEM images, different methods are used according to the characteristics of each picture, including contrast-adaptive histogram equalization and noise filtering. Fourier transformation is used to analyze the structure of nanocrystal materials. Edge detection is used to enhance granular features. An intensity-distribution detection method is used to distinguish nanocrystal tubes from nanocrystal granules; it can also be used to analyze the dispersion and homogeneity of the nanocrystal granules. To measure the dimensions of the nanocrystals more precisely, we locate points with large changes in grey level using the Matlab function "edge".
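The final step, locating points with large grey-level changes, is gradient-magnitude thresholding; a NumPy sketch of that idea (not the authors' Matlab code, and a plain gradient rather than a specific `edge` method):

```python
import numpy as np

def gradient_edges(img, threshold):
    """Mark pixels where the grey-level gradient magnitude exceeds
    `threshold` -- the basic operation behind edge-based sizing of
    nanocrystal particles."""
    gy, gx = np.gradient(img.astype(float))
    magnitude = np.sqrt(gx**2 + gy**2)
    return magnitude > threshold

# A synthetic "particle": bright square on a dark background
img = np.zeros((16, 16))
img[5:11, 5:11] = 255.0
edges = gradient_edges(img, threshold=50.0)
```

Only the square's boundary is flagged; its flat interior and the background, where the gradient is zero, are not.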
Liangku Wang, Chengjin Li, Xunjie Zhao, Xiaoli Liu
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819409 (2011) https://doi.org/10.1117/12.895522
The purpose of image fusion is to obtain, from multiple images, a single image that reflects the important information of all the original images. The contourlet transform not only has the multiresolution, locality, and critical-sampling characteristics that wavelets have, but also the multiple decomposition directions and anisotropy that wavelets lack. Energy is a statistical parameter describing texture features, so we combine a maximum-energy rule with the contourlet transform for image fusion. Entropy expresses the average amount of information; the standard deviation reflects the degree of dispersion of the image; and the average gradient reflects the clarity of the image, the contrast of small details, and the texture features. Compared with the wavelet transform, the Laplacian pyramid, weighted averaging, and the traditional contourlet transform, as evaluated by entropy, standard deviation, and average gradient, the experimental results of this algorithm for fusing infrared and visible images were better than those of the other algorithms.
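Two of the evaluation metrics named above have simple standard definitions; a sketch of how they might be computed (generic textbook forms, not the authors' code):

```python
import numpy as np

def entropy(img):
    """Shannon entropy (bits) of an 8-bit image's grey-level histogram:
    the 'average amount of information' metric."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    """Mean gradient magnitude, a common sharpness/clarity measure."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.sqrt((gx**2 + gy**2) / 2.0)))

flat = np.full((4, 64), 128, dtype=np.uint8)             # no information
ramp = np.tile(np.arange(256, dtype=np.uint8), (4, 1))   # all grey levels
```

A constant image scores zero on both metrics, while an image using all 256 grey levels equally reaches the maximum entropy of 8 bits; higher values of either metric indicate a more informative or sharper fused result.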
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940A (2011) https://doi.org/10.1117/12.895528
A novel motion-blur image restoration algorithm based on half-blind PSF estimation with the Hough transform is introduced, built on a full analysis of the TDICCD camera principle and motivated by the fact that using vertical uniform linear motion as the initial PSF value, as the IBD algorithm does, leads to restoration distortion. First, a mathematical model of image degradation is established using prior information from multi-frame images, and the two parameters that have a crucial influence on PSF estimation (motion blur length and angle) are set accordingly. Finally, the restored image is obtained through multiple iterations in the Fourier domain starting from the initial PSF estimate obtained by the above method. Experimental results show that the proposed algorithm not only effectively solves the image distortion caused by relative motion between the TDICCD camera and moving objects, but also clearly restores the detailed characteristics of the original image.
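The two estimated parameters, blur length and angle, fully determine a linear-motion PSF; a standard construction of such a kernel (an illustrative sketch, not the authors' estimator, and valid for lengths smaller than the kernel size):

```python
import numpy as np

def motion_psf(length, angle_deg, size=15):
    """Build a normalized linear-motion PSF from the two parameters
    the paper estimates: blur length (pixels) and blur angle."""
    psf = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    # Rasterize a centered line segment of the given length and angle
    for t in np.linspace(-(length - 1) / 2, (length - 1) / 2, 4 * length):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c - t * np.sin(theta)))
        psf[y, x] = 1.0
    return psf / psf.sum()

psf = motion_psf(length=7, angle_deg=0)   # horizontal 7-pixel blur
```

In a half-blind scheme this kernel would serve as the initial PSF value that the Fourier-domain iterations then refine.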
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940C (2011) https://doi.org/10.1117/12.896294
The traditional videometric approach cannot measure the pose deformation of objects in a large viewing field, or of non-intervisible objects in large structures; however, pose-relay videometrics using camera series or camera networks can overcome these difficulties. It is usual practice to fuse the pose data using the constraint conditions abundant in the camera network in order to improve measurement precision. This article first briefly introduces the principle underlying camera-network videometrics and analyzes its constraint conditions in light of graph theory; it then proposes and tests an adjustment-based data fusion method for pose-relay videometrics using a camera network; finally, it presents a numerical simulation of the proposed method. The results show that the method can effectively suppress noise and improve measurement precision because it takes full advantage of the constraint conditions intrinsic to the camera network.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940D (2011) https://doi.org/10.1117/12.896734
This paper studies specific applications of the rolling shutter of CMOS image sensors. First, it introduces the principles and characteristics of the global shutter and rolling shutter of CMOS imagers, and analyzes the impact of the rolling shutter on the measurement precision of imaging systems based on CMOS imagers; an imaging experiment is conducted to verify the analysis. Then, an original method for computing the instantaneous 3D pose and velocity of fast-moving objects from a single view is presented. It exploits the image deformations induced by the rolling shutter in CMOS image sensors. Finally, a general perspective projection model of a moving 3D point is presented, and a solution to the pose and velocity recovery problem is described. The results indicate that aberrations do appear, and that their degree is closely related to CMOS imager parameters such as integration time. Experiments minimizing the error in the pose and speed parameters of moving objects yield a calculation error under 2.5 percent. Experimental results with real data confirm the relevance of the approach. The resulting algorithm makes it possible to turn a low-cost, low-power CMOS camera into an original velocity sensor.
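The deformation being exploited comes from the fact that each row of a rolling-shutter sensor is exposed at a different instant; a minimal timing model of that row skew (illustrative names and numbers, not the paper's model):

```python
def row_exposure_start(row, frame_start_s, line_time_s):
    """In a rolling-shutter CMOS sensor, row r begins exposing
    line_time_s seconds after row r-1; a fast-moving object is
    therefore sampled at a different time in every row."""
    return frame_start_s + row * line_time_s

# With an assumed 20 us line time, row 480 is sampled 9.6 ms after row 0
skew = row_exposure_start(480, 0.0, 20e-6) - row_exposure_start(0, 0.0, 20e-6)
```

It is this millisecond-scale intra-frame skew, combined with the perspective projection model, that makes velocity observable from a single image.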
Limei Song, Hongwei An, Xiaoxiao Dong, Chunbo Zhang
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940E (2011) https://doi.org/10.1117/12.896973
This paper uses on-line three-dimensional identification to solve some problems that are difficult for two-dimensional defect identification. Different defects have different 3D structural features, so defects can be identified and classified from 3D inspection data. Compared with fabric defect detection based on two-dimensional images, 3D identification can better exclude cloth wrinkles and flying thick silk floss, so it identifies fabric defects with high accuracy and reliability.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940F (2011) https://doi.org/10.1117/12.897331
A defogging method is introduced for outdoor color images whose contrast is degraded by thin fog. First, a wavelet transform decomposes the image; the low-frequency component is processed with a simplified atmospheric scattering model and unsharp-masking high-pass filtering, while a nonlinear transform is applied to the high-frequency components before image reconstruction. Finally, a single-scale retinex algorithm with color restoration is used to improve brightness and strengthen color rendition. Experimental results show that the algorithm is effective and practical, and that the resulting image quality is good.
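The single-scale retinex step has a standard textbook form: subtract the log of a smoothed illumination estimate from the log image. A sketch of that idea (using a normalized box blur in place of the usual Gaussian surround, and not the authors' implementation):

```python
import numpy as np

def _smooth(img, k):
    """Normalized separable box blur: the illumination estimate.
    Normalizing by the local window size keeps constants constant."""
    kernel = np.ones(k)
    def blur_axis(a):
        num = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, a)
        den = np.convolve(np.ones(a.shape[1]), kernel, "same")
        return num / den
    img = blur_axis(img)
    return blur_axis(img.T).T

def single_scale_retinex(img, k=15):
    """Single-scale retinex: log(image) - log(smoothed illumination)."""
    img = img.astype(float) + 1.0     # avoid log(0)
    return np.log(img) - np.log(_smooth(img, k))

foggy = np.full((32, 32), 100.0)      # perfectly flat "foggy" patch
out = single_scale_retinex(foggy)
```

On a flat region the output is zero everywhere: retinex removes the slowly varying illumination (the fog veil) and keeps only local reflectance detail.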
Qingduo Duanmu, Guozheng Wang, Ye Li, Yang Wang, Hongchang Cheng, Xulei Qin, Zhenhua Jiang, Delong Jiang
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940G (2011) https://doi.org/10.1117/12.897420
Silicon microchannel arrays with very high aspect ratios were prepared by a photo-assisted electrochemical etching process. The mechanism of silicon anisotropic etching, the process parameters, the inducing pit arrays, and the channel morphology were investigated, and the etching-current-density condition for steady microchannel growth was discussed. A continuous SiO2 thin-film dynode was fabricated by an LPCVD process. The insulation, conduction, and electron-emission layers of the dynodes were studied and prepared. We obtained silicon microchannel plate samples with a plate diameter of 25 mm, a channel side size of 4-6 μm, a channel spacing of 1-2 μm, an aspect ratio of more than 40, a 7° channel bias angle, and an electron gain of 165 at a working voltage of 680 V. The experimental study indicates that the silicon microchannel plate process described in this paper is feasible.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940H (2011) https://doi.org/10.1117/12.897512
A system for short-range millimetre-wave (MMW) active imaging was developed, comprising a transceiver antenna, a scanning system, a transceiver front end, and signal processing. Targets within a few meters, or even a few centimeters, can be imaged. The overall structure of the imaging system and the imaging method were studied. The short-range scattering imaging formula was derived from a spectral-distribution-shift point of view, which simplifies the method. A phase compensation factor was introduced to improve the imaging resolution. The relationship between the sampling frequency and the scanning speed was analyzed to optimize the system parameters, improving image quality and system efficiency.
Limei Song, Chunbo Zhang, Yiying Wei, Xiaoxiao Dong, Hongwei An
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940I (2011) https://doi.org/10.1117/12.898606
Coded flags are designed to address the image matching problem in close-range photogrammetry. However, traditional designs are complex, computationally expensive, and poorly identifiable; moreover, they cannot meet the requirements of measuring large areas, large objects, and complex objects. A new coded-flag method based on coordinate quadrants for vision measurement is designed in this paper to meet the urgent need for accuracy and efficiency. First, the coded flags are rectangular, with a black background and white marks: three white circles placed at fixed positions, and at least one but no more than four white arcs intercepted on the same ring. The largest of the three white circles is at the center of the pattern; the other two are the same size, one near the center circle and the other a little farther away, and the two lines joining each smaller circle's center to the center of the central circle are perpendicular to each other. A Cartesian coordinate system is then set up from the positions of the solid circles in the pattern, and encoding proceeds from the first quadrant in clockwise order. Finally, the coordinate-quadrant coded flag is implemented according to this algorithm. Compared with traditional flags, the proposed method makes information extraction simple, easy, and fast. It also offers various advantages, such as robustness to noise and other interference, low-intensity work, accurate flag identification, and precise positioning.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940J (2011) https://doi.org/10.1117/12.898741
To fully characterize the performance of a transmission-mode photocathode module, a GaAs photocathode with the structure Glass/Si3N4/Ga1-xAlxAs/GaAs was prepared, and its reflectance and transmittance spectra were measured with a spectrophotometer. The optical constants of the GaAs active layer and the Ga1-xAlxAs window layer are discussed using a piecewise polynomial fitting method. On the basis of this analysis of the optical constants, the theoretical reflectance, transmittance, and absorptivity of the cathode module are calculated and revised with the aid of the matrix formulation of thin-film optics. The thickness of each layer in the module is obtained by fitting the reflectance and transmittance curves simultaneously. The results indicate that the thicknesses of the three thin films other than the glass are 110.14 μm, 1007.20 μm, and 1480.81 μm, respectively, with a relative curve error of less than 5%; the error of the total module thickness is also controlled within 5%.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940K (2011) https://doi.org/10.1117/12.898751
The gray-level distribution and contrast of optical satellite remote sensing imagery of the same kind of ground surface vary considerably between acquisitions; they depend not only on the satellite's observation geometry and the sun's incidence direction, but also on the structural and optical properties of the surface. The objectives of this research are therefore to analyze the different BRDF characteristics of soil, vegetation, water, and urban surfaces, and their effects on satellite image quality, using the 6S radiative transfer model. Furthermore, the causes of CCD blooming and spilling due to ground reflectance are discussed using QUICKBIRD image data and corresponding ground image data. General conclusions on the BRDF effects on remote sensing imagery are proposed.
Shao-hua Yang, Ming-an Guo, Bin-kang Li, Jing-tao Xia, Qunshu Wang
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940L (2011) https://doi.org/10.1117/12.898908
It is hard for high-resolution charge-coupled device (CCD) cameras to reach frame rates of hundreds of frames per second, because the pixel charges must be read out serially, one by one, which takes considerable time. Multiple-port CCD technology is an efficient way to realize high-frame-rate, high-resolution solid-state imaging systems: the pixel charge is read out through several ports in parallel, which shortens the readout time. However, the video processing circuit of a multiple-port CCD is difficult to design, and real-time high-speed image data acquisition is also a knotty problem. A 16-port high-frame-rate CCD video processing circuit based on a Complex Programmable Logic Device (CPLD) and the VSP5010 has been developed around a specialized back-illuminated, 512 x 512 pixel, 400 fps (frames per second) frame-transfer CCD sensor from Sarnoff Ltd. The CPLD produces high-precision sampling clocks and timing, and accurate CCD video voltage sampling is achieved with Correlated Double Sampling (CDS). Eight VSP5010 chips with built-in CDS sample and digitize the analog CCD signals into 12-bit digital image data, so the 16 analog CCD outputs are digitized into 192-bit-wide, 6.67 MHz parallel image data. The CPLD then uses Time Division Multiplexing (TDM) to encode the 192-bit-wide data into two 640 MHz serial streams, which are transmitted to a remote data acquisition module over two fibers. The acquisition module decodes the serial streams back into the original image data and stores them in a frame cache; system software then reads the data from the frame cache over USB 2.0 and stores them on a hard disk. The 12-bit-per-pixel digital image data were collected and displayed with the system software. The results show that the 16-port, 300 fps CCD output signals can be digitized and transmitted by the video processing circuit, and that remote data acquisition has been realized.
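As a back-of-the-envelope check, the data-rate figures quoted in the abstract are mutually consistent; the short script below recomputes them from the sensor parameters given in the text (serial line-coding overhead on the fibers is ignored, which is why the quoted 640 MHz serial rate and 6.67 MHz pixel clock sit slightly above the computed payload figures):

```python
# Sanity-check the data-rate figures quoted in the abstract (all input
# values are taken from the text; line-coding overhead is ignored).
pixels = 512 * 512
fps = 400
ports = 16
bits_per_pixel = 12

pixel_rate_total = pixels * fps               # pixels/s for the whole sensor
pixel_rate_per_port = pixel_rate_total / ports
payload_bits_per_s = ports * bits_per_pixel * pixel_rate_per_port

print(f"per-port pixel clock ~ {pixel_rate_per_port / 1e6:.2f} MHz")  # ~6.55 MHz
print(f"aggregate payload ~ {payload_bits_per_s / 1e9:.2f} Gbit/s")   # ~1.26 Gbit/s
print(f"per-fiber payload ~ {payload_bits_per_s / 2 / 1e6:.0f} Mbit/s")  # ~629 Mbit/s
```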
Ming-an Guo, Qun-shu Wang, Shao-hua Yang, Jing-tao Xia, Feng-rong Sun, Cai Liu
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940M (2011) https://doi.org/10.1117/12.898938
A long-range digital imaging system with high resolution and a large image area is designed using a KAF-16803 area-array CCD. The long-range design keeps the operator away from radiation. The imaging system consists of a clock-sequence generation unit, CCD clock driver circuits, a video signal processing unit, a high-speed fiber-optic data transmission unit, a high-speed data acquisition unit, and software. The performance of the system is as follows: 4k x 4k resolution; 100% fill factor; 16-bit A/D; exposure time flexibly adjustable from 5 μs to 5 s; single-mode optical fiber transmission over 40 km. These features make the long-range digital imaging system suitable for applications in astronomy, industry, security, and the life sciences.
Yijun Zhang, Jun Niu, Jijun Zou, Yajuan Xiong, Benkang Chang, Junju Zhang, Yujie Du
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940N (2011) https://doi.org/10.1117/12.898952
In an attempt to improve photoelectron emission efficiency, a gradient-doping structure proposed on the basis of Spicer's three-step model has been applied to the preparation of transmission-mode GaAs photocathodes via the molecular beam epitaxy technique. The Cs-O activation behavior suggests that the gradient-doping structure brings a potential photoemission capability that grows with activation time, and the spectral response curves show that the gradient-doping photocathode obtains a higher response over the entire waveband, especially near the short-wavelength and long-wavelength thresholds. By fitting the quantum yield curves, the cathode performance parameters obtained for the gradient-doping photocathode, such as the electron average diffusion length and the electron escape probability, are found to be greater than those of the uniform-doping one. The average electron diffusion length of the gradient-doping photocathode reaches 3.2 μm. The improvement in cathode performance could be ascribed to the downward gradient band-bending structure.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940O (2011) https://doi.org/10.1117/12.898980
The advantages, disadvantages, and applicable conditions of existing object detection algorithms are analyzed. Aiming at body pose variation and the need for real-time, dynamic detection, the Histogram of Oriented Gradients statistical feature, a part-based cascaded classification mechanism, and an Adaboost-based boosted-cascade face detection algorithm are combined to realize object detection in a car driver-assistance system. Drawing on the principle of the SIFT algorithm and its tracking stability, the object is matched and tracked by local-area features; useless information is discarded, the search area is simplified, and tracking is accelerated. Tests on real video show that this method performs well in the real-time detection and tracking of pedestrians and cars.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940P (2011) https://doi.org/10.1117/12.899034
In this paper, a real-time pipelined centroid calculation structure based on programmable logic devices is designed for a Hartmann wavefront sensor with horizontal multi-channel pixel output. The pipeline consists of modular cells including multiplier groups, accumulation cells, dividers, and corresponding control units. The structure is specially designed to handle pixels that are output simultaneously but belong to two adjacent subapertures, as well as pixels belonging to the same subaperture. With 8 output channels and pixels output at an 80 MHz clock frequency, the simulated centroid calculation latency is less than 0.5 μs.
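The centre-of-gravity formula that the pipeline evaluates in hardware (multiplier groups and accumulators for the sums, a divider for the final quotient) can be sketched in software as follows; this is a generic illustration of the computation, not the authors' FPGA design:

```python
import numpy as np

def subaperture_centroid(spot):
    """Intensity-weighted centroid (x_c, y_c) of one subaperture image.

    Implements the standard centre-of-gravity formula:
        x_c = sum(x * I) / sum(I),   y_c = sum(y * I) / sum(I)
    """
    spot = np.asarray(spot, dtype=float)
    total = spot.sum()
    ys, xs = np.indices(spot.shape)
    return float((xs * spot).sum() / total), float((ys * spot).sum() / total)

# A single bright pixel at (x=3, y=1) should centroid exactly there.
spot = np.zeros((5, 5))
spot[1, 3] = 10.0
print(subaperture_centroid(spot))  # (3.0, 1.0)
```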
Y. Y. Zhu, Y. Wei, J. F. Shen, Y. T. Li, H. X. Dou
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940Q (2011) https://doi.org/10.1117/12.899037
To achieve high-precision, controllable rotation of uniaxial crystal particles, we studied the theory and identified the various factors that affect the optical rotation of a uniaxial crystal, such as the reflection of the light beam at the crystal surface, the laser power, the thickness and radius of the particle, the refractive and transmission indices seen by the beam, and the phase contrast between the ordinary and extraordinary rays. Based on this analysis, the theoretical model of optical rotation was reconstructed; moreover, the optical rotation of calcium carbonate and silica particles, chosen as experimental materials, was simulated and calculated, and the soundness of our theoretical model was verified by comparison with previously published experimental data. The results indicate that the model provides more accurate and reasonable theoretical support for experiments on, and applications of, the optical rotation of uniaxial crystal particles. Our study is of direct significance to the design of optically driven micro-mechanical motors and the selection of rotor materials.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940R (2011) https://doi.org/10.1117/12.899171
TDI-CCD improves the sensitivity of a space camera without any degradation of spatial resolution, which is why it is widely used in aerospace imaging devices. This article describes the basic working principle and application characteristics of TDI-CCD devices, analyzes the composition of TDI-CCD imaging noise, and proposes a new method for analyzing that noise through its statistical probability distribution. To characterize the distribution of gray values affected by noise, we introduce the concepts of skewness and kurtosis. We designed an experiment with a constant-illumination light source, captured images with the TDI-CCD operating at different stage counts (16, 32, 48, 64, and 96 stages), and analyzed the image noise characteristics with the proposed method. The experimental results show that, for large samples, the gray values approximately follow a normal distribution.
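The skewness and kurtosis statistics used in this kind of normality test follow directly from their standardized-moment definitions (zero for a normal distribution). The sketch below applies them to simulated flat-field gray values; it is an illustration of the statistics, not the authors' code:

```python
import numpy as np

def skewness(x):
    """Third standardized moment; near 0 for a symmetric (e.g. normal) sample."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return ((x - m) ** 3).mean() / s ** 3

def excess_kurtosis(x):
    """Fourth standardized moment minus 3; near 0 for a normal sample."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return ((x - m) ** 4).mean() / s ** 4 - 3.0

rng = np.random.default_rng(0)
gray = rng.normal(1000, 20, size=100_000)   # simulated flat-field gray values
print(round(skewness(gray), 2), round(excess_kurtosis(gray), 2))  # both near 0
```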
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940S (2011) https://doi.org/10.1117/12.899291
Current distortion correction systems cannot meet the requirements of large-field optical display equipment because of their small field, low resolution, poor real-time performance, and limited generality. A "symmetrical transform" and an "improved bilinear interpolation" are proposed. The overall system was designed and implemented on Virtex-5 FPGA devices. A suitable data structure for the look-up table was adopted, and an optimized input-memory scheme called the "double even-odd cache" was put forward. The MIG (Memory Interface Generator) software tool was used to control the DDR2 SDRAM, and DSP48E slices were employed. A real-time distortion correction system for large-field optical display equipment was thus accomplished. The experimental results show that the system can correct large-field, high-resolution (1280x1024) video at 60 frames per second, with a delay of only 1.48 ms, a precision deviation of less than 9', and good generality.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940T (2011) https://doi.org/10.1117/12.899307
This measurement system uses a layered (delaminated) measurement principle to measure an entity along three perpendicular directions. As the measured entity is immersed in the liquid layer by layer, each layer's image is collected by a CCD and digitally processed. The basic measurement principle and the working procedure of the method are introduced. According to Archimedes' law, the buoyancy and volume associated with immersion to each layer depth are measured with an electronic balance, and the corresponding mathematical models are established. By computing each layer's weight and center of gravity with artificial-intelligence-based software, the 3D coordinates of every small volume cell in each layer can be reckoned and the entity's 3D contour reconstructed. The experimental results show that the system can measure any homogeneous entity that is insoluble in water. The measurement is fast and non-destructive, and entities with internal holes can also be measured.
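The buoyancy-to-volume step the method relies on is simply Archimedes' law applied per layer: the volume of the newly submerged slice follows from the buoyancy increase read on the balance, V = ΔF / (ρ g). A toy illustration with hypothetical balance readings (the readings below are invented for the example, not from the paper):

```python
# Volume of each newly submerged layer from the buoyancy increase,
# V = dF / (rho * g), per Archimedes' law. Readings are hypothetical.
RHO_WATER = 1000.0   # kg/m^3
G = 9.8              # m/s^2

buoyancy_per_layer_N = [0.049, 0.098, 0.049]     # hypothetical balance deltas
layer_volumes_cm3 = [f / (RHO_WATER * G) * 1e6 for f in buoyancy_per_layer_N]
print([round(v, 1) for v in layer_volumes_cm3])  # [5.0, 10.0, 5.0]
print(round(sum(layer_volumes_cm3), 1))          # 20.0 (total volume in cm^3)
```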
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940U (2011) https://doi.org/10.1117/12.899376
The blade is a key component of an aero-engine. Because a blade must have precise dimensions and an accurate shape, three-dimensional profile measurement of the blade is very important, and its complexity and diversity make the measurement considerably difficult. In this paper, the optical triangulation method is used for blade profile measurement, with a coding technique based on Gray codes combined with the phase-shift method. A three-dimensional point cloud of the blade is obtained with this method, achieving high-accuracy three-dimensional profile measurement with an accuracy of 0.05 mm.
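The Gray-code part of such structured-light coding assigns each projected stripe an index whose codewords differ by one bit between neighbours, so a single decoding error shifts the index by at most one stripe. Decoding is the standard reflected-binary conversion, shown here generically (the paper's projector and camera specifics are not reproduced):

```python
def gray_to_binary(g):
    """Decode a reflected Gray code value to its binary (stripe) index."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# The 3-bit Gray sequence 000,001,011,010,110,111,101,100 decodes to 0..7.
codes = [0b000, 0b001, 0b011, 0b010, 0b110, 0b111, 0b101, 0b100]
print([gray_to_binary(g) for g in codes])  # [0, 1, 2, 3, 4, 5, 6, 7]
```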
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940V (2011) https://doi.org/10.1117/12.899446
In this paper, two TM images of Suzhou were used to extract the changed area by means of a multi-band KL transform. The key steps of this research are the preprocessing of the images, the band combination, and the combination of the transformed components. Experimental results show that the method merges the information of the two images, makes the changed information prominent, improves the detection accuracy, and is less affected by noise.
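The KL (principal-component) transform on stacked bi-temporal bands can be sketched with NumPy. In the toy example below (synthetic data, not the Suzhou TM imagery), the two dates are strongly correlated, so the major component carries the common scene and the minor component concentrates the change, which makes the changed pixel stand out:

```python
import numpy as np

def kl_transform(stack):
    """KL (principal-component) transform of a multi-band pixel stack.

    `stack` has shape (bands, pixels); returns the components ordered by
    decreasing eigenvalue of the band covariance matrix.
    """
    x = stack - stack.mean(axis=1, keepdims=True)
    cov = np.cov(x)
    w, v = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(w)[::-1]         # reorder to descending
    return v[:, order].T @ x

# Two "dates" of one band: identical scene except one changed pixel.
date1 = np.array([10.0, 12.0, 11.0, 13.0, 12.0])
date2 = date1.copy()
date2[2] += 4.0                          # the change
comps = kl_transform(np.stack([date1, date2]))
print(int(np.argmax(np.abs(comps[-1]))))  # 2: minor component peaks at the change
```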
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940W (2011) https://doi.org/10.1117/12.899462
In this paper, we put forward a new method of multiple-exposure high-dynamic-range color imaging with an RGB camera based on a piecewise color characterization model. The construction of the piecewise color characterization model, the image capture procedure, the color tone mapping, and the image combination based on this model are introduced. Using an ordinary color camera, a Nikon D70s, we demonstrate that our new method can obtain the desired high-dynamic-range images with standard color information.
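A minimal sketch of the underlying multiple-exposure merging idea, assuming a linear camera response with a simple triangle weighting that favours well-exposed pixels; the paper replaces that linearity assumption with its measured piecewise color characterization model:

```python
import numpy as np

def fuse_exposures(images, exposure_times):
    """Merge differently exposed 8-bit shots into a relative-radiance map.

    Each pixel's radiance is estimated as value / exposure_time, and the
    estimates are blended with a triangle weight that down-weights
    under- and over-exposed pixels. Assumes a linear camera response.
    """
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(img / 255.0 - 0.5) * 2.0   # triangle weight in [0, 1]
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-9)

short = np.array([[40.0, 200.0]])    # 1/100 s exposure
long_ = np.array([[160.0, 255.0]])   # 1/25 s exposure (bright pixel clipped)
hdr = fuse_exposures([short, long_], [0.01, 0.04])
print(hdr)  # dark pixel: both estimates agree at 4000 (relative units);
            # bright pixel: 20000, taken from the short exposure only
```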
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940X (2011) https://doi.org/10.1117/12.899575
To accurately measure object positions over a large, remote area with a vision measurement method, a special layout of the vision measurement system was designed. Calibration objects with the same height as the cameras were placed at specific positions in the measurement area. By carefully adjusting the camera poses, these calibration objects were imaged on the same pixel row as the camera principal point. As a result, the master planes of the cameras, each determined by the pixel row containing the principal point and the camera's optical axis, were horizontal and coplanar. Consequently, the u pixel coordinate of any space point equals that of its projection in the xoy coordinate plane, and a simplified calculation model was derived. Compared with traditional vision measurement methods, this method is simpler and better suited to large, remote measurement areas. A verification experiment was carried out; the results show that the resolving accuracy of the model is very high and that the method meets the requirements of large remote-area measurement well.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940Y (2011) https://doi.org/10.1117/12.899590
Distinguishing camouflage from its natural background is a challenging problem in target detection. As sensing technology advances, more and more information can be extracted from the scenes of interest, including spatial information captured by cameras, spectral information retrieved from spectrometers, and polarimetric information obtained by polarimeters. Spatial, spectral, and polarimetric information reveal different characteristics of objects and background: while spectral information tends to tell us about the distribution of material components in a scene, polarimetric information tells us about surface features, shape, shading, and roughness. Polarization tends to provide information that is largely uncorrelated with spectral and intensity images, and thus has the potential to enhance many fields of optical metrology. However, both spectral and polarimetric detection systems may suffer from substantial false alarms and missed detections because of their respective background clutter. Since polarimetric and multispectral imaging provide complementary discriminative information, in this paper visible and near-infrared polarimetric information is jointly exploited through image fusion to distinguish camouflaged targets from their natural background. A polarimetric image fusion algorithm based on a polarized modified soil-adjusted vegetation index is first proposed to distinguish objects in vegetated environments. Then, the spectral and polarimetric information is fused by false-color mapping and a fuzzy c-means clustering algorithm for more robust object separation. Experimental results show that better identification performance is achieved.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81940Z (2011) https://doi.org/10.1117/12.899604
For the purpose of security inspection, an alternating-current (AC) radiometer is used for passive THz-wave (0.1 THz) imaging for the first time in China. After comparing the structure and spectral characteristics of a direct-current (DC) radiometer with those of an AC radiometer, we discuss the AC radiometer imaging mechanism and a noise-image processing method based on median filtering. Simulating the requirements of security inspection at airports, ports, etc., a 2-D imaging experiment on a person carrying a concealed object was carried out indoors with a mechanically scanned THz-wave AC radiometer. The results show that the THz-wave AC radiometer possesses high sensitivity and can be used to detect concealed metal objects hidden on a passenger's body and in their luggage.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819410 (2011) https://doi.org/10.1117/12.899653
As an efficient method of information fusion, image fusion has been used in many fields such as machine vision, medical diagnosis, military applications, and remote sensing. In this paper, the Pulse Coupled Neural Network (PCNN) is introduced into this field for its interesting properties in image processing, including segmentation and target recognition, and a novel multi-focus image fusion algorithm based on the PCNN and the wavelet transform is proposed. First, the two source images are decomposed by the wavelet transform. Then a PCNN-based fusion rule in the wavelet domain is given: the algorithm uses the wavelet coefficient in each frequency band as the linking strength, so that this value is chosen adaptively. The wavelet coefficients are mapped to the image gray-scale range, and the output threshold function attenuates toward the minimum gray level over time until all pixels have fired. The PCNN output at each iteration is therefore the set of wavelet coefficients that cross the threshold at that time, so the firing sequence of the wavelet coefficients represents the firing time of each neuron. The firing time of each neuron is mapped to the corresponding gray-scale range, producing a firing-time map from which it can be judged whether the features at a neuron are salient or not. The fusion coefficients are decided by a compare-select operator applied to the firing-time gradient maps, and the fused image is reconstructed by the inverse wavelet transform. Furthermore, in order to sufficiently reflect the order of the firing times, the threshold adjusting constant αΘ is estimated from an appointed iteration number, so that once the iterations complete, every wavelet coefficient has been activated. To verify the effectiveness of the proposed rules, experiments on multi-focus images were carried out, and comparative fusion-quality evaluation results are listed. The experimental results show that the method can effectively enhance edge details and improve the spatial resolution of the image.
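The firing-time mechanism can be illustrated with a heavily simplified PCNN in which the linking between neighbouring neurons is omitted and all neurons share a single exponentially decaying threshold; larger coefficients fire earlier, so the firing-time map orders coefficients by salience and the earlier-firing band wins the fusion. This illustrates the principle only, not the authors' algorithm:

```python
import numpy as np

def firing_times(coeffs, theta0=1.0, alpha=0.2, iters=20):
    """First-firing iteration for each normalized wavelet coefficient.

    A stripped-down PCNN: the shared threshold decays as
    theta0 * exp(-alpha * n); a neuron "fires" at the first iteration
    where its normalized absolute coefficient exceeds the threshold.
    """
    c = np.abs(coeffs) / np.abs(coeffs).max()
    t = np.full(c.shape, iters, dtype=int)
    for n in range(iters):
        theta = theta0 * np.exp(-alpha * n)
        newly = (c >= theta) & (t == iters)   # not yet fired, now above threshold
        t[newly] = n
    return t

band_a = np.array([0.9, 0.1, 0.5])   # wavelet coefficients from image A
band_b = np.array([0.2, 0.8, 0.4])   # corresponding coefficients from image B
ta, tb = firing_times(band_a), firing_times(band_b)
fused = np.where(ta <= tb, band_a, band_b)   # earlier firing = stronger detail
print(fused)  # [0.9 0.8 0.5]
```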
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819411 (2011) https://doi.org/10.1117/12.899657
With the rapid development of aerospace cameras and remote sensing applications, CCD photoelectric sensors have evolved from linear arrays toward area arrays, and especially toward large area arrays, which provide broader coverage and higher resolution while avoiding the technical difficulties of butting together multiple small area-array CCDs. This paper introduces large area-array full-frame-transfer CCDs, focusing on their drive timing and operating modes. Because of their large pixel counts, large area-array CCDs require substantial drive power and readout rates up to 20 MHz, which the drive circuits of ordinary linear-array CCDs cannot supply. Therefore, timing signals generated by an FPGA are amplified by a high-power MOS driver to meet the drive requirements of large area-array CCDs. Moreover, the traditional drive circuit is improved according to drive-signal waveform theory, which enhances the quality of the CCD analog output. The output of a large area-array CCD features high impedance and a high DC level; to improve the load capacity and anti-interference capability of the output signals, operational amplifiers selected according to the working voltage and signal bandwidth are applied as buffers. The correctness of the design is verified through software simulations and circuit tests. According to the test results, the high-speed drive circuit supports large area-array CCDs at readout rates up to 20 MHz with good analog signal quality.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819412 (2011) https://doi.org/10.1117/12.899658
Polarization image fusion enhancement generates an enhanced fused image from the redundancy and complementarity among the polarization-parameter images. The fused polarization image has higher contrast and signal-to-noise ratio, and its detail is better than that of the individual polarization-parameter images. Based on an analysis of the polarization imaging principle, methods of polarization image enhancement were investigated. First, a polarization image fusion method based on modulation in the spatial domain was developed. Second, the advantages and disadvantages of this method were analyzed and multi-scale analysis was introduced, leading to a new polarization image fusion method based on modulation in multi-scale space. Experimental results show that the fused images better characterize the polarization information of different targets and scenes, making target detection, recognition, and other further processing easier.
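The polarization-parameter images that such fusion methods start from are conventionally the linear Stokes components and the degree of linear polarization, computed from four polarizer orientations. The sketch below is the generic formulation, not the authors' code:

```python
import numpy as np

def stokes_images(i0, i45, i90, i135):
    """Linear Stokes parameters and DoLP from four polarizer orientations.

        S0   = (I0 + I90 + I45 + I135) / 2   (total intensity)
        S1   = I0 - I90
        S2   = I45 - I135
        DoLP = sqrt(S1^2 + S2^2) / S0
    """
    s0 = (i0 + i90 + i45 + i135) / 2.0
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-9)
    return s0, s1, s2, dolp

# Fully horizontally polarized light: all intensity at 0 deg, none at 90 deg.
s0, s1, s2, dolp = stokes_images(np.array([2.0]), np.array([1.0]),
                                 np.array([0.0]), np.array([1.0]))
print(s0, s1, s2, dolp)  # [2.] [2.] [0.] [1.]  -> DoLP = 1, fully polarized
```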
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819413 (2011) https://doi.org/10.1117/12.899741
Existing studies have paid much attention to the stray light of lenses. However, ghost reflections inside FPA (focal plane array) detectors are almost ignored, even though they are also a non-negligible source of stray light in digital imaging systems. Ghosting between the window surfaces and the CCD photosensitive area is often a major source of stray light in FPA detectors; it can lead to image blur, color distortion, and contrast reduction. In addition, a diffraction pattern can be observed from a front-illuminated CCD because the incident beam is diffracted by the polysilicon electrode gates at its surface. In this paper, we study the generation mechanism of such stray light and approaches to reducing it. Both front-illuminated and back-illuminated CCDs (charge-coupled devices) are investigated. We build models to identify stray light paths and predict the characteristics of window ghost images. Furthermore, three methods are presented to decrease the stray light: an anti-reflective coating method, a fluid-filled method, and a deconvolution method. The anti-reflective coating method can be applied effectively in back-thinned CCDs to reduce reflection and maximize quantum efficiency. The fluid-filled method attempts to reduce unwanted light contamination by mimicking some characteristics of the human eye: the liquid reduces the Fresnel reflectance of the interface. In addition, images contaminated by the FPA's stray light are post-processed with the deconvolution method. The effectiveness of the proposed methods is verified experimentally; it is shown that FPA stray light can be efficiently reduced.
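Of the three reduction approaches, the deconvolution step can be illustrated with a standard frequency-domain Wiener filter; the paper does not specify its exact deconvolution variant, so this is an illustrative stand-in that assumes the ghost/stray-light blur kernel (PSF) is known:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Frequency-domain Wiener filter for a known PSF.

    k regularizes frequencies where the PSF response is weak; with noisy
    data it should be set to the noise-to-signal power ratio.
    """
    H = np.fft.fft2(psf, s=blurred.shape)   # zero-padded PSF spectrum
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))

# Blur an impulse with a 3x3 box PSF (circular convolution), then recover it.
img = np.zeros((8, 8))
img[3, 4] = 1.0
psf = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = wiener_deconvolve(blurred, psf, k=1e-6)   # tiny k: noiseless demo
print(tuple(int(i) for i in np.unravel_index(np.argmax(restored),
                                             restored.shape)))  # (3, 4)
```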
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819414 (2011) https://doi.org/10.1117/12.899856
With the development of space remote sensors, the need for large area-array optical detectors is increasing strongly. To resolve the conflict between resolution and detector size, the most straightforward approach is to use a larger CCD chip; but since a single small area-array CCD chip cannot cover the focal plane, CCD mosaicking must be used. This paper presents a new splicing method, a twice-imaging scheme: a first image is formed on the imaging plate of an intercepting optical component, and four separate CCD cameras then produce the segmented images on the final image plane, so that a mosaicked CCD focal plane is achieved.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819415 (2011) https://doi.org/10.1117/12.899865
CCD technology has kept developing since the CCD was invented and is now widely applied in many imaging fields. In security checking, non-destructive testing, and industrial inspection, the CCD has made digital radiography possible and has accelerated improvements in x-ray imaging performance. In this paper, CCD technology is introduced and its development analyzed; a mathematical model is then used to examine how the CCD affects x-ray imaging performance, covering pixel size and pixel count as well as the important role of the cooling conditions. The paper can serve as a reference for designing x-ray imaging systems, and likewise for other kinds of imaging systems.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819416 (2011) https://doi.org/10.1117/12.899886
Low-light-level target detection has received increasing attention in a variety of domains in recent years. In this paper we use a hybrid optoelectronic joint transform correlator (HOJTC), regarded as one of the most effective tools for target detection, to detect and recognize low-light-level targets. Because of cluttered backgrounds and strong noise, such targets often cannot be detected directly. To solve this problem, we first apply wavelet de-noising with the sym4 wavelet function. Edge extraction with the Sobel operator is then used to distinguish the target from the cluttered background. Finally, the processed targets are fed into the HOJTC to obtain a clear pair of correlation peaks. To validate the method, many low-light-level targets were tested by both computer simulation and optical experiment; a low-light-level image of a deer is presented as an example. The results show that, with wavelet de-noising and the Sobel operator, the low-light-level target can be successfully detected against cluttered backgrounds and strong noise.
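The edge-extraction step can be sketched as a plain Sobel gradient-magnitude filter. This numpy-only version is illustrative; in the paper it is preceded by sym4 wavelet de-noising, which is omitted here to keep the sketch dependency-free.

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude, used to separate a target's edges
    from a cluttered background (edge-replicated borders)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    # Correlate with the two 3x3 kernels by shifting windows.
    for i in range(3):
        for j in range(3):
            win = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)
```

On a vertical step edge the response is concentrated in the two columns adjacent to the step and zero in flat regions.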
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819417 (2011) https://doi.org/10.1117/12.900007
The CCD (charge-coupled device) is the most popular camera detector for sensing low light levels at wavelengths from 300 nm to 1100 nm. A contemporary CCD has a read noise equivalent to a few electrons and a well capacity of over 100,000 electrons. To take full advantage of these characteristics, the dynamic range of the ADC must exceed that of the CCD; that is, the ADC must provide more than 16 bits, yet highly reliable and inexpensive 16-20 bit A/D converters are scarce. In this paper, we first analyze CCD noise and then present the principle of extending the dynamic range of the CCD signal processing chain using two low-resolution ADCs with different sensitivities. Finally, we present a concrete example in which two parallel low-resolution ADCs, together with software that suitably processes the converted signals, achieve the same effect as a single higher-resolution ADC.
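One common way to merge two low-resolution converters of different sensitivity is to read the high-gain channel until it saturates and then switch to the attenuated channel. This is a hedged sketch of that idea, not the authors' exact processing scheme; the gain ratio and bit depth are illustrative.

```python
def combine_dual_adc(high_gain_code, low_gain_code, gain_ratio=16, n_bits=12):
    """Merge two low-resolution ADC readings into one wide-dynamic-range
    value. The high-gain channel resolves small signals at fine
    resolution; once it saturates, the low-gain channel (signal
    attenuated by gain_ratio before conversion) extends the range."""
    full_scale = (1 << n_bits) - 1
    if high_gain_code < full_scale:
        return high_gain_code           # fine resolution for small signals
    return low_gain_code * gain_ratio   # coarse channel covers large signals
```

With a 12-bit core and a gain ratio of 16, the combined output spans the range of a 16-bit converter while keeping single-electron-scale resolution near the noise floor.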
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819418 (2011) https://doi.org/10.1117/12.900010
Monocular visual odometry simplifies the hardware and software compared with stereo visual odometry, but it has a drawback: when the vehicle is in motion, the camera's attitude inevitably changes, which degrades the method's performance. To solve this problem, we propose a monocular visual odometry based on inverse perspective mapping (IPM). The camera's attitude is monitored in real time by an attitude sensor while the vehicle is moving. Images of the road surface captured by the camera are converted to a top view with the IPM algorithm, and image features are then extracted with the Speeded-Up Robust Features (SURF) algorithm. The translation and rotation between two adjacent images are estimated with the random sample consensus (RANSAC) algorithm, from which the travelled distance and heading of the vehicle are computed. Both static and dynamic experiments were carried out to test the ranging accuracy of the method. The static experiment showed an average ranging accuracy of 1.6%; the dynamic experiment showed a ranging accuracy of 6% and a heading measurement error of less than 1.3°. The proposed method is therefore easy to operate, time-efficient, and low-cost, and its accuracy in ranging and heading measurement is demonstrated.
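The IPM step amounts to applying a plane-to-plane homography to pixel coordinates. The sketch below assumes the 3x3 matrix H has already been built from the attitude-sensor readings and camera height (how H is assembled is not specified here).

```python
import numpy as np

def inverse_perspective_map(points, H):
    """Map pixel coordinates to top-view ground-plane coordinates
    through a 3x3 homography H (rebuilt each frame from the attitude
    sensor in the paper's system; H is a given here)."""
    pts = np.asarray(points, float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # perspective divide
```

SURF matches between two such top-view frames then yield a 2-D rigid motion that RANSAC can estimate robustly.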
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819419 (2011) https://doi.org/10.1117/12.900106
The usual approaches to unmanned aerial vehicle (UAV)-to-ground target geolocation impose severe constraints on the system, such as stationary targets, an accurate geo-referenced terrain database, or a ground-plane assumption. A micro air vehicle (MAV) operates at low altitude with a limited payload and low-accuracy onboard sensors. Accordingly, a method is developed to determine the location of a moving ground target imaged from the air by a monocular camera mounted on an MAV. The method eliminates the need for a terrain database (elevation maps) and for altimeters providing the MAV's and target's altitudes; it requires only the MAV flight state provided by the inherent onboard navigation system, which comprises an inertial measurement unit (IMU) and the Global Positioning System (GPS). The key is to obtain accurate altitude information for the moving ground target. First, an optical flow method extracts static background feature points. Within a local region set around the target in the current image, features lying on the same plane as the target are extracted and retained as aiding features. An inverse-velocity method then computes the locations of these points by integrating them with the aircraft state. The target's altitude, calculated from the positions of the aiding features, is combined with the aircraft state and image coordinates to geolocate the target. Meanwhile, a Bayesian estimation framework suppresses the noise introduced by the camera, IMU, and GPS. First, an extended Kalman filter (EKF) provides a simultaneous localization and mapping solution that estimates the aircraft state and the aiding-feature locations defining the moving target's local environment. Second, an unscented transformation (UT) determines the estimated mean and covariance of the target location from the aircraft state and aiding-feature locations and passes them to the moving-target Kalman filter (KF). Experimental results show that the method can geolocate a moving target instantaneously from a single operator click and achieves 15 m accuracy for an MAV flying 200 m above the ground.
Xiao-guo Xiao, Ming-wu Ao, Chun-ping Yang, Ruo-fu Yang
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941A (2011) https://doi.org/10.1117/12.900127
Based on the Talbot effect and the moiré fringes of Ronchi gratings, a combination referred to as moiré deflection technology (MDT), the power of lenses can be calculated. MDT enables non-contact measurement of phase objects and surfaces and is widely used for lens power measurement: the power of a lens under test can be accurately determined from the relationship between the moiré-fringe tilt angle and the lens power. An area-array CCD records the moiré fringe image generated by the mechanical interference of two gratings. After appropriate digital image processing such as gray-scale equalization, image enhancement, and image thinning, the moiré fringes are thinned to single-pixel width, and the fringe pixels are labeled and fitted with lines to calculate the slope coefficients. The moiré fringe image, however, exhibits an uneven light distribution. To address this, a binarization method based on background gray-scale extension is proposed. First, a statistical, block-based method estimates gray values from a sampled image. Linear interpolation then generates new gray values in place of the image pixels to obtain a background image, and the moiré fringe image is corrected with this background image. Finally, the image is binarized with the 2-D Otsu threshold algorithm. The experimental results show that the method is simple and effective for segmenting the moiré pattern from the original fringe image and can improve the precision of lens measurement.
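The background-correction-then-binarize pipeline can be sketched as follows. To stay short, block means stand in for the paper's statistical gray-scale extension with linear interpolation, and a global mean threshold stands in for the 2-D Otsu algorithm; both substitutions are simplifications.

```python
import numpy as np

def remove_background(img, block=4):
    """Estimate the uneven-illumination background from block-wise
    mean gray values and subtract it (simplified stand-in for the
    block statistics + interpolation background extension)."""
    f = img.astype(float)
    bg = np.zeros_like(f)
    h, w = f.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            bg[i:i + block, j:j + block] = f[i:i + block, j:j + block].mean()
    return f - bg

def binarize(corrected):
    """Global threshold at the mean (the paper uses 2-D Otsu)."""
    return (corrected > corrected.mean()).astype(np.uint8)
```

After subtracting the local background, a fringe pattern riding on an illumination ramp thresholds cleanly, which is exactly why the correction precedes binarization.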
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941B (2011) https://doi.org/10.1117/12.900131
In this paper, we develop a robust vision-based approach for real-time traffic data collection at nighttime. The proposed algorithm detects and tracks vehicles through the detection and location of vehicle headlights. First, headlight candidates are extracted by an adaptive image segmentation algorithm. Candidates belonging to the same vehicle are then grouped by spatial clustering, and vehicle hypotheses are generated by rule-based reasoning. Potential vehicles are tracked over frames by region-search and pattern-analysis methods, and the spatial and temporal continuity extracted from the tracking process is used to confirm a vehicle's presence. To handle occlusions, a Kalman filter is applied to motion estimation. The algorithm was tested on video clips of nighttime traffic under different conditions. The experimental results show that real-time vehicle counting and tracking across multiple lanes are achieved, with a total detection rate above 96%.
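The spatial-clustering and rule-based grouping step can be illustrated with a minimal pairing rule: two headlight candidates form a vehicle hypothesis if they lie nearly on a horizontal line and within a plausible lateral spacing. The thresholds and the greedy left-to-right pairing below are illustrative assumptions, not the paper's exact rules.

```python
def pair_headlights(centroids, max_dx=80, max_dy=10):
    """Greedily group headlight-candidate centroids (x, y) into pairs:
    a pair must be horizontally aligned (small dy) and closer than
    max_dx apart. Unpaired candidates are dropped as noise."""
    pts = sorted(centroids)          # left-to-right by x
    used, pairs = set(), []
    for i in range(len(pts)):
        if i in used:
            continue
        for j in range(i + 1, len(pts)):
            if j in used:
                continue
            dx = abs(pts[j][0] - pts[i][0])
            dy = abs(pts[j][1] - pts[i][1])
            if dx <= max_dx and dy <= max_dy:
                pairs.append((pts[i], pts[j]))
                used |= {i, j}
                break
    return pairs
```

Each returned pair becomes one vehicle hypothesis that the tracker then confirms over successive frames.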
Houzhi Cai, Jinyuan Liu, Xiang Peng, Lihong Niu, Wenda Peng, Li Gu, Jinghua Long
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941C (2011) https://doi.org/10.1117/12.900135
A method for measuring the exposure time of a microchannel-plate gated X-ray framing camera is presented, taking into account the propagation of the gating pulse along the microstrip line. The delay times of the fiber images are analyzed and found to be 2 ps, 10 ps, or 18 ps, depending on the situation. When the propagation time of the gating pulse is considered, the measured exposure time of the framing camera is 96 ps, compared with 53 ps when it is not.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941D (2011) https://doi.org/10.1117/12.900137
An X-ray detector based on a gated microchannel plate (MCP) is a powerful diagnostic tool for laser-driven inertial confinement fusion and fast Z-pinch experiments. To understand the behavior of the MCP used in such a detector, the X-ray detector is simulated using the Monte Carlo method. By simulating the electron cascade in the MCP, the relationship between MCP gain and voltage is obtained. The time, position, and energy of the electrons at the MCP output surface are calculated, and the transit-time distribution, the distribution of electron-channel-wall collision numbers, and the time distribution of electrons traveling from the MCP to the phosphor screen are given. Spatial resolution simulations of the MCP-based detector are also presented.
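The gain-versus-voltage relationship from an electron-cascade Monte Carlo can be illustrated with a toy model: at each wall collision every electron yields one secondary, plus an extra one with a probability that grows with the bias voltage. The yield law, collision count, and constants here are illustrative, not the paper's physics.

```python
import random

def mcp_gain(voltage, n_collisions=10, trials=200, seed=1):
    """Toy Monte Carlo of the MCP electron cascade: returns the mean
    output electron count per input electron. The assumed extra-
    secondary probability p grows linearly with bias voltage."""
    rng = random.Random(seed)
    p = min(0.9, voltage / 1000.0)   # assumed secondary-emission scaling
    total = 0
    for _ in range(trials):
        electrons = 1
        for _ in range(n_collisions):
            # Each electron produces 1 or 2 secondaries at the wall.
            electrons = sum(1 + (rng.random() < p) for _ in range(electrons))
        total += electrons
    return total / trials
```

Even this crude model reproduces the qualitative behavior used in the paper: gain rises steeply (roughly geometrically in the per-collision yield) with bias voltage.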
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941E (2011) https://doi.org/10.1117/12.900139
The rotation angle of a mounted polarizer in front of a camera has a direct effect on imaging quality and therefore
this paper presents a rapid computation method for a polarizer's optimal rotation angle on an airborne optical platform.
The computation contains four steps. First, we construct a world coordinate system and a camera coordinate system that
both adopt the center of a code disc as their common origin. Second, we take the origin of the world coordinate system as
a start point, intercept a unit segment along the sunlight direction and compute the endpoint coordinates of the unit
segment in the world coordinate system. Third, by mapping the relation from the world coordinate system to the camera
coordinate system, we compute the above endpoint coordinates in the camera coordinate system. Fourth, we project the
above segment towards a disc code plane, compute the angle between the projected line and the reference of the code
disc, and take the resultant angle distance as a polarizer's optimal rotation angle of airlight rejection utilizing polarization
filtering. Experiment results indicate that our computation method of a polarizer's optimal rotation angle can be applied
to airlight rejection on an airborne optical platform.
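The four steps reduce to a rotation followed by an in-plane angle measurement. The sketch below assumes the camera z-axis is the optical axis and the disc reference axis is the camera x-axis; those conventions are assumptions, as is the given rotation matrix.

```python
import numpy as np

def polarizer_angle(sun_dir_world, R_world_to_cam):
    """Rotate the unit sunlight direction into the camera frame, drop
    the optical-axis (z) component to project onto the code-disc
    plane, and return the in-plane angle (degrees) relative to the
    disc reference axis (taken as the camera x-axis)."""
    d = np.asarray(sun_dir_world, float)
    d_cam = R_world_to_cam @ (d / np.linalg.norm(d))   # steps 2-3
    return np.degrees(np.arctan2(d_cam[1], d_cam[0]))  # step 4
```

The returned angle is what the polarizer drive would be commanded to, up to the 180° ambiguity inherent in a polarizer's transmission axis.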
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941F (2011) https://doi.org/10.1117/12.900140
The micro digital sun sensor (μDSS) is a sun detector that senses the relative angle between a satellite and the sun. It is composed of a solar-cell power supply, an RF communication block, and a CMOS image sensor (CIS) chip called the APS+. This paper describes the implementation of a prototype μDSS APS+ fabricated in a standard 0.18 μm CMOS process. The μDSS is intended for micro- and nano-satellites, where power consumption is a very strict specification, so the APS+ is optimized for low power. This is realized by a specific pixel design that implements profiling and windowing during the detection process; the profiling is made fast and power-efficient by a winner-take-all (WTA) principle. Measurement results show that the APS+ reduces power consumption by more than a factor of 10 compared with the state of the art. Besides low power consumption, the APS+ also employs a quadruple sampling method that reduces thermal noise in the 3-T active pixel sensor (APS) structure.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941G (2011) https://doi.org/10.1117/12.900142
Zoom imaging systems tend toward miniaturization yet growing complexity, so traditional glass or plastic lenses cannot meet their needs. A new approach, the liquid lens, realizes zooming by changing the shape of a liquid surface. Liquid zoom lenses offer many merits, such as smaller volume, lighter weight, controllable zoom, faster response, higher transmission, and lower energy consumption, and have wide application in mobile phones, digital cameras, and other small imaging systems. This paper first reviews the electrowetting phenomenon and then analyzes the influence of the applied voltage on the contact angle in the electrowetting effect. Finally, the surface free energy of a cone-type double-liquid zoom lens is studied via the energy minimization principle; this analysis provides an important theoretical basis for designing liquid zoom lenses.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941H (2011) https://doi.org/10.1117/12.900175
A novel focusing method for remote sensing cameras is proposed in this paper. To evaluate the quality of the image obtained by the aerial camera, an assessment function was constructed based on the wavelet transform, and the CCD mosaic structure was exploited to solve the problem that evaluation values cannot be compared across the varying images captured by an aerial camera. On the basis of the wavelet evaluation function, image quality is assessed; the CCD mosaic structure is then used to perform the auto-focusing process, and a simulation validates the method. The paper makes three major contributions. First, the weights of the wavelet coefficients in the evaluation function are set according to the characteristics of the wavelet transform and the human visual system (HVS), so the assessment agrees with subjective perception and is insensitive to noise. Second, to adapt the function to images with different high-frequency content, the properties of the wavelet basis are analyzed; comparing the evaluation results on different images shows that symlet2 at a decomposition level of three performs best. Finally, exploiting the CCD mosaic structure solves the problem that auto-focusing of an aerial camera cannot use digital image processing directly, and the region with the highest-frequency content is chosen as the evaluation area to improve the sensitivity of the function.
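A wavelet-based focus measure of the kind described above can be sketched by summing the energy of the detail subbands after a wavelet split. To keep the sketch dependency-free it uses a one-level Haar decomposition with uniform weights; the paper uses symlet2 at level 3 with HVS-derived weights.

```python
import numpy as np

def haar_focus(img):
    """Focus measure: energy of the three detail subbands of a
    one-level 2-D Haar transform. Sharper images have more
    high-frequency detail, hence a larger value."""
    f = np.asarray(img, float)
    lo_r = (f[::2] + f[1::2]) / 2          # row averages
    hi_r = (f[::2] - f[1::2]) / 2          # row differences
    lh = (lo_r[:, ::2] - lo_r[:, 1::2]) / 2   # horizontal detail
    hl = (hi_r[:, ::2] + hi_r[:, 1::2]) / 2   # vertical detail
    hh = (hi_r[:, ::2] - hi_r[:, 1::2]) / 2   # diagonal detail
    return float((lh ** 2).sum() + (hl ** 2).sum() + (hh ** 2).sum())
```

During auto-focusing, this value is maximized over focus positions within the chosen high-frequency evaluation region.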
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941I (2011) https://doi.org/10.1117/12.900180
The microchannel plate (MCP) is a core component of the X-ray framing camera, and studying its dynamic characteristics is critical to interpreting the data the camera produces. The dynamic characteristics of the MCP under different DC bias voltages are simulated using the Monte Carlo method, yielding the relationship between the theoretical exposure time and the MCP bias voltage. An MCP-gated X-ray framing camera was developed; the measured exposure time increases by 9 ps when the MCP bias voltage is -300 V compared with -200 V. The simulation and experimental results both show that the exposure time and gain of the MCP increase as the negative bias voltage is increased.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941J (2011) https://doi.org/10.1117/12.900187
The resolution of remote sensing imagery is sensitive to variations in the pitch angle of the satellite platform on orbit: the image geometry is distorted, the image is geometrically warped, and consequently its spatial distribution changes. Traditional simulation methods for geometric distortion are complex because they rest on an accurate physical model in which the position of every pixel of the warped image is computed one by one. Here, the topological mapping between the Earth coordinate system and the optical remote sensor coordinate system is analyzed, and a method of active points is proposed. The positions of the active points are computed through the transformation between the Earth and sensor coordinate systems, and the remaining pixels are obtained from the active points by polynomial interpolation, giving sub-pixel precision for the geometric distortion; a full image frame is then generated. This transformation greatly reduces the amount of computation. The geometry model contains the interior and exterior orientation elements of the imaging system on the satellite platform, and the simulation covers various angles about all three axes, so the boundary conditions under which motion errors affect imaging quality can be analyzed. The proposed geometry model not only preserves the physical information of the active points but also reduces the computational complexity of the coordinate transformation; the result is beneficial for designing and optimizing satellite platform parameters.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941K (2011) https://doi.org/10.1117/12.900191
The mechanical properties of materials on which displacement sensors or strain gauges cannot be installed or attached cannot be measured by traditional tension experiments. In view of this, a new computer tracking system for uniaxial tension based on the digital image correlation method has been developed. First, according to the principle of uniaxial tension, the tracking system is designed by combining the loading installation, light source, camera lens, and image-capture card with a computer. Second, since the correlation between the original and deformed images is high, an image correlation formula is used to compute the correlation coefficients of pixel values between the object template and the search region; the measurement precision is further improved greatly by a bilinear interpolation algorithm. Finally, in a computer-tracked uniaxial tension experiment on a rubber band, an object template of 11 × 7 pixels and a search region of 21 × 17 pixels are used to speed up the matching computation. The experimental results show that the object can be successfully tracked and that the measured deformation evolution of the rubber band agrees with the actual mechanical properties of the material.
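The template-matching core of digital image correlation can be sketched as an integer-pixel normalized-cross-correlation search. The sub-pixel refinement by bilinear interpolation that the system uses is omitted here, and the template and search sizes are parameters rather than the paper's 11 × 7 / 21 × 17 values.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a ** 2).sum() * (b ** 2).sum()) + 1e-12))

def track_template(ref, cur, top, left, h, w, search=4):
    """Slide the reference template over the current image within
    +/- search pixels and return the (dy, dx) displacement with the
    highest NCC score."""
    tmpl = ref[top:top + h, left:left + w].astype(float)
    best, best_dv = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > cur.shape[0] or x + w > cur.shape[1]:
                continue  # window would fall outside the image
            score = ncc(tmpl, cur[y:y + h, x:x + w].astype(float))
            if score > best:
                best, best_dv = score, (dy, dx)
    return best_dv
```

Repeating this per frame yields the displacement field from which the specimen's deformation evolution is derived.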
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941L (2011) https://doi.org/10.1117/12.900195
The linear-array CCD camera is the main sensor on push-broom satellites. Because of response differences among the CCD detectors, striping noise along the scanning direction is a prominent artifact in remote sensing images and can seriously degrade image quality and quantitative applications. The objective of relative radiometric calibration is to eliminate it.

Since the state of the satellite electronics varies from orbit to orbit, an automatic de-striping algorithm is needed that depends only on information obtainable from the image data itself. Many published techniques remove striping from images, such as histogram matching, histogram equalization, and Fourier-transform filtering.

In this paper we remove these stripes from CCD images using a relative radiometric correction algorithm based on adaptive filtering. First, the formation of stripe noise in push-broom scanners is described. A suitable 1-D nonlinear filter is chosen to remove the obvious striping based on the stripe distribution. Then 1-D smoothing filtering is used to calculate the gain and offset coefficients, and the remaining thin striping is removed with these coefficients.

The results indicate that the proposed method effectively removes the striping noise along the scanning direction. Comparing the mean values and standard deviations of de-striped images produced by the proposed method and by histogram equalization shows that the proposed method is clearly superior to traditional histogram equalization in preserving image detail. The result of this study is applicable to stripe removal from push-broom satellite remote sensing data.
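The gain-and-offset step can be sketched with per-column (per-detector) statistics: each column's mean and standard deviation are matched to a 1-D smoothed, stripe-free reference across detectors. The 5-tap box filter stands in for whatever smoothing kernel the algorithm actually uses.

```python
import numpy as np

def destripe(img):
    """Relative radiometric correction sketch for push-broom stripes:
    estimate a per-column gain and offset that map each column's
    mean/std onto the across-track-smoothed reference, then apply
    the linear correction."""
    f = img.astype(float)
    col_mean = f.mean(axis=0)
    col_std = f.std(axis=0) + 1e-12
    k = np.ones(5) / 5.0                         # 1-D smoothing filter
    ref_mean = np.convolve(col_mean, k, mode="same")
    ref_std = np.convolve(col_std, k, mode="same")
    gain = ref_std / col_std
    offset = ref_mean - gain * col_mean
    return f * gain + offset
```

On an image with alternating per-detector offsets, the variance of the column means drops sharply after correction, which is the de-striping effect.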
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941M (2011) https://doi.org/10.1117/12.900232
The low-light-level image intensifier has long been applied to night observation and has a long history of development and improvement. It can also be used in x-ray imaging systems thanks to its photon multiplication and conversion capabilities. In this paper, the technological development of the low-light-level image intensifier is first described, and a novel x-ray image intensifier designed by our research group is then introduced. An x-ray intensifying screen serves as the x-ray sensor, converting x-rays into visible light; because this visible light is too weak for the image to be seen, a low-light-level image intensifier is used to intensify it further. With the low-light-level image intensifier selected, the performance of the novel x-ray image intensifier was modeled, giving a comparison of resolution and brightness with and without the intensifier. In conclusion, the novel x-ray imaging system performs well enough to be applied to security checking, non-destructive testing, and industrial inspection.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941N (2011) https://doi.org/10.1117/12.900246
In this paper, the authors first introduce the theory of the LMCCD (line-matrix CCD) mapping camera and the composition of its imaging system. Next, several key designs of the imaging system are presented, including the focal-plane module, video-signal processing, the imaging-system controller, and synchronous photography between the forward, nadir, and backward cameras and the line-matrix CCD of the nadir camera. Finally, the test results of the LMCCD mapping-camera imaging system are reported. The results are as follows: the synchronization precision between the forward, nadir, and backward cameras is better than 4 ns, as is that of the line-matrix CCD of the nadir camera; the photography interval of the nadir camera's line-matrix CCD satisfies the buffer requirements of the LMCCD focal-plane module; the laboratory-tested SNR of each CCD image is better than 95 under typical working conditions (solar incidence angle of 30°, surface reflectivity of 0.3); and the temperature of the focal-plane module is kept below 30 °C over a 15-minute working period. These results satisfy the requirements for synchronous photography, focal-plane temperature control, and SNR, guaranteeing the precision of satellite photogrammetry.
Yan-bin Wang, Jing Duan, Hu-min Jin, Kai Jiang, Heng-jin Zhang
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941O (2011) https://doi.org/10.1117/12.900258
Compared with a continuous zoom system, a switch-zoom system has many advantages, such as simple structure, good imaging quality, and easy assembly. A long-focal-length, large-aperture visible switch-zoom system is designed in this paper. The system is composed of two parts: a front R-C objective and a rear switch-zoom group. When the rear group is reversed, switching between 1500 mm and 3000 mm focal length is realized. To match the exit pupil of the front R-C objective with the entrance pupil of the rear switch-zoom group, the front objective is constrained to be telecentric, and the rear group is telecentric in both image and object space. The object-space NA of the rear group at the short-focal position is constrained to equal the image-space NA of the front R-C objective. Thus the two parts match each other properly, and the whole long-focal-length, large-aperture visible switch-zoom system is designed. At a spatial frequency of 50 lp/mm, the MTF of both the R-C objective and the rear switch-zoom group reaches the diffraction limit, ensuring the MTF of the whole system at both the long- and short-focal positions. The RMS spot size at each focal position is less than 10 μm. The overall length from the first surface to the image plane is 1100 mm. The design results show that this zoom system has the advantages of simple structure, high image quality, and easy assembly.
Li Deng, Wen-jie Liang, Xiu-hong Fan, Tian-xiang Xu, Hui-jiao Yang, Li-gong Chen, De-jun Chen, Yong Liu
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941P (2011) https://doi.org/10.1117/12.900275
We demonstrate a real-time, high-accuracy, non-contact optoelectronic sensing system for measuring the tensile deformation of steel rope. The tensile deformation is detected in real time with a linear CCD. For high-accuracy measurement, a floating-threshold method is used to binarize the signal and distinguish the test object from the background. Using linear fitting, a relative error of 1.4% is realized over a tensile-deformation range of 0 to 10 mm. Several improvements for increasing the measurement precision are also proposed.
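The floating-threshold idea can be sketched minimally: instead of one global threshold, each pixel of the 1-D linear-CCD scan is compared against a locally averaged baseline, so the binarization tracks illumination drift along the array. The function name, window size, and scale factor below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def floating_threshold_binarize(signal, win=31, k=1.0):
    """Binarize a 1-D linear-CCD scan against a local moving-average
    baseline: a pixel is 'object' when it exceeds the baseline by more
    than k times the global standard deviation of the scan."""
    kernel = np.ones(win)
    # boundary-normalized moving average as the floating baseline
    baseline = np.convolve(signal, kernel, "same") / np.convolve(
        np.ones_like(signal), kernel, "same")
    return (signal - baseline > k * signal.std()).astype(np.uint8)
```

On a scan with a slowly varying background and a bright object segment, only the object pixels are flagged, even though no single fixed threshold would separate them.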
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941Q (2011) https://doi.org/10.1117/12.900283
X-ray photoelectron spectroscopy (XPS) was used to study the changes in content and chemical states of elements on the microporous surface of microchannel plates before and after hydrogen reduction. The results show that the oxygen charge states comprise a mixture of bridging oxygen (BO), non-bridging oxygen (NBO), and hydroxide (-OH), with BO as the main charge state, and that before hydrogen reduction Si, Pb, and Bi are bonded not only with oxygen but also with F to form fluorides. After hydrogen reduction, the binding states of O and Si are unchanged; [BO] and [NBO] decrease, while [OH] increases markedly. Si fluoride reacts with H2O at high temperature to produce quantities of ≡Si-O-. In the surface region of the reduced samples, lead exists as a mixture of Pb0 and Pb2+, Bi exists mainly as Bi0, and the K 2p and Na 1s signals re-emerge. It is these changes in the content and chemical states of the microchannel plates after hydrogen-reduction processing that make the secondary-electron-emission yield 1.5 times higher than that of the samples before reduction, and that cause the bulk resistivity to drop markedly by 3-4 orders of magnitude.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941R (2011) https://doi.org/10.1117/12.900290
A new real-time low-light-level (LLL) image enhancement algorithm suitable for FPGA implementation is proposed in this paper. In real-time LLL image processing, temporal- and spatial-domain noise is the primary factor limiting system precision, so reducing LLL image noise and improving precision is particularly important. To reduce LLL image noise, we use temporal recursive filtering, and good results are obtained.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941S (2011) https://doi.org/10.1117/12.900296
The inside surface of hollow reactors is mostly inspected with industrial fiber endoscopes. To obtain a large view angle, industrial endoscopes generally use ultra-wide fisheye lenses, which suffer from large image distortion and small depth of focus. Industrial endoscopes also rely on direct optical viewing, which is inconvenient for the observer; even with a TV image-transmission system, it is hard to obtain high-quality images because of the limits of fiber-bundle image transmission.
The rapid inspection system for the inside surface of hollow reactors, based on photoelectric detection techniques, is composed of an optical detection system, a control system, a transmission system, an image-transmission system, and an image-acquisition, display, and data-processing system. It can inspect four holes and eight surfaces at the same time, with a testing time of only 50-60 seconds. It records the real condition of the inside surface of the hollow reactor as bitmaps, screens the stored bitmaps according to the set parameters, and finds the problem bitmaps, supplying them to technicians for identification and confirmation.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941T (2011) https://doi.org/10.1117/12.900312
In 2D non-contact body measurement, a transform model that converts 2D girth data of the human body into 3D girth data is required. However, an integrated model is hard to obtain, because different body-type categories determine different model parameters, so accurate classification of human body types based on the measured data is very important. The canonical transformation method is used to strengthen the similarity of data features within the same type and to broaden the diversity of data features between different types. The "accumulating dead bodies" ant colony algorithm is improved in this paper by employing road-information densities to help an ant select the probable path leading to a corpse-accumulation site when it moves data. In this way, the randomness and blindness of the ants' walks are eliminated, and the convergence speed of the algorithm is improved. To avoid unevenness in how often data units are visited, a union-data access mechanism is employed, which prevents the algorithm from falling into local traps. A clustering-validity function is selected to verify the clustering results for the body-measurement data. The experimental results indicate the effectiveness and efficiency of human-body clustering based on the improved ant colony algorithm. Based on the classification results, an accurate 3D body-data transform model can be founded, which should improve the accuracy of non-contact body measurement.
Lixun Tian, Ningfang Liao, Ali Chai, Boneng Tan, Deqi Cui, Jiajia Wang
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941U (2011) https://doi.org/10.1117/12.900316
The aim of this paper is to pave the way for establishing the analysis of light reflected from leaf surfaces as an analytical method for plant disease. An imaging LCTF spectrometer covering the visible range of 400-720 nm has been developed. This paper first outlines the structure of the imaging LCTF spectrometer, including its operational principles and construction. Next, spectral images acquired with the LCTF spectrometer in laboratory experiments are analyzed to measure the spectral characteristics of light reflected from cucumber-leaf surfaces infected by different germs. Then the results of the experiments conducted with the imaging spectrometer are shown, including the analyzed relative radiance of the light reflected from the plants and spectral images acquired at various wavelengths. These experimental results demonstrate clearly that light reflected from plants contaminated by different disease germs has different spectral properties.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941V (2011) https://doi.org/10.1117/12.900325
It is well known that the light signal weakens as it passes through the imaging chain. To ensure that the output image is bright enough for the human eye to "see" the information in the final image, we modeled the light signal passing through the x-ray imaging chain. The imaging chain is composed of several optoelectronic devices with distinct characteristics, so the model elements mainly comprise spectral matching and inverse-square distance attenuation, since the imaging system is assumed to operate under ideal conditions without noise disturbance. This paper can serve as a valuable reference for designing an x-ray imaging system, and likewise for other kinds of imaging systems.
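The two model elements named above can be sketched as a per-stage transfer factor: a spectral-matching overlap integral between the emission spectrum of one stage and the response curve of the next, multiplied by inverse-square geometric fall-off. The spectra and distances below are illustrative placeholders, not values from the paper.

```python
import numpy as np

def _integral(y, x):
    """Trapezoidal integral (kept explicit for portability)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def spectral_match_factor(wl, emission, response):
    """Fraction of the (normalized) emission spectrum captured by the
    next stage's response curve: an overlap integral over wavelength."""
    return _integral(emission * response, wl) / _integral(emission, wl)

def chain_transfer(wl, emission, response, distance, ref_distance=1.0):
    """One stage of the imaging chain: spectral matching times the
    inverse-square fall-off relative to a reference distance."""
    return spectral_match_factor(wl, emission, response) * (
        ref_distance / distance) ** 2
```

Multiplying the per-stage factors gives the end-to-end signal attenuation under the paper's ideal, noise-free assumption.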
Xiaofeng Bai M.D., Feng Shi, Hanliang Feng, Rong Liu, Lei Yin, Yingping He
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941W (2011) https://doi.org/10.1117/12.900343
The output signal-to-noise ratio (SNR) is an important technical index for evaluating the detectability of a microchannel-plate image intensifier tube, and the detection characteristics of the tube constrain the detectability of the night-vision system. It has been proved in theory and in practice that the output SNR of the image intensifier tube determines, through a square-root relation, the maximum viewing distance and imaging definition of a night-vision system used under low-light-level conditions. In this article, a method and device for measuring the output SNR of 18-mm microchannel-plate image intensifier tubes are introduced in detail, and the output SNR values of several selected 18-mm tubes are measured. With reference to the working conditions of the image intensifier tube, the relationship between the cathode, microchannel-plate, and screen voltages and the output SNR of 18-mm tubes is studied; the results are also applicable to other image intensifier tubes.
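The paper's measurement apparatus is not described here, but the quantity itself is commonly computed from a stack of frames of a uniformly illuminated patch as the temporal mean over the temporal standard deviation. The sketch below shows that conventional definition as an assumption, not the authors' specific device.

```python
import numpy as np

def output_snr(frames):
    """Output SNR from a stack of frames of a uniformly illuminated
    patch: per-pixel temporal mean over temporal standard deviation,
    averaged over the patch."""
    stack = np.asarray(frames, dtype=np.float64)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0, ddof=1)
    return float((mean / np.maximum(std, 1e-12)).mean())
```

With enough frames, the estimate converges to the ratio of the mean output level to the temporal noise at each operating voltage, which is how the voltage dependence would be mapped out.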
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941X (2011) https://doi.org/10.1117/12.900357
A dual-band IR imaging detector can respond to two correlated spectral bands of a target and a flare. Using the equivalent temperatures of the target and the flare, their radiation characteristics are simulated in MATLAB based on Planck's law of blackbody radiation, and the SWIR/MWIR bands in which the IR radiation of the target and the flare differ most are chosen. The theoretical calculation procedure is designed according to the transfer flow of the IR radiation. First, we calculate the response of an InSb IRFPA to a cavity blackbody and an extended-area blackbody; because the simulated results are consistent with the test results, the simulation procedure is of practical value. Then, considering a 128×128 HgCdTe detector with stacked pixels, the performance parameters of the HgCdTe SW/MW dual-band IR imaging detector are simulated with the MATLAB procedures. The goal of distinguishing the target from the flare by image and by dual-band ratio is reached, and two beneficial conclusions are drawn.
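The Planck's-law band comparison described above (done in MATLAB in the paper) can be sketched in Python. The band edges and the temperatures in the usage note are illustrative assumptions, standing in for a cool target versus a hot pyrotechnic flare.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wl_m, T):
    """Blackbody spectral radiance, W / (m^2 * sr * m)."""
    return (2 * H * C**2 / wl_m**5) / (np.exp(H * C / (wl_m * KB * T)) - 1)

def band_ratio(T_target, T_flare, band_sw, band_mw, n=500):
    """SWIR-to-MWIR in-band radiance ratio for the two sources."""
    def inband(T, band):
        wl = np.linspace(band[0], band[1], n)
        y = planck_radiance(wl, T)
        return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(wl))
    return (inband(T_target, band_sw) / inband(T_target, band_mw),
            inband(T_flare, band_sw) / inband(T_flare, band_mw))
```

For example, a ~300 K target yields a far smaller SWIR/MWIR ratio than a ~2000 K flare, which is the contrast the dual-band ratio exploits.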
Gui-cai Song, Yan-xiang Na, Qi Zhang, Wen-zong Shi
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941Y (2011) https://doi.org/10.1117/12.900474
Insulating oil is widely used in transformers and other large high-voltage electrical equipment. Its main functions are insulation, cooling, and arc extinction. When a transformer operates, it may overheat or discharge, generating gases, trace water, and trace metals in the transformer oil. This not only reduces the insulating capacity of the oil but also greatly reduces its arc-extinction ability, causing latent internal faults in transformers or other oil-filled electrical equipment that affect their operation.
In this paper, we simulate the transformer discharge effect by discharging in transformer oil. We then use spectral theory and spectroscopic techniques to measure and analyze the oil sample, combining the IR absorption peaks of the main fault-characteristic gases, and qualitatively analyze the CO, CO2, CH4, C2H6, C2H4, C2H2, and H2 in the gas mixture. The results show that Fourier-transform infrared spectroscopy is very effective for analyzing gases in transformer oil and can quickly detect possible problems in the equipment.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81941Z (2011) https://doi.org/10.1117/12.900496
Measuring the movement of a raster by the moiré-fringe method has the advantages of high sensitivity, high resolution, and non-contact measurement. A moiré-fringe image alternates between white and black stripes; the angle and width of the stripes are uniform, but the stripe terminators are not sharp. A fast method that precisely determines the width and angle of the moiré fringes is put forward in this paper.
The angle of the stripes is calculated first. According to the principle of the minimum mean squared error (MMSE), the closer a series of data values is, the smaller its MSE. The method is as follows: taking the image center as the origin, 180 lines are drawn through the origin at equal angular intervals; the mean squared error along each of the 180 lines is calculated, and the line with the smallest value gives a preliminary fringe angle α. To improve the calculation precision, the neighborhood of α is divided into 60 equal angles, and a precise fringe angle β is calculated by the same MMSE principle.
After the fringe angle is obtained, the fringe width is calculated. A line perpendicular to the moiré fringes is drawn, and the fringe width is measured along this line. To overcome the influence of noise, a diamond-shaped effective area is selected in the image; the data in this area are accumulated and projected along the fringe direction, yielding a sine curve. The fringe width is then obtained from the positions of the first and last wave crests and the number of crests.
Experiments prove that the precision of the proposed method is higher than that of the traditional frequency method: the width-calculation precision reaches 99.6% according to the width-detection-error indicator. The computing speed is also greatly improved over the traditional method, reaching 15 ms, which satisfies the real-time requirement.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819420 (2011) https://doi.org/10.1117/12.900497
An imaging Thomson scattering system has been designed and built to perform spatially resolved measurements of plasma electron temperature and density, incorporating a second-generation image intensifier and an EMCCD as the detection system. In general, the weakness of the scattered radiation is the foremost concern in Thomson scattering systems. It is therefore essential for the initial system design to avoid further loss of the radiant power transferred from the source to the detector, and to verify the detection capability of the designed setup. This paper focuses on three points. First, the key design parameters, including the magnification and f-number of the collection lens, the diameter and NA of the fiber, and the entrance- and exit-slit areas and f-number of the spectrometer, are designed interactively to maximize light throughput, with beam quality also taken into account. Then the system setup is described and the expected photon number per pulse per scattering length is calculated. Finally, the detection capability of the system is verified by comparing the measured radiation photons of a standard lamp with the photons calculated for the designed condition, with both spatial binning and EM gain applied.
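The "expected photon number per pulse per scattering length" has a standard back-of-the-envelope form, sketched below under simplifying assumptions: the 90°-scattering differential cross-section is approximated by the classical electron radius squared, and all optical losses are lumped into one transmission factor. The numbers in the test are illustrative, not the paper's design values.

```python
import math

def thomson_photons(E_laser_J, wl_m, n_e_m3, length_m, solid_angle_sr,
                    transmission, quantum_eff):
    """Expected detected photoelectrons per laser pulse:
    N = (E / h*nu) * n_e * L * (dSigma/dOmega) * dOmega * T * QE,
    with dSigma/dOmega ~ r_e^2 for 90-degree Thomson scattering."""
    H, C = 6.62607015e-34, 2.99792458e8
    R_E = 2.8179403262e-15  # classical electron radius, m
    n_incident = E_laser_J * wl_m / (H * C)   # photons per pulse
    return (n_incident * n_e_m3 * length_m * R_E**2
            * solid_angle_sr * transmission * quantum_eff)
```

The tiny cross-section (r_e² ≈ 7.9e-30 m²) is exactly why the abstract stresses preserving every factor of throughput in the collection optics and spectrometer.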
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819421 (2011) https://doi.org/10.1117/12.900507
To eliminate the influence of the smear effect on the follow-up processing of star images, this paper investigates the source and statistical model of the smear effect. After examining the working processes of interline, interframe, and full-frame charge-coupled devices (CCDs), this paper builds a statistical model of the background noise based on kernel density estimation and then proposes a radiometric-correction algorithm for smeared images based on modeling and estimating the probability density function of the background noise in the star image. Experimental results indicate that the algorithm removes the smear effect in star images efficiently while retaining the original information.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819422 (2011) https://doi.org/10.1117/12.900516
This paper focuses on the simulation and experimental testing of the position accuracy of a star sensor system under different spectra. A simulation model is presented to analyze the effect of stellar spectra on the star sensor. Stellar spectral types K0, A, and F are chosen as samples and simulated with the software CODE V. The simulated results show that the RMS position error of the image spot is 0.24 pixels. An experiment is carried out to verify this effect: by placing different band-pass filters in the optical path, different spectra of star light are achieved. The image-spot centroids are obtained at five different field angles for simulated star light in different wavelength ranges. The experimental results show an RMS position error of about 0.18 pixels, similar to the theoretical simulation analysis.
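The image-spot centroid whose RMS error is quoted above is conventionally the intensity-weighted center of mass of the spot; a minimal sketch of that computation follows (the abstract does not state which centroiding variant was used, so this is the plain form).

```python
import numpy as np

def spot_centroid(img):
    """Intensity-weighted centroid (center of mass) of an image spot,
    returned as sub-pixel (x, y) coordinates."""
    img = np.asarray(img, dtype=np.float64)
    total = img.sum()
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (xs * img).sum() / total, (ys * img).sum() / total
```

On a synthetic Gaussian spot the centroid recovers the sub-pixel center, which is what makes RMS errors of a fraction of a pixel (0.18-0.24 px here) meaningful.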
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819423 (2011) https://doi.org/10.1117/12.900519
Precise ground-target localization is an interesting problem, relevant not only for military but also for civilian applications, and is expected to be an emerging field with many potential uses. Ground-target location using loitering munitions (LM) requires estimating the aircraft position and attitude to a high degree of accuracy, and data derived from processing sensor images can supplement other navigation-sensor information, increasing the reliability and accuracy of navigation estimates during this flight phase. This paper presents a method for high-accuracy ground-target localization using an LM equipped with a video camera. The proposed method is based on a satellite- or aerial-image matching technique. To acquire the ground-target position intelligently and rapidly, and to improve the localization accuracy by estimating the target position jointly with the systematic LM and camera attitude-measurement errors, several techniques are proposed. First, ground-target geolocation based on ray tracing is used for comparison against our approach; with the proposed methods, the transformation from pixel to world coordinates can be carried out. Then the Hough transform is used for image alignment, and a median filter is applied to remove small details that are visible in the sensed image but not in the reference image. Finally, a novel edge-detection method and an image-matching algorithm based on bifurcation extraction are proposed. This method does not require accurate knowledge of the aircraft position and attitude, nor high-performance sensors; it is therefore especially suitable for LMs that cannot carry accurate sensors because of their limited payload weight and power resources. Simulation experiments and theoretical analysis demonstrate that high-accuracy ground-target localization is achieved with low-performance sensors, and in a timely fashion. The method can be used in reconnaissance and surveillance missions, or in any other environment with relevantly structured clutter.
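The ray-tracing baseline mentioned above (the pixel-to-world transformation) can be sketched for the simplest case: back-project a pixel through a pinhole camera and intersect the resulting ray with a flat ground plane at z = 0. All parameter names are illustrative; the paper's actual geometry and error model are not reproduced here.

```python
import numpy as np

def pixel_to_ground(u, v, f_pix, cx, cy, cam_pos, R_cam_to_world):
    """Geolocate a pixel by ray tracing: back-project (u, v) through a
    pinhole camera (focal length f_pix in pixels, principal point
    (cx, cy)), rotate the ray into world coordinates, and intersect it
    with the ground plane z = 0. cam_pos is the camera position."""
    ray_cam = np.array([(u - cx) / f_pix, (v - cy) / f_pix, 1.0])
    ray_world = R_cam_to_world @ ray_cam
    t = -cam_pos[2] / ray_world[2]   # ray parameter where z reaches 0
    return cam_pos + t * ray_world
```

Errors in cam_pos and R propagate directly into the ground coordinates, which is why the paper's image-matching refinement is needed when the attitude sensors are coarse.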
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819424 (2011) https://doi.org/10.1117/12.900521
Videometrics measures the displacement and distortion of objects and recovers their motion parameters from several views. As an optical measurement method it offers several advantages, such as high accuracy, non-contact operation, and a wide measurement range. The accuracy of videometrics is generally evaluated in two ways. The first computes the image reprojection error by comparing the image coordinates of the cooperative targets with the reprojected image coordinates of their calculated 3D positions after camera calibration. The second computes the 3D reprojection error, i.e., the spatial distances between the calculated 3D positions of the cooperative targets and the projection lines that pass through the optical center and the corresponding image points after camera calibration. Both methods actually measure kinds of calibration error, which are not the same as the real measurement errors. A measurement-error evaluation method is therefore proposed based on the classical adjustment method. First, the calibration process is derived from the differential form of the collinearity equations, which regard the 3D positions of the cooperative targets and their image-plane coordinates as known values whose mean-squared deviations are assumed known. Second, according to the classical adjustment method, the normal equations and their weight matrix are known, so the camera parameters (such as focal length and principal point) and their mean-squared errors can be calculated. Third, using the values from the second step and the transformed differential equations of the first step, the measurement errors can be calculated. The evaluation of videometric measurement errors follows from these three steps and can be computed as soon as the cameras are ready to measure and the cooperative targets are known. Simulations and experiments show that the proposed method is reasonable and efficient.
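The first accuracy criterion, the image reprojection error, can be sketched with a minimal pinhole-camera example (the focal length, principal point, and target points below are illustrative values, not data from the paper):

```python
import math

def project(point3d, f, cx, cy):
    """Pinhole projection of a 3D point (camera frame) to pixel coordinates."""
    X, Y, Z = point3d
    return (f * X / Z + cx, f * Y / Z + cy)

def rms_reprojection_error(points3d, observed, f, cx, cy):
    """RMS distance between observed image points and reprojected 3D points."""
    sq = 0.0
    for p3, (u, v) in zip(points3d, observed):
        ur, vr = project(p3, f, cx, cy)
        sq += (u - ur) ** 2 + (v - vr) ** 2
    return math.sqrt(sq / len(points3d))

# Synthetic cooperative targets and their images under the true camera (f = 800).
targets = [(0.1, -0.2, 2.0), (-0.3, 0.1, 2.5), (0.2, 0.2, 3.0)]
true_obs = [project(p, 800.0, 320.0, 240.0) for p in targets]

perfect = rms_reprojection_error(targets, true_obs, 800.0, 320.0, 240.0)
biased = rms_reprojection_error(targets, true_obs, 810.0, 320.0, 240.0)  # miscalibrated f
```

With a perfect calibration the reprojection error vanishes; a 10-pixel focal-length error shows up directly in the RMS value, which is why this criterion reflects calibration error rather than true measurement error.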
Yan Tian, Jian-zhong Cao, Da-wei Yao, Zhao-hui Xu, Jing Huang
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819425 (2011) https://doi.org/10.1117/12.900523
This paper investigates digital high-definition imaging technology and presents the design of a digital high-definition camera system. The system uses a large-array CCD (KAI-2093CM) conforming to the SMPTE 274M standard as the photoelectric transfer device, an FPGA plus AFE as the framework, and HD-SDI as the transmission interface, in combination with current digital high-definition video standards. Imaging results show that the camera achieves high-definition shooting: the pictures are clear and can be displayed in real time without stagnation. Moreover, this small, high-resolution camera can be applied to high-definition shooting in aerospace and other fields.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819426 (2011) https://doi.org/10.1117/12.900538
This paper presents a method to simultaneously acquire 3D hand and palmprint information by projecting composite color fringe patterns. Existing research mainly focuses on 2D biometric features, but features extracted from 2D images are distorted by pressure or lose the third dimension. 3D features acquired with non-contact operation preserve the characteristic distribution patterns without distortion and simultaneously capture the real hand morphology together with the global properties of the hand and palmprint. A prototype 3D imaging system is designed to capture and process composite color fringe patterns on the hand surface. The hardware comprises a DLP (digital light processing) projector, a color CCD camera with a FireWire port, and a personal computer (PC). To acquire accurate 3D shape data quickly, sinusoidal and binary fringe patterns are coded into the red, green, and blue channels to generate composite color fringe pattern images. The DLP projector projects the composite RGB fringe patterns onto the surface of the hand, and the CCD camera, from another viewpoint, captures the images and saves them to the computer for post-processing. Wrapped phase information is calculated with high precision from the sinusoidal fringe patterns, while the absolute fringe order of each sinusoidal pattern is determined from the binary fringe pattern sequences. The absolute phase map at each pixel is then obtained by combining the wrapped phase with the absolute fringe order. Experimental results on human hands show that the proposed method correctly recovers the absolute phase (shape) data of the hand and palmprint.
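The phase-combination step can be sketched numerically: the wrapped phase lies in (-pi, pi], and adding 2*pi times the fringe order recovered from the binary code patterns restores the absolute phase (the phase values below are made up for illustration, not the paper's data):

```python
import math

def wrap(phase):
    """Wrapped phase in (-pi, pi], as produced by arctangent-based fringe analysis."""
    return math.atan2(math.sin(phase), math.cos(phase))

def unwrap(wrapped, order):
    """Absolute phase = wrapped phase + 2*pi * fringe order."""
    return wrapped + 2.0 * math.pi * order

# Toy absolute phases (e.g., proportional to surface height).
true_phases = [0.5, 7.0, 13.3, -9.4]
recovered = []
for phi in true_phases:
    w = wrap(phi)                           # from the sinusoidal patterns
    k = round((phi - w) / (2.0 * math.pi))  # in practice: from the binary patterns
    recovered.append(unwrap(w, k))
```

In the real system the fringe order k comes from decoding the binary pattern sequence at each pixel; here it is derived from the known ground truth purely to close the loop of the toy example.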
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819427 (2011) https://doi.org/10.1117/12.900545
Close-range photogrammetry is a significant method for detecting the size, shape, and position of objects, owing to its convenience and high accuracy. In some extreme environments, however, the conventional approach cannot meet the measurement requirements, and much measurement work cannot be completed with traditional methods. This paper develops a new method for measuring object sections using a single-camera measurement model, in three main parts. First, two laser-fringe extraction methods are presented, and their precision and running time are compared by extracting laser fringes from images corrupted with different levels of Gaussian noise: the Steger method is more precise, while the curve-fitting method is faster. Second, the traditional AutoBar is improved to suit dark measurement environments. Since retro-reflective targets and common black-and-white targets are hard to recognize without a strobe light or adequate illumination, the retro-reflective material of the traditional AutoBar is replaced with LED lights, which can be recognized easily in images without strong flicker during photography. Finally, a simulation experiment demonstrates the whole measurement process and validates the feasibility of the new single-camera measurement model. The simulation results show that the model is feasible, greatly improves measurement efficiency, and makes measurement work more flexible.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819428 (2011) https://doi.org/10.1117/12.900563
In this paper a new three-dimensional (3D) liquid crystal display (LCD) mode based on backlight control is presented to avoid crosstalk between the left- and right-eye images in 3D display. There are two major issues in this new black-frame 3D display mode: playing every frame twice in succession, and switching the backlight periodically. First, the paper explains the cause of left/right image crosstalk and presents a solution. Then, instead of alternating left and right images frame by frame, each frame is repeated immediately after it is played. Finally, the backlight is switched periodically rather than left on continuously: it is turned off while a frame is displayed for the first time, turned on during the second display, and turned off again as the next frame begins to refresh, repeating the cycle. This periodic backlight control is the key to the black-frame 3D display mode. The mode not only achieves a better 3D display effect by avoiding left/right image crosstalk, but also saves backlight power. Theoretical analysis and experiments show that the method is reasonable and efficient.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819429 (2011) https://doi.org/10.1117/12.900564
Pose estimation relating two-dimensional (2D) images to a three-dimensional (3D) rigid object needs some known features to track. In practice, many algorithms perform this task with high accuracy, but all of them suffer when features are lost. This paper investigates pose estimation when some, or even all, of the known features become invisible. First, the known features are tracked to calculate the pose in the current and next images. Second, unknown but good-to-track features are automatically detected in both images. Third, the unknown features that lie on the rigid object and can be matched between the two images are retained. Because of the motion characteristics of the rigid object, the 3D information of these unknown on-object features can be solved from the object's poses at the two moments and the features' 2D locations in the two images, except in two cases: first, when neither the camera nor the object undergoes relative motion and the camera parameters (such as focal length and principal point) do not change between the two moments; second, when the two images share no common scene or contain no matched features. Finally, since the formerly unknown features are now known, pose estimation can continue in subsequent images despite the initial loss of known features, by repeating the process above. The robustness of pose estimation with different feature detectors, namely Kanade-Lucas-Tomasi (KLT) features, the Scale-Invariant Feature Transform (SIFT), and Speeded-Up Robust Features (SURF), is compared, and the impact of different relative motions between the camera and the rigid object is discussed. Graphics Processing Unit (GPU) parallel computing is used to extract and match hundreds of features for real-time pose estimation, which is hard to achieve on a Central Processing Unit (CPU). Compared with other pose estimation methods, this new method can estimate the pose between camera and object even when part or all of the known features are lost, and it responds quickly thanks to GPU parallel computing. The method can be widely used in vision-guided techniques to strengthen their intelligence and generality, and can also play an important role in autonomous navigation, positioning, and robotics in unknown environments. Theoretical analysis, simulations, and experiments demonstrate that the proposed method suppresses noise effectively, extracts features robustly, and meets real-time requirements.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942A (2011) https://doi.org/10.1117/12.900575
A 500 mm class SiC flat mirror was polished to 21 nm (RMS) with CCOS in a very short period. The lightweight mirror structure, the mounting method, the choice of diamond powder, and the CCOS procedure are presented in the paper. In addition, the efficiency of the process is further analyzed to make batch production of this kind of hard-material mirror possible.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942B (2011) https://doi.org/10.1117/12.900595
Coordinate measuring systems based on binocular stereo vision are widely used in many areas, but their effective measuring ranges are usually no larger than ten meters. With the development of camera and computer technology, large-scale surveillance is increasingly used in both civil and military applications. To meet this requirement, this paper develops a new measuring model for a binocular stereo vision system suited to outdoor surveillance, which recovers the 3D coordinates of a moving object. When the distance between the two cameras is hundreds of meters, installation and camera calibration are quite simple and convenient: no expensive calibration apparatus, elaborate setup, planar pattern shown at several orientations, or complicated camera imaging model is needed, and the parameters of the mathematical model are easy to obtain. After the measuring-system model is built, an error analysis shows the influence of each parameter on the system error. Both computer simulations and real data are used to validate the new, simple measuring-system model.
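The basic depth-from-disparity relation behind any rectified binocular system is Z = f*B/d; a minimal sketch (the focal length, baseline, and test point are illustrative values, not the parameters of the proposed model):

```python
def triangulate(u_left, u_right, v, f, baseline, cx, cy):
    """Recover camera-frame 3D coordinates from a rectified stereo pair.

    u_left, u_right: pixel columns of the same point in the left/right images.
    f: focal length in pixels; baseline: camera separation in meters.
    """
    d = u_left - u_right          # disparity (pixels)
    Z = f * baseline / d          # depth
    X = (u_left - cx) * Z / f
    Y = (v - cy) * Z / f
    return (X, Y, Z)

# Synthetic check: project a known point into both cameras, then triangulate.
f, B, cx, cy = 1000.0, 0.5, 640.0, 360.0
X, Y, Z = 2.0, -1.0, 50.0
u_l = f * X / Z + cx
u_r = f * (X - B) / Z + cx
v = f * Y / Z + cy
recovered = triangulate(u_l, u_r, v, f, B, cx, cy)
```

The relation also explains why a baseline of hundreds of meters helps at surveillance distances: for a fixed depth, a longer baseline produces a larger disparity and hence a smaller relative depth error.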
Lei Yan, Feng Shi, Yaojin Cheng, Hongchang Cheng, Hongli Shi, Zhipeng Hou, Feng Liu
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942C (2011) https://doi.org/10.1117/12.900669
To illustrate how gas on the surface affects intensifier performance, the Kovar ring surface was tested by XPS. From the results, the gas species and quantities are calculated. Using the theory of thermal and electron-stimulated gas desorption, the gas remaining on the surface after thermal and electron treatment is estimated, and the surface outgassing rate is evaluated under the assumption that the image intensifier operates in a 10^-3 lx environment. Finally, the performance of the image intensifier is evaluated in terms of the effect of surface gas.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942D (2011) https://doi.org/10.1117/12.900675
In detection and diagnosis it has long been hoped that X-rays could be focused for imaging in the same way as visible light, which is one of the key technologies for improving the diagnostic precision of X-ray systems. However, visible-light focusing optics cannot be used to focus X-rays, because at X-ray wavelengths the refractive index decrement is very small and the absorption is very strong, so another approach must be tried. In recent years, scientists have invented a new X-ray refractive optic, the X-ray compound refractive lens (CRL), composed of many double-concave lenses stacked in a linear array. In this article, we present the fabrication process of CRLs made of aluminum, magnesium, and silicon. We tried and compared several process methods, built a test platform on which all the CRLs were tested, obtained focal-spot images, and made a few proposals for future work.
Lai-an Qin, Zai-hong Hou, Yi Wu, Feng-fu Tan, Feng He
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942E (2011) https://doi.org/10.1117/12.900679
With the wide application of photoelectric equipment in military and civilian areas such as aviation and arms control, high-speed image acquisition and real-time processing have attracted increasing attention. To quickly acquire accurate beacon position information in a photoelectric tracking system, an embedded high-frame-rate CMOS beacon positioning system based on DaVinci technology is presented. The system detects the beacon and acquires its position at high speed. Using the CMOS sensor MT9M001 as the detector, it can capture images and compute centroids at different speeds. It uses a centroid algorithm based on a double-gate object-tracking strategy to locate the spot centroid, and can process images of 600x550 resolution at 130 fps. The system features dynamic reconfiguration and high computational efficiency, and can be applied to photoelectric tracking and space laser communication.
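The intensity-weighted centroid underlying such a spot-positioning system can be sketched as follows (the gating here is a single threshold window; the paper's actual double-gate strategy and threshold values are not given, so this is illustrative only):

```python
def gated_centroid(image, threshold):
    """Intensity-weighted centroid over pixels above a gate threshold.

    image: 2D list of gray values; returns (row, col) of the spot centroid,
    or None if no pixel passes the gate.
    """
    total = sr = sc = 0.0
    for r, row in enumerate(image):
        for c, val in enumerate(row):
            if val > threshold:   # gate out background pixels
                total += val
                sr += r * val
                sc += c * val
    if total == 0:
        return None
    return (sr / total, sc / total)

# Synthetic 5x5 frame with a symmetric spot centered at (row=2, col=3).
frame = [[0] * 5 for _ in range(5)]
frame[2][3] = 100
frame[1][3] = frame[3][3] = frame[2][2] = frame[2][4] = 50
center = gated_centroid(frame, threshold=10)
```

Because the centroid is a weighted average over a small gated window, it reaches sub-pixel precision at very low computational cost, which is what makes 130 fps processing feasible on an embedded platform.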
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942F (2011) https://doi.org/10.1117/12.900682
A new pixel structure for CMOS image sensors is presented in this article. With multiple metal layers, the control pins of adjacent pixels can be separated, which makes it possible to overlap the exposure times of those pixels. After information is recovered by temporal differencing from the raw data of the overlapping exposures, the temporal resolution can be made smaller than the exposure time. Such pixels can be used in low-light or high-speed applications where the choice of exposure time is limited.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942G (2011) https://doi.org/10.1117/12.900689
Motivated by the high sensitivity of the resonant-cavity-enhanced InGaAs/GaAs quantum-dot photodetector, a wide-dynamic-range readout was required and designed. Compared with the self-integrated (SI) readout structure, the capacitive transimpedance amplifier (CTIA) readout structure offers bias stability and good linearity, making it more suitable for the quantum-dot photodetector. However, the CTIA structure needs an extended readout dynamic range for effective photoelectric conversion of the novel photodetector's signal. Through experimental comparison and analysis of different integration capacitors, a readout structure whose low-noise amplification gain is automatically adjusted was designed; the output dynamic range was extended to over 90 dB, and the signal-to-noise ratio and sensitivity of the output signal were significantly improved.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942H (2011) https://doi.org/10.1117/12.900691
Compressive sensing (CS) is a new sampling framework that provides an alternative to the well-known Shannon sampling theory. The basic idea of CS is that a signal or image, unknown but assumed sparse or compressible in some basis, can be accurately reconstructed from fewer measurements than the nominal number of pixels. By designing optical sensors that measure inner products between the scene and a set of test functions, as CS theory prescribes, sophisticated computational methods can infer critical scene structure and content, significantly economizing the resources required for data acquisition, storage, and transmission. In this paper, we investigate how CS provides new insights into optical imaging, including optical devices. We first give a brief overview of CS theory and review the associated fast numerical reconstruction algorithms. Next, we explore several different physically realizable optical systems based on CS principles. Finally, we briefly discuss possible implications for data compression and optical imaging.
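The CS recipe of a few inner-product measurements plus sparse reconstruction can be sketched in a toy setting. Here the sensing matrix is a handful of orthonormal DCT rows and the solver is plain iterative soft thresholding; both are illustrative stand-ins for the optical test functions and fast reconstruction algorithms the paper surveys:

```python
import math

n, rows = 8, [1, 2, 3, 5, 7]   # 5 measurements of an 8-sample signal

def dct_row(k):
    """Row k of the orthonormal DCT-II basis of length n."""
    a = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    return [a * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n)]

phi = [dct_row(k) for k in rows]   # test functions (sensing matrix)

x_true = [0.0] * n
x_true[2], x_true[5] = 1.0, -0.5   # sparse scene: 2 nonzeros out of 8
y = [sum(p[i] * x_true[i] for i in range(n)) for p in phi]  # inner products

# Iterative soft thresholding for min 0.5*||y - phi*x||^2 + lam*||x||_1.
lam, step = 1e-3, 1.0              # step <= 1 since phi has orthonormal rows
x = [0.0] * n
for _ in range(1000):
    r = [y[j] - sum(phi[j][i] * x[i] for i in range(n)) for j in range(len(phi))]
    g = [sum(phi[j][i] * r[j] for j in range(len(phi))) for i in range(n)]
    z = [x[i] + step * g[i] for i in range(n)]
    x = [math.copysign(max(abs(zi) - lam * step, 0.0), zi) for zi in z]

r = [y[j] - sum(phi[j][i] * x[i] for i in range(n)) for j in range(len(phi))]
residual = math.sqrt(sum(v * v for v in r))
```

The sparsity penalty lets 5 measurements pin down an 8-sample signal that ordinary linear algebra would leave underdetermined; in the optical systems the paper discusses, the rows of `phi` are realized physically (e.g., as programmable masks) rather than in software.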
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942I (2011) https://doi.org/10.1117/12.900703
What we believe to be a new method for recording speckle interference fringes with an electrically addressed liquid crystal display (EALCD) and a CCD is presented. The speckle patterns are obtained by the image-plane speckle method. Two patterns of the speckle interference field, before and after displacement, are recorded by the CCD camera and stored in a computer. After subtraction and taking the absolute value, a correlation fringe pattern containing the object's displacement information is obtained. Methods for suppressing optical noise are analyzed to enhance the signal-to-noise ratio of secondary speckle fringes using double-exposure measurements on rough surfaces. The speckle pattern is processed in both the spatial and frequency domains, and clear results are obtained. Based on a comparison of the results, a synthesized filter is proposed; it increases the contrast of the interference pattern and achieves a high SNR in the speckle fringe pattern. Experimental pictures are provided. The results show that synthesized filtering, combining the spatial and frequency domains, improves the fringe contrast and is the most effective approach studied.
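The subtract-and-take-absolute-value step can be sketched numerically: for two speckle fields whose phases differ by a deformation-induced phase, |I1 - I2| vanishes wherever that phase is a multiple of 2*pi, producing the correlation fringes. The speckle phases, modulation depth, and deformation profile below are synthetic, not experimental data:

```python
import math

width = 65
two_pi = 2.0 * math.pi
# Deterministic pseudo-random speckle phase (golden-angle sequence).
phase = [(i * 2.399963) % two_pi for i in range(width)]
b = 0.8   # modulation depth of the speckle interference

def intensity(i, deform):
    """Speckle intensity: bias plus modulation at the local speckle phase."""
    return 1.0 + b * math.cos(phase[i] + deform)

# Deformation phase grows linearly from 0 to 4*pi across the field (two fringes).
deform = [4.0 * math.pi * i / (width - 1) for i in range(width)]
i1 = [intensity(i, 0.0) for i in range(width)]        # before displacement
i2 = [intensity(i, deform[i]) for i in range(width)]  # after displacement
fringe = [abs(a - c) for a, c in zip(i1, i2)]         # correlation fringe profile
```

Dark fringes appear at pixels 0, 32, and 64, where the deformation phase is 0, 2*pi, and 4*pi; the raw random speckle texture in between is exactly the noise the paper's spatial- and frequency-domain filtering is designed to suppress.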
Shulin Liu, Yufeng Zhu, Feng Shi, Jing Nie, Taimin Zhang, Xiaojian Liu, Ni Zhang, Zhaolu Liu, Yingping He, et al.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942J (2011) https://doi.org/10.1117/12.900714
To ensure that the high-performance microchannel plate (HP-MCP) can be applied successfully in Gen III image intensifiers, vacuum baking and trial tests are carried out first; an ion-barrier film is then made on the input surface of the HP-MCP, followed by strict electron-scrubbing tests and measurements of resistance and gain at each stage; finally, a Gen III image intensifier is manufactured with the HP-MCP. Much data on the HP-MCP were obtained, and the conclusions indicate that the HP-MCP can be applied in Gen III image intensifiers and can yield qualified products. With batch production in mind, future study of MCP materials and processes, especially the compatibility between the HP-MCP and the Gen III image intensifier, should be strengthened.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942K (2011) https://doi.org/10.1117/12.900721
Feature points and object edges are two kinds of primitives frequently used in target-tracking algorithms. Feature points can be localized easily in an image, their correspondences between images can be detected accurately, and they can adapt to wide-baseline transformations. However, feature points are not stable: they are fragile to changes in illumination and viewpoint. Object edges, on the contrary, are stable under a very wide range of illumination and viewpoint changes. Unfortunately, edge-based algorithms often fail in the presence of highly textured targets and clutter, which produce too many irrelevant edges. We found that edge-based and point-based tracking have complementary failure modes. Based on this analysis, we propose a novel tracking algorithm that fuses point and edge features. Our algorithm first tracks the object by feature-point matching, and then uses the transformation parameters obtained in that step to initialize the edge tracking; in this way it alleviates the disturbance of irrelevant edges. We then use a texture-boundary detection algorithm to find the precise object boundary. Unlike conventional gradient-based edge detection, texture-boundary detection directly computes the most probable location of a texture boundary on the search line; it is therefore very fast and can be incorporated into a real-time tracking algorithm. Experimental results show that our tracking algorithm achieves outstanding accuracy and robustness.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942L (2011) https://doi.org/10.1117/12.900729
The quantum-efficiency equations of two kinds of reflection-mode GaAs photocathodes (GaAs-GaAs and AlGaAs-GaAs) with back-interface recombination velocity have been solved from the diffusion equations. Using these quantum-efficiency equations, the integral sensitivities of both kinds of cathodes are simulated as functions of active-layer thickness, electron diffusion length, and back-interface recombination velocity. The simulations show that the active-layer thickness of AlGaAs-GaAs cathodes has an optimum value at which the sensitivity is maximized. Under most conditions, the theoretical integral sensitivities of AlGaAs-GaAs cathodes are greater than those of GaAs-GaAs cathodes; this is attributed to the AlGaAs-GaAs interface barrier reflecting most photoelectrons back into the active layer. The theoretical spectral responses of both kinds of cathodes are also simulated. We found that the increase in integral sensitivity of AlGaAs-GaAs cathodes is mainly reflected in the increased spectral response at long wavelengths.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942M (2011) https://doi.org/10.1117/12.900743
In this paper, a novel X-ray imaging system is introduced. It is CCD-based, but unlike the traditional CCD-based X-ray imaging system composed of an X-ray intensifying screen, a CCD, and a low-light-level image intensifier, it uses a zoom lens for coupling. The zoom lens gives a continuously variable field of view, which not only reduces geometric blur but also produces several image pairs for stereo imaging. This makes it convenient to extract three-dimensional information from a group of two-dimensional X-ray images, and it is valuable for stereovision radiography in medical diagnosis, security checking, non-destructive testing, and industrial inspection. The stereo imaging method may also serve as a reference for three-dimensional reconstruction in daily life.
Xinyang Wang, Jan Bogaerts, Werner Ogiers, Gerd Beeckman, Guy Meynants
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942N (2011) https://doi.org/10.1117/12.900781
In this paper, we address the issues of designing a CMOS image sensor for space applications. The performance of a 4T pinned-photodiode pixel under irradiation is shown, and an example of a CMOS image sensor designed for sun tracking is given. It is shown that the radiation tolerance of the pixel is improved by using a more advanced pixel architecture and a more advanced fabrication process. Special measures are required in the sensor design to increase the sensor's immunity to single-event upset and latch-up.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942O (2011) https://doi.org/10.1117/12.900801
In order to combine the merits of the ICCD (intensified charge-coupled device) and the EMCCD (electron-multiplying charge-coupled device) for detecting weak signals against a strong radiation background in a high-resolution Thomson scattering system, a second-generation image intensifier lens-coupled to an EMCCD is used as the detector. The signal photon flux in the actual measurement situation is so low that the gain of the image intensifier, the on-chip multiplication gain of the EMCCD, and the on-chip binning scheme may all need to be utilized to enhance the detection capability or to relax the demands on the other devices. At the same time, however, these amplification processes add noise beyond that of the detector itself, which further degrades the signal-to-noise ratio (SNR). This paper focuses on three points. Firstly, the three gain mechanisms, MCP gain, EM gain, and binning, are described theoretically. Secondly, the increase in signal counts achieved by this detector combination, as well as the total noise, is investigated experimentally at various gain settings. Finally, a gain-selection discipline aiming at an optimum SNR is generalized from the comparison between test results and theory.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942P (2011) https://doi.org/10.1117/12.900806
This paper proposes a sub-pixel image correlation algorithm that yields more precise results. Its principle is to use the distribution of the correlation peak to compute a weighted multi-pixel estimate of the match location. Image correlation calculates the grayscale similarity between an image template and a matching image; the similarity is highest at the location that best matches the template, and it remains high in the surrounding neighborhood. We use the pixels in this local area around the computed match point to obtain sub-pixel accuracy, with the correlation value of each pixel serving as its weight in the sub-pixel calculation. The sub-pixel location is more accurate than the integer one, and we applied this method to background compensation when detecting targets in video image sequences. Experimental data presented at the end of the paper show that this sub-pixel image correlation obtains better results.
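The correlation-weighted refinement described in this abstract can be sketched as follows; the function name, window radius, and background-subtraction step are our own illustrative choices, not the authors' implementation:

```python
import numpy as np

def subpixel_peak(corr, radius=1):
    """Refine the integer correlation peak to sub-pixel accuracy by a
    correlation-weighted centroid over a small neighbourhood (sketch of
    the idea in the abstract; details are assumptions)."""
    y0, x0 = np.unravel_index(np.argmax(corr), corr.shape)
    ys = slice(max(y0 - radius, 0), min(y0 + radius + 1, corr.shape[0]))
    xs = slice(max(x0 - radius, 0), min(x0 + radius + 1, corr.shape[1]))
    window = corr[ys, xs]
    yy, xx = np.mgrid[ys, xs]
    w = window - window.min()          # keep weights non-negative
    if w.sum() == 0:
        return float(y0), float(x0)
    return float((yy * w).sum() / w.sum()), float((xx * w).sum() / w.sum())
```

For a symmetric peak the centroid coincides with the true maximum; for an asymmetric peak it shifts the estimate toward the side where the correlation stays high, which is the sub-pixel effect the paper exploits.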
Yongjie Wang, Zengqian Yin, Mingqiang Huang, Jingyu Wan
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942Q (2011) https://doi.org/10.1117/12.900815
A new method is presented for measuring the electrical parameters of a dielectric barrier discharge (DBD), and the effect of the barrier on the discharge is investigated. Results show that the number of discharge current pulses differs between the two half-periods of the applied voltage, and the current pulse width is in the range of 160 to 280 ns. The discharge power increases monotonically with the applied voltage; the maximum power is 22.62 W, corresponding to a power density of 5.76 W/cm³. The electric field in the gas gap decreases monotonically as the gap widens, and an optimum working condition for the DBD is proposed.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942R (2011) https://doi.org/10.1117/12.900859
A novel measuring system based on a total station, named the Theodolite-camera Videometrics System (TVS), is introduced in this paper, and the concept of the theodolite-camera, the key component of TVS, is proposed. A theodolite-camera generally consists of a non-metric camera and a rotation platform, and can rotate both horizontally and vertically. TVS based on a total station needs no field control points, and the fields of view of its theodolite-cameras are not fixed, so TVS is well suited to targets with a wide moving range or to large structures. The theodolite-camera model is analyzed and presented in detail. The adopted calibration strategy is demonstrated to be accurate and feasible on both simulated and real data, and TVS is shown to be a valid, reliable, and precise measuring system that lives up to expectations.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942S (2011) https://doi.org/10.1117/12.900863
A novel camera calibration method based on a circular ring is proposed in this paper. It is proven that the first two columns of the computed point-transfer homography mapping the circular-ring plane to the image plane carry an isometric ambiguity. Even so, the constraint that the homography imposes on the IAC (the image of the absolute conic) still holds, so it can be applied to calibrate the internal camera parameters. The ambiguity in the first two columns of the homography directly results in an isometric ambiguity of the rotation matrix, which can be explained geometrically as the isotropy of the circular ring. The third column of the homography, however, has no ambiguity, so its uniqueness means that the translation vector is unambiguous. The external camera parameters can thus be calibrated using the circular ring even though the rotation matrix is determined only up to an isometric transformation within the model plane, which most distinguishes this method from other plane-based calibration methods using points, lines, or multiple conics. The proposed method has two distinct advantages over calibration based on coplanar points or lines: better noise immunity, owing to the global nature of the circular-ring feature, and automatic calibration, because matching the circular-ring feature between images is much easier than matching points or lines. Both simulated and real data are used to demonstrate the correctness, high accuracy, and robustness of our calibration method.
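The IAC constraint mentioned in the abstract is the standard plane-based one: if h1 and h2 are the first two columns of the homography, then h1ᵀωh2 = 0 and h1ᵀωh1 = h2ᵀωh2 for the IAC ω. A minimal sketch of building these two linear constraints (our own illustrative code, not the authors'):

```python
import numpy as np

def iac_constraints(H):
    """Two linear constraints on the IAC omega (symmetric 3x3, six unknowns
    ordered [w00, w01, w11, w02, w12, w22]) contributed by one plane-to-image
    homography H: h1' w h2 = 0 and h1' w h1 - h2' w h2 = 0."""
    def v(i, j):
        hi, hj = H[:, i], H[:, j]
        return np.array([hi[0] * hj[0],
                         hi[0] * hj[1] + hi[1] * hj[0],
                         hi[1] * hj[1],
                         hi[2] * hj[0] + hi[0] * hj[2],
                         hi[2] * hj[1] + hi[1] * hj[2],
                         hi[2] * hj[2]])
    return np.vstack([v(0, 1), v(0, 0) - v(1, 1)])
```

Stacking these rows from three or more views and taking the SVD null vector recovers ω, from which the internal parameters follow; the isometric ambiguity of the first two columns leaves these constraints unchanged, which is the point the abstract makes.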
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942T (2011) https://doi.org/10.1117/12.900864
Three-dimensional (3-D) data mosaicking is an indispensable step in surface measurement and digital terrain map generation. To address the mosaicking of locally unorganized point clouds with only rough registration and many mismatched points, a new RANSAC-based mosaic method for 3-D surfaces is proposed. Each cycle of the method proceeds through random sampling with an additional shape constraint, point-cloud data normalization, absolute orientation, data denormalization, inlier counting, and so on. After N random sampling trials the largest consensus set is selected, and finally the model is re-estimated using all the points in the selected subset. The minimal subset consists of three non-collinear points forming a triangle, and the triangle's shape is taken into account during random sampling to keep the sample selection reasonable. A new coordinate-system transformation algorithm presented in this paper is used to avoid singularities: the whole rotation between the two coordinate systems is resolved into two rotations expressed by Euler angle vectors, each with an explicit physical meaning. Both simulated and real data are used to verify the correctness and validity of the method. It has good noise immunity owing to its robust estimation, and high accuracy because the shape constraint is added to the random sampling and data normalization is added to the absolute orientation. The method is applicable to high-precision measurement of three-dimensional surfaces and to 3-D terrain mosaicking.
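The RANSAC loop described above can be sketched as follows. This is a hedged illustration under our own assumptions: correspondences are given, the absolute-orientation step uses the standard SVD (Kabsch) solution as a stand-in for the paper's two-rotation Euler-angle formulation, and normalization and the triangle-shape test are reduced to a simple collinearity check:

```python
import numpy as np

def rigid_from_pts(P, Q):
    """Least-squares rigid transform with Q ~ P @ R.T + t (Kabsch via SVD)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((Q - cQ).T @ (P - cP))
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # guard against reflection
    R = U @ D @ Vt
    return R, cQ - R @ cP

def ransac_align(P, Q, trials=200, tol=0.05, rng=np.random.default_rng(0)):
    """RANSAC over 3-point minimal subsets; keeps the largest consensus set
    and re-estimates on it, assuming P[i] corresponds to Q[i]."""
    best = np.zeros(len(P), dtype=bool)
    for _ in range(trials):
        idx = rng.choice(len(P), 3, replace=False)
        if np.linalg.matrix_rank(P[idx] - P[idx[0]]) < 2:   # skip collinear triples
            continue
        R, t = rigid_from_pts(P[idx], Q[idx])
        inliers = np.linalg.norm(Q - (P @ R.T + t), axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return rigid_from_pts(P[best], Q[best])   # final re-estimate on consensus set
```

With noise-free inliers and gross outliers, the consensus set converges to the inliers and the final re-estimate recovers the true transform, which is the robustness property the abstract claims.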
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942U (2011) https://doi.org/10.1117/12.900919
We review methods for reducing TDM crosstalk in an inline FBG-based Fabry-Perot sensor. Drawing on literature from recent years and on our current work, we analyze the causes of TDM crosstalk and characterize the suppression methods relative to those causes. The method of reducing TDM crosstalk using the layer-peeling algorithm is presented in detail, covering its key technology, its principles, and its development and applications.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942V (2011) https://doi.org/10.1117/12.900921
Ultraviolet detection technology is playing an increasingly important role in civil applications, especially corona-discharge detection. The UV imaging detector is now one of the most important instruments for detecting flaws in power equipment. Meanwhile, modern head-mounted displays (HMDs) have found applications in the military, industrial production, medical treatment, entertainment, 3D visualization, education, and training. We applied head-mounted displays to UV image detection, and a novel type of head-mounted display is presented: the solar-blind UV head-mounted display. Its structure is given. With the solar-blind UV head-mounted display, a real-time, isometric, visible image of the corona discharge is correctly displayed over the background scene where the discharge occurs. The user sees the visible image of the corona discharge on the real scene rather than on a small screen, and can therefore easily find and repair flaws in power equipment. Compared with the traditional UV imaging detector, introducing the HMD simplifies the structure of the whole system: the original visible-spectrum optical system is replaced by the eye, and optical image fusion is used instead of the digital image fusion required by a traditional UV imaging detector. Since neither the visible-spectrum optical system nor the digital image fusion system is needed, the whole system is cheaper than the traditional UV imaging detector. Another advantage of the solar-blind UV head-mounted display is that both of the user's hands remain free, so the user can work on the equipment while observing the corona discharge. The solar-blind UV head-mounted display thus presents the corona discharge to the user in a better way, and it will play an important role in corona detection in the future.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942W (2011) https://doi.org/10.1117/12.900932
The signal-to-noise ratio (SNR) is an important quantitative parameter for evaluating the capability of spectrometers. The noise of the CMOS image sensor, stray light, and radiometric distortion all play important roles in a spectrometer's SNR performance. An Offner imaging spectrometer is designed and tested. By measuring the spectrometer's spectral response, its SNR is calculated by both the traditional statistical method and wavelet analysis. The two methods give similar results and can provide useful information during spectrometer commissioning as well as for performance evaluation.
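The "traditional statistical method" referred to here is commonly the per-pixel temporal mean divided by the temporal standard deviation over repeated acquisitions of a stable scene. A minimal sketch under that assumption (the function and array layout are ours, not the paper's):

```python
import numpy as np

def snr_statistical(frames):
    """Per-pixel statistical SNR from repeated acquisitions of a stable
    scene: temporal mean over temporal standard deviation.
    `frames` has shape (n_frames, rows, cols)."""
    mean = frames.mean(axis=0)
    std = frames.std(axis=0, ddof=1)
    return np.where(std > 0, mean / std, np.inf)
```

For a signal level S with additive noise of standard deviation sigma, the estimate converges to S/sigma as the number of frames grows.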
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942X (2011) https://doi.org/10.1117/12.900956
The weak-light characteristics of a GaAs/InGaAs resonant-cavity-enhanced (RCE) quantum dot photoelectric sensor with resonant coupling are presented. To exploit its high quantum efficiency for higher-sensitivity applications, a readout integrated circuit (ROIC) based on a capacitive-feedback transimpedance amplifier (CTIA) was designed to handle the voltage response of the novel sensor. The readout circuit was designed to match a 2x8 array. A computer-aided system based on an STM32 microcontroller was also developed to acquire the readout parameters of the sensor. With a 633 nm laser beam of 7 nW radiation intensity incident on the sensor window, the readout response voltage exceeded 200 mV, corresponding to a responsivity of 7.14x10^7 V/W at 120 K and a 15.8 μs integration time. Thanks to the high reliability and precision of the software and hardware, the system could be applied to real-time two-dimensional gray-scale display.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942Y (2011) https://doi.org/10.1117/12.900961
Non-contact measurement is an important technology in many domains, such as the monitoring of tool breakage and tool wear. Based on curve fitting and locating the inflection points, we present a high-accuracy non-contact diameter measurement system. The system comprises a linear-array CCD, CCD driving circuit, power supply, workseat, light source, data acquisition card, and so on. The linear-array CCD has 2048 pixels; each pixel is 14 μm x 14 μm, with the pixel pitch equal to the pixel size. The stabilized voltage supply has a constant output of 3 V. The light is generated by a halogen tungsten lamp, which poses no risk to the health of the whole system. The data acquisition card converts the analog signal to a digital signal with 12-bit accuracy. The errors due to non-uniform pixel sensitivity of the CCD and to electrical noise are analyzed in detail. The measurement system has a simple structure and high measuring precision, and can run automatically. Experiments show that the system measures diameters in the range Φ0.5 to Φ10 mm, and the total measurement instability of the system is within ±1.4 μm.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81942Z (2011) https://doi.org/10.1117/12.900975
A low-noise, relatively high-dynamic-range CMOS active pixel sensor (APS) using a variable-gain column amplifier is presented and analyzed. The signal path in each column contains a pixel source follower, a switched-capacitor noise-cancelling variable-gain amplifier, and a correlated double sampling (CDS) circuit. Using a high gain in the column amplifier reduces the input-referred random noise, but it may at the same time reduce the dynamic range of the device. In this paper, we present a detailed analysis of the noise and the dynamic range as a function of the column amplifier gain. The total random read noise can be analyzed in three parts: the first part comes from the pixel circuit, including the pixel-related fixed-pattern noise, reset noise, and pixel source-follower noise; the second part comes from the column circuit, including the column-related fixed-pattern noise and the column amplifier noise; and the third part comes from the output amplifier in the chip-level circuit. The analysis suggests that the noise components from the pixel and column can be largely cancelled by the double-stage column noise canceller, while the noise from the chip-level output amplifier is the major noise source and can be greatly reduced if the signal is amplified before this noise is added. Both the analysis and the measured results indicate that a low input-referred noise and a relatively high dynamic range can be achieved by choosing a proper column amplifier gain.
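The gain trade-off in this abstract reduces to a simple noise-budget identity: noise added after the column amplifier is divided by the column gain when referred back to the input. A toy model under our own assumptions (residual noise values and the quadrature-sum form are illustrative, not the paper's measured figures):

```python
import math

def input_referred_noise(n_pixel, n_column, n_output, gain):
    """Illustrative input-referred read-noise budget, in electrons rms:
    residual pixel and column terms add in quadrature unchanged, while the
    chip-level output-amplifier term is divided by the column gain because
    the signal is amplified before that noise is added."""
    return math.sqrt(n_pixel ** 2 + n_column ** 2 + (n_output / gain) ** 2)
```

With, say, 2 e of residual pixel noise, 1 e of column noise, and 8 e of output-amplifier noise, raising the column gain from 1 to 8 drops the input-referred total from about 8.3 e to about 2.4 e, which is the mechanism the analysis describes; the cost is a proportionally reduced signal swing, hence the dynamic-range trade-off.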
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819430 (2011) https://doi.org/10.1117/12.900996
In this paper, the fundamentals of the TDICCD mapping camera are introduced, and the influence of satellite buffeting on the image quality of the TDICCD camera is analyzed. To reduce this influence, a regulation scheme is put forward. Compared with the traditional TDICCD mapping camera, a special TDICCD focal plane is designed in which several TDICCD devices are butted end to end. A great deal of information is captured through this focal plane, and a mathematical model is established to analyze it. The results are then fed back to the satellite, and the satellite's attitude is actively regulated in real time. Finally, experiments and simulations are carried out for validation. The experimental results indicate that the design is valid.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819431 (2011) https://doi.org/10.1117/12.900998
Traditional in-orbit imaging performance calibration of an optical imaging satellite is based on the sun or on artificial ground targets. As imaging system performance advances, the calibration procedure can instead use fainter targets such as stars. The practical steps of star-based calibration are to select proper stars according to their magnitude and spectral characteristics, center them in the image by image processing, and transform the image to obtain the imaging MTF. Modern space-based cameras with large apertures can transfer photoelectrons continuously to accumulate sufficiently long exposures, so faint stars can also be detected. However, the center of a star will generally be offset from the center of a detector pixel, which makes alignment difficult. With Discrete Fourier Transform (DFT) and Point Spread Function (PSF) correlation, the alignment accuracy can be improved considerably. Finally, in a complex imaging system the measured MTF also includes defocus and satellite platform jitter: the defocus component is circular and time-independent, whereas the jitter is linear over a short exposure, with a direction and length that may vary with time. Based on a time series of images and an iterative algorithm, the jitter characteristics can be separated. Simulations of the calibration procedure described above give satisfactory results, so these calibration outcomes can also be used to determine the defocus of the camera's focal plane and the high-frequency jitter of the satellite platform during the exposure.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819432 (2011) https://doi.org/10.1117/12.901001
In this paper, an integrated solution for CCD imaging circuits is put forward and compared with traditional CCD camera imaging circuits. The principle of the traditional CCD camera imaging circuits is described briefly, and their fundamental functions are introduced. The imaging circuits are the most important part of a CCD camera; they consist mainly of CCD driver circuits and CCD signal processing circuits. The signal processing circuits comprise a timing generator, preamplifier circuits, CDS circuits, low-pass filter circuits, PGA circuits, ADC circuits, storage circuits, output interface circuits, and so on. In the popular solution all of these circuits are built from separate components, which evidently leads to a complex circuit configuration, difficult debugging, and high power dissipation. The integrated solution, which combines an ADC with an FPGA device, offers high integration, a simple configuration, and better flexibility. Finally, the integrated solution for CCD imaging circuits is illustrated, and its problems are analyzed and summarized in detail.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819433 (2011) https://doi.org/10.1117/12.901005
This paper presents a high-speed CMOS image sensor (CIS) with column-parallel single-capacitor correlated double sampling (CDS), programmable gain amplifiers (PGAs), and single-slope analog-to-digital converters (ADCs). The single-capacitor CDS circuit uses only one capacitor, so its area is small. To attain appropriate image contrast under different lighting conditions, the signal range can be adjusted by the PGA. The single-slope ADC occupies a smaller chip area than other ADCs and is well suited to column-parallel CIS architectures. A prototype sensor of 256x256 pixels was realized in a 0.13 μm 1P3M CIS process. The pixel circuit is a 4T active pixel sensor (APS) with a pixel size of 10x10 μm², and the total chip area is 4x4 mm². The prototype achieves a full frame rate in excess of 250 frames per second, a sensitivity of 10.7 V/lx·s, a conversion gain of 55.6 μV/e, and a column-to-column fixed-pattern noise (FPN) of 0.41%.
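A single-slope ADC of the kind used here compares each column's sampled voltage against a shared ramp while a counter runs; the counter value at the crossing is the digital code, which is why one comparator per column suffices. A behavioral sketch under our own assumptions (ideal ramp, one LSB per clock):

```python
def single_slope_adc(vin, vref=1.0, bits=10):
    """Behavioral model of a column single-slope ADC: a shared ramp rises one
    LSB per clock, and the column latches its counter when the ramp reaches
    its sampled input. Idealized illustration only."""
    lsb = vref / (1 << bits)
    code = 0
    ramp = 0.0
    while ramp < vin and code < (1 << bits) - 1:   # clip at full scale
        ramp += lsb
        code += 1
    return code
```

The conversion time grows linearly with the code (up to 2^bits clocks), which is the area-versus-speed trade-off that makes this architecture attractive for column-parallel layouts but limits per-column conversion rate.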
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819434 (2011) https://doi.org/10.1117/12.901006
This paper introduces a novel approach to depth-map post-processing that enhances depth-map resolution in order to achieve visually pleasing 3D models from a new monocular 2D/3D imaging system consisting of a Photonic Mixer Device (PMD) range camera and a standard color camera. The proposed method adopts the inversion framework of Compressive Sensing (CS). The low-resolution depth map is modeled as the result of blurring and down-sampling a high-resolution one. Based on the underlying assumption that the high-resolution depth map is compressible in the frequency domain, and on recent theoretical work on CS, the high-resolution version can be estimated and reconstructed by solving a non-linear optimization problem. The improved depth map in turn helps build a better 3D model of the scene. Experimental results on real data are presented. The proposed scheme also opens new possibilities for applying CS to a multitude of potential applications in multimodal data analysis and processing.
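The forward (degradation) model assumed by this approach can be sketched as follows; the block-mean blur and the function name are our own simple stand-ins, since the abstract does not specify the blur kernel. CS reconstruction then inverts this map under a frequency-domain sparsity prior:

```python
import numpy as np

def degrade(depth_hr, factor=4):
    """Forward model: the low-resolution depth map is a blurred,
    down-sampled version of the high-resolution one. A block mean over
    factor x factor tiles serves as a simple combined blur + decimation."""
    k = factor
    h, w = depth_hr.shape
    cropped = depth_hr[:h - h % k, :w - w % k]       # trim to a multiple of k
    return cropped.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
```

Reconstruction amounts to finding the high-resolution map whose degraded version matches the measurement while keeping its spectrum sparse, the non-linear optimization problem the abstract refers to.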
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819435 (2011) https://doi.org/10.1117/12.901016
In this paper, the image lag effect in a large 4T pixel for a high-speed image sensor is simulated and optimized. The image lag is mainly caused by the potential barrier and potential pocket near the transfer gate edge in a large 4T pixel. The simulation is based on a 0.13 μm CMOS process. The dependence of the potential barrier and the potential pocket on design and process parameters is studied. We optimize parameters such as the offset length between the P+ layer and the N layer, the N-layer implant energy in the pinned photodiode (PPD), and the TGVT layer implant dose. The simulation results show that minimum image lag is obtained with an offset length between P+ and N of 0.3 μm, an N-layer implant energy of 200 keV, and an N-layer implant dose of 3.5x10¹² cm⁻². The optimized design effectively improves the charge-transfer characteristics of the large 4T pixel and the performance of the high-speed CMOS image sensor.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819436 (2011) https://doi.org/10.1117/12.901021
The atmospheric transmission characteristics of ultraviolet radiation are unique, and they have driven the rapid development of ultraviolet detection technology in the military domain. A missile approach warning system based on UV detection distinguishes targets by detecting the ultraviolet radiation in the solar-blind spectrum emitted by a missile's plume. It is a passive warning system, with the advantages of good concealment, a low false-alarm rate, no cryogenic cooling, small size, and light weight, and it has become an important part of electro-optical countermeasures on the modern battlefield. This paper expounds the principle of ultraviolet detection and reviews developments in ultraviolet warning and related technologies, including ultraviolet transmission in the atmosphere, the ultraviolet radiance characteristics of target and background, ultraviolet photoelectric detectors, and ultraviolet detection techniques. An ultraviolet photoelectric imaging system has been designed, made up of three parts: a wide-field narrowband ultraviolet optical system, an ultraviolet ICCD camera, and an ultraviolet warning image processing device. A great number of imaging experiments on ultraviolet targets under several typical meteorological conditions have been carried out, and the images show that the system is efficient and reasonable. The characteristics of ultraviolet target imaging are tested and evaluated by analyzing the experimental images obtained with the system. This system provides a theoretical and experimental basis for engineering research on ultraviolet missile approach warning systems.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819437 (2011) https://doi.org/10.1117/12.901026
We report the growth of high-aluminum-content, heavily P-type-doped AlGaAs by molecular beam epitaxy (MBE) for the window layer of an extended-blue photocathode. The key factors affecting the AlGaAs window layer during epitaxial growth were analyzed, showing that growth conditions such as the V/III flux ratio, substrate temperature, and growth rate have a dramatic effect on the crystalline quality and morphology of the AlGaAs layer. On the basis of an optimized V/III flux ratio and an appropriate growth rate, the substrate temperature was adjusted, and heavily P-type-doped (≥5×10¹⁸ cm⁻³), large-area AlGaAs single-crystal material with excellent crystalline quality and good luminescence properties was grown on a GaAs (100) substrate. The morphology of the samples was examined by high-resolution optical microscopy, the crystalline quality was measured by X-ray diffraction, and the luminescence properties were measured with an integral luminescence system, yielding the relationship between crystalline quality and substrate temperature. The high-quality AlGaAs layers have been applied to GEN III photocathode window layers, extending the spectral response of the photocathode toward blue-green wavelengths; the quantum efficiency of the GEN III photocathode in the blue-green band is enhanced.
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 819438 (2011) https://doi.org/10.1117/12.901047
In this paper, we present a novel method for visibility enhancement based on an atmospheric scattering imaging model. Given only a single degraded image, we first estimate the global atmospheric light vector using the dark channel prior. A fast bilateral filter is then used to infer the atmospheric veil, which is the key contribution of this paper. Finally, the ideal scene radiance is recovered by directly solving the physics-based imaging equation. The main advantage of our weather-removal algorithm is that it requires no a priori scene structure, no distribution of scene reflectance, and no detailed knowledge of the particular weather condition, yet it achieves similar or better restoration results in a fraction of the time of state-of-the-art techniques, for both color and gray images. Experimental results demonstrate that our algorithm significantly enhances the details of hazy images, which is important for feature extraction and robust tracking in outdoor vision systems.
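As an illustration of the dark-channel step of such a dehazing pipeline, here is a minimal Python sketch. All function names are illustrative, and a plain local minimum filter stands in for the paper's fast bilateral filter when deriving the transmission/veil, so this is a sketch of the general approach rather than the authors' implementation:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over color channels, then a local minimum filter."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_airlight(img, patch=15, frac=0.001):
    """Pick the brightest pixel among the top dark-channel candidates."""
    dc = dark_channel(img, patch)
    n = max(1, int(dc.size * frac))
    idx = np.argsort(dc.ravel())[-n:]
    flat = img.reshape(-1, 3)
    brightness = flat[idx].sum(axis=1)
    return flat[idx][brightness.argmax()]

def dehaze(img, t0=0.1, omega=0.95, patch=15):
    """Invert the scattering model I = J*t + A*(1 - t) for scene radiance J."""
    A = estimate_airlight(img, patch)
    # Transmission from the dark channel of the airlight-normalized image
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.clip(t, t0, 1.0)
    return (img - A) / t[..., None] + A
```

The `omega` factor keeps a trace of haze for depth perception, and `t0` bounds the transmission to avoid amplifying noise in dense-haze regions.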
Bin Ren, Feng Shi, Hong-chang Cheng, Hui Liu, Liu Feng, Liang-dong Zhang, Zhuang Miao
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81943A (2011) https://doi.org/10.1117/12.901105
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81943B (2011) https://doi.org/10.1117/12.901599
Many factors affect space remote sensor imaging, degrading image contrast and resolution in ways that can be solved neither by improving the resolution of the imaging components nor by post-processing the images. To meet the imaging requirements of a space remote sensor, an image stabilization system is needed. In this paper, combining micro-mechanical and digital image stabilization, an image stabilization system based on DaVinci technology is designed. It comprises an imaging and sensing unit, an operating and controlling unit, and a fast steering mirror unit, with a TI TMS320DM6446 as the main processor performing focal plane control, image acquisition, motion vector estimation, digital image stabilization, fast steering mirror control, and image output. The workflow is as follows. First, the ground scene is imaged onto the focal plane through the optical system, and the short-exposure images acquired by the focal plane are transferred as a series to the computing and controlling unit. The inter-frame motion vector is then computed from the images using a gray projection algorithm and, together with the image series, used as input to iterative back projection, yielding the final picture. Meanwhile, a control value derived from the inter-frame motion vector is sent to the fast steering mirror unit to compensate for and damp vibrations. Experimental results demonstrate that the image stabilization system improves the imaging performance of the space remote sensor.
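The gray projection step can be sketched in a few lines of Python. This is a generic version of the technique (integer shifts, mean-square matching of row/column projections), not the authors' implementation; the function name and the search range are illustrative:

```python
import numpy as np

def gray_projection_shift(ref, cur, max_shift=8):
    """Estimate the integer (dy, dx) between frames by matching row and
    column gray-level projections; rolling `cur` by the returned shift
    realigns it with `ref`."""
    def best_shift(p_ref, p_cur):
        best, best_err = 0, np.inf
        for s in range(-max_shift, max_shift + 1):
            a = p_ref[max(0, s):len(p_ref) + min(0, s)]
            b = p_cur[max(0, -s):len(p_cur) + min(0, -s)]
            err = np.mean((a - b) ** 2)
            if err < best_err:
                best, best_err = s, err
        return best
    # Collapse each frame to 1-D projections: row means and column means
    rows_ref, rows_cur = ref.mean(axis=1), cur.mean(axis=1)
    cols_ref, cols_cur = ref.mean(axis=0), cur.mean(axis=0)
    return best_shift(rows_ref, rows_cur), best_shift(cols_ref, cols_cur)
```

Reducing each frame to two 1-D projections is what makes the method cheap enough for on-board processing: the 2-D search collapses into two independent 1-D searches.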
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81943C (2011) https://doi.org/10.1117/12.901631
Under photon-limited conditions, a photoelectric detection system capable of photon imaging must be used to image a faint target; a photon imaging detection system (PIDS) has therefore been constructed. When the system operates, however, part of the noise is inevitably amplified by the photon imaging head along with the signal. In this paper, two statistical methods are employed to analyze the overall noise characteristics of the PIDS: probability distribution curve fitting and a chi-square goodness-of-fit test. Applying these two methods to the experimental data demonstrates that the overall noise of the PIDS follows a Poisson distribution, consistent with the theoretical analysis.
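A chi-square goodness-of-fit test against a Poisson law of this kind can be sketched as follows. This is a generic Pearson test with sparse-bin merging, assuming SciPy is available; the binning policy and function name are illustrative, not taken from the paper:

```python
import numpy as np
from scipy import stats

def poisson_chi2_test(counts, min_expected=5):
    """Pearson chi-square goodness-of-fit of observed photon counts
    against a Poisson law with the sample-mean rate."""
    counts = np.asarray(counts)
    lam = counts.mean()
    kmax = counts.max()
    observed = np.bincount(counts, minlength=kmax + 1).astype(float)
    expected = stats.poisson.pmf(np.arange(kmax + 1), lam) * counts.size
    # Lump the right tail into the last bin so totals match
    expected[-1] += (1 - stats.poisson.cdf(kmax, lam)) * counts.size
    # Merge sparse bins (expected < min_expected) into their neighbor
    obs_m, exp_m = [], []
    o_acc = e_acc = 0.0
    for o, e in zip(observed, expected):
        o_acc += o
        e_acc += e
        if e_acc >= min_expected:
            obs_m.append(o_acc)
            exp_m.append(e_acc)
            o_acc = e_acc = 0.0
    if o_acc or e_acc:
        obs_m[-1] += o_acc
        exp_m[-1] += e_acc
    obs_m, exp_m = np.array(obs_m), np.array(exp_m)
    chi2 = np.sum((obs_m - exp_m) ** 2 / exp_m)
    dof = len(obs_m) - 2  # bins - 1 - one estimated parameter (lambda)
    p = stats.chi2.sf(chi2, dof)
    return chi2, dof, p
```

A large p-value means the Poisson hypothesis cannot be rejected at the chosen significance level, which is the form of conclusion the paper reports for the PIDS noise.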
Bao-jun Duan, Dong-wei Hei, Gu-zhou Song, Ji-ming Ma, Zhan-hong Zhang, Chang-cai Han, Yan Song, Ming Zhou, Lan Lei
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81943D (2011) https://doi.org/10.1117/12.901769
In this paper, based on the operating mechanism of a CCD camera, an event-based simulation method for γ-ray-induced transient noise was developed using the MCNP5 Monte Carlo code. With mono-energetic γ-ray sources, the transient noise of two different CCD cameras was measured at doses from 0.001 mR to 1 mR and at several incidence angles. ⁶⁰Co and ¹³⁷Cs served as the mono-energetic γ-ray sources, and Compton scattering was used to obtain lower-energy γ-rays. To isolate the pure γ-ray-induced transient noise, a method was developed for extracting it from images mixed with background noise. The transient noise was then characterized in terms of the number of noise clusters, the noise intensity spectrum, and the cluster size spectrum. From both simulation and experiment, the variation of the noise with γ-ray dose, energy, and incidence angle was obtained.
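The abstract does not detail the extraction method, so the sketch below assumes a common approach: subtract the background frame, threshold the residual, and label connected clusters to obtain the cluster count, size spectrum, and intensity spectrum. The function name and threshold convention are illustrative:

```python
import numpy as np

def extract_transient_noise(frame, background, k=5.0):
    """Subtract the background frame, threshold at k·sigma of the residual,
    and label 4-connected clusters of gamma-induced hits."""
    residual = frame.astype(float) - background.astype(float)
    sigma = residual.std()
    mask = residual > k * sigma
    labels = np.zeros(frame.shape, dtype=int)
    nlab = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        nlab += 1
        stack = [seed]              # flood fill one cluster
        while stack:
            y, x = stack.pop()
            if not (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]):
                continue
            if not mask[y, x] or labels[y, x]:
                continue
            labels[y, x] = nlab
            stack += [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    sizes = np.bincount(labels.ravel())[1:]                # size spectrum
    intensities = np.array([residual[labels == i].sum()
                            for i in range(1, nlab + 1)])  # intensity spectrum
    return nlab, sizes, intensities
```

Histogramming `sizes` and `intensities` over many frames gives exactly the three characterizations the paper lists: cluster count, intensity spectrum, and cluster size spectrum.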
Dongyan Zhang, Qinhong Liao, Linsheng Huang, Jinling Zhao, Shizhou Du, Zhihong Ma
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81943E (2011) https://doi.org/10.1117/12.902427
Ground-based hyperspectral imaging has a unique advantage for analyzing the component information of field crops because it combines imagery with spectra; how to fully exploit this data advantage, however, still needs dedicated study. In this work, the spectral reflectance of corn leaves at different growth stages was collected with a Pushbroom Imaging Spectrometer (PIS). The red edge position (REP) was then identified using six algorithms: first-derivative reflectance (FDR), polynomial function fitting (POLY), four-point interpolation (FPI), linear extrapolation method (LEM), inverted Gaussian (IG), and Lagrange interpolation (LAGR); and the correlation between REP and chlorophyll content was explored on the basis of the red edge amplitude changes. The results showed that: 1) The REP obtained by the different algorithms varied between 690 nm and 740 nm; the red edge amplitude changes were largest for FDR, POLY, and LAGR, varying from 692 nm to 730 nm; those of FPI and LEM varied from 713 nm to 740 nm; and IG was the narrowest, varying only between 702 nm and 710 nm. 2) Considering the relationship between REP and chlorophyll concentration under different conditions (growth stages, varieties, fertilization levels, and leaf positions), FDR and LAGR performed well for maize across all conditions; IG was suitable across growth stages; FPI distinguished varieties well; POLY was suitable across fertilization levels; and LEM showed wide red edge amplitude changes and a significant correlation with chlorophyll content, though its correlation coefficient was smaller than that of the other algorithms, a phenomenon that needs further study. These results provide a reference for quantitatively retrieving crop nutrients from ground-based hyperspectral imaging data.
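Of the six REP algorithms, the first-derivative reflectance (FDR) method is the simplest to state: the REP is the wavelength of maximum slope within the red edge window. A minimal Python sketch (window limits and function name are illustrative):

```python
import numpy as np

def red_edge_position_fdr(wavelengths, reflectance):
    """First-derivative red edge position: wavelength of the maximum
    slope of the reflectance spectrum inside the 680-760 nm window."""
    wl = np.asarray(wavelengths, dtype=float)
    refl = np.asarray(reflectance, dtype=float)
    d = np.gradient(refl, wl)                 # dR/dlambda
    window = (wl >= 680.0) & (wl <= 760.0)    # restrict to the red edge
    idx = np.argmax(np.where(window, d, -np.inf))
    return wl[idx]
```

The other five algorithms (POLY, FPI, LEM, IG, LAGR) refine this same idea by fitting or interpolating the spectrum around the inflection point to locate the REP at sub-band resolution.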
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81943F (2011) https://doi.org/10.1117/12.902689
Joule-Thomson coolers are widely used in infrared detectors because they are compact, light, and low-cost. As focal plane infrared detectors grow in mass and diameter, the performance of self-regulating Joule-Thomson coolers must improve accordingly. Self-regulating Joule-Thomson coolers use a limited supply of high-pressure gas to cool infrared detectors, so in developing a cooler around a given volume of stored gas it is important to study the fluid flow and heat transfer of Joule-Thomson coolers coupled to infrared detectors, especially the cooler's starting time.
A series of experiments on Joule-Thomson coolers coupled to 128×128 focal plane infrared detectors has been carried out. The heat exchanger of the coolers is made of a d = 0.5 mm capillary finned with copper wire; the coolers are self-regulated by bellows and are about 8 mm in diameter, with nitrogen as the working gas. The effect of working-gas pressure was studied, and the relation between starting time and working-gas pressure was shown to fit an exponential decay. An error analysis was also carried out.
Studying the performance of Joule-Thomson coolers coupled to infrared detectors is crucial, and deeper research will be carried out to improve Joule-Thomson coolers for infrared detectors.
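The reported exponential-decay relation between starting time and supply pressure can be expressed as a three-parameter fit. The model form and parameter names below are illustrative (the paper's actual coefficients are not given in the abstract), and SciPy's `curve_fit` is assumed available:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(p, a, b, c):
    """Starting time as an exponential decay of supply pressure:
    t(p) = a * exp(-p / b) + c  (illustrative model form)."""
    return a * np.exp(-p / b) + c

def fit_start_time(pressure, start_time):
    """Least-squares fit of the decay model to measured (pressure, time) data."""
    p0 = (start_time.max(), pressure.mean(), start_time.min())  # rough guess
    popt, _ = curve_fit(exp_decay, pressure, start_time, p0=p0)
    return popt
```

The asymptote `c` corresponds to the shortest achievable starting time at high supply pressure, which is the quantity of practical interest when sizing the stored-gas volume.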
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81943G (2011) https://doi.org/10.1117/12.903180
Traditional phase-shifting shape measurement techniques are not suitable for dynamic measurement because they require a number of fringe patterns. In this paper, the basic principle of fringe projection shape measurement is introduced, a method for extracting the phase of a single fringe pattern by the Hilbert transform is discussed, and three-dimensional shape measurement tests are conducted. Experimental results show that 3-D shape measurement can be achieved by Hilbert-transform phase shifting and that the method is suitable for dynamic measurement.
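Single-pattern phase extraction via the Hilbert transform rests on forming the analytic signal of a fringe line. A minimal FFT-based sketch (1-D, DC term removed by mean subtraction; the function name is illustrative, and real fringes need background suppression beyond a simple mean):

```python
import numpy as np

def fringe_phase_hilbert(fringe_row):
    """Wrapped phase of a single fringe line via the analytic signal:
    remove the DC term, then apply an FFT-based Hilbert transform."""
    s = np.asarray(fringe_row, dtype=float)
    s = s - s.mean()                       # suppress the background term
    n = s.size
    S = np.fft.fft(s)
    h = np.zeros(n)                        # analytic-signal filter:
    h[0] = 1.0                             # keep DC,
    if n % 2 == 0:
        h[n // 2] = 1.0                    # keep Nyquist,
        h[1:n // 2] = 2.0                  # double positive frequencies
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(S * h)          # s + i * Hilbert(s)
    return np.angle(analytic)              # wrapped phase in (-pi, pi]
```

The returned phase is wrapped, so a standard phase-unwrapping step is still needed before converting phase to height, but only one fringe pattern is consumed per frame, which is what makes the approach dynamic.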
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81943H (2011) https://doi.org/10.1117/12.903678
The star sensor simulation system is used to test star sensor performance on the ground; the star sensor itself is designed for star identification and spacecraft attitude determination. For hardware-in-the-loop simulation, a computer-generated star scene based on an astronomical star chart is rendered using OpenGL.
Tao Liu, Ju-feng Zhao, Hua-jun Feng, Zhi-hai Xu, Hui-fang Chen
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81943L (2011) https://doi.org/10.1117/12.900918
A dark frame is a mixture of fixed pattern noise (FPN), multiplicative Gaussian noise, and signal-independent noise, all of which also appear in the exposed image. Because of rising operating temperature inside the imaging system and minor drifts in circuit parameters, the FPN of each pixel varies slowly and non-uniformly from frame to frame. In this paper, the dark frame is modeled and the Kalman filter equations are derived to estimate the FPN level. We introduce a noise influence factor (NIF) to evaluate the influence of FPN on each pixel, so that a reasonable weight for each pixel can be set adaptively. A denoised image is obtained by weighted, pixel-by-pixel subtraction of the dark frame from the image data.
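The paper's exact state model and NIF weighting are not given in the abstract; the sketch below shows the general idea under a simple assumption, namely a per-pixel random-walk state (slowly drifting FPN) observed through noisy dark frames, followed by weighted dark-frame subtraction. All names and noise variances are illustrative:

```python
import numpy as np

class PixelwiseFPNKalman:
    """Scalar Kalman filter per pixel: the FPN level is a slowly drifting
    state (random walk) observed through noisy dark frames."""
    def __init__(self, shape, q=1e-4, r=1e-2):
        self.x = np.zeros(shape)          # FPN estimate
        self.p = np.ones(shape)           # estimate variance
        self.q = q                        # process (drift) noise variance
        self.r = r                        # measurement noise variance

    def update(self, dark_frame):
        p_pred = self.p + self.q          # predict: state constant, variance grows
        k = p_pred / (p_pred + self.r)    # Kalman gain
        self.x = self.x + k * (dark_frame - self.x)
        self.p = (1.0 - k) * p_pred
        return self.x

def denoise(image, fpn_estimate, weight=1.0):
    """Weighted dark-frame subtraction using the filtered FPN estimate;
    a per-pixel NIF-style weight map could replace the scalar weight."""
    return image - weight * fpn_estimate
```

Feeding a sequence of dark frames into `update` tracks the slow FPN drift, and the scalar `weight` is where a per-pixel NIF-derived weight map would plug in.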
Chang-Jiu Yang, Shuang Li, Zhen-Wei Qiu, Jin Hong, Yan-Li Qiao
Proceedings Volume International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Imaging Detectors and Applications, 81943M (2011) https://doi.org/10.1117/12.900336
Polarization imaging detection technology, which measures the polarization state of light emitted or reflected by targets, has potential applications in space target detection and remote sensing. There are two approaches to polarization-state detection: time-sequential and simultaneous. In contrast with the well-studied time-sequential acquisition systems, Simultaneous Imaging Polarization (SIP, one kind of simultaneous polarization detection) is a novel polarization imaging technology that is attracting growing research interest. Because the 0, 45, 90, and 135 degree polarization intensity images of the same target are acquired simultaneously on one detector, the false polarization effects introduced by rapidly moving targets, which are difficult to avoid in time-sequential systems, are eliminated.
This paper comprises seven major parts. First, we give a brief introduction to imaging polarization detection. Second, the polarization detection principle of the SIP system is presented. Third, the design of the SIP experimental setup is described. Fourth, we describe image registration for the instrument in detail. Fifth, the calibration of the instrument is reported: the depolarization of the instrument was calibrated with an integrating sphere, and the polarization detection accuracy was tested with a Variable Polarization Light Source (VPLS). Sixth, the polarization measurement results are presented and discussed. Finally, the conclusions and possible improvements of the system are given in the seventh part.
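The reduction from the four simultaneously acquired channels to polarization quantities follows the standard linear Stokes formulas; a minimal sketch (the function name is illustrative, and perfect analyzers are assumed):

```python
import numpy as np

def stokes_from_four(i0, i45, i90, i135):
    """Linear Stokes parameters, degree of linear polarization (DoLP),
    and angle of polarization (AoP) from the four intensity images."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
    aop = 0.5 * np.arctan2(s2, s1)       # radians, in (-pi/2, pi/2]
    return s0, s1, s2, dolp, aop
```

Applied pixel-by-pixel after the image registration step the paper describes, this yields the DoLP and AoP maps that a SIP instrument reports.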