Calibration of camera and system parameters is an essential step for a distance-measuring system based on binocular stereo vision. In conventional calibration, brightly colored objects are placed at the center of the surveillance area, since measurement errors grow large for targets far from the center. When calibration objects are not allowed inside the sensing area of interest, they must be placed somewhere outside it. A calibration scheme is proposed for correcting the system parameters of a large-scale two-dimensional event sensing and positioning system based on binocular stereo vision. A reference point is specified at the center of the surveillance area of about 200,000 square meters, and the coordinates of this center are known in the physical world. The coordinates of the cameras and of the calibration objects outside the surveillance area are measured in the physical-world coordinate system. From the measured longitude and latitude values, it is convenient to compute the angle between the line connecting a camera with the calibration object and the line connecting the same camera with the reference point. During calibration, the orientation of the camera is obtained, and the position of the object on the imaging plane is read out in pixels. After rotating the camera by the angle computed above, the reference point would lie on the optical axis of the camera in the ideal case. The accuracy of the angle-measuring device contributes to the error in aligning the optical axis with the reference point. In the experiment, a calibration object placed at the reference point allows its position on the imaging plane to be read out in pixels. By comparing the pixel difference between the two camera orientations, the error caused by rotating the camera can be determined. When another camera is added to form a binocular stereo vision system, its parameters are calibrated in the same way. Theoretical analysis shows that the error caused by adjusting the two cameras is bounded by a region that approximates a quadrilateral, whose area is determined by both the accuracy of the angle-measuring device and the distances between the cameras and the reference point. A comparison of theoretical and experimental results indicates the effectiveness of the scheme.
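The rotation angle described above is the angle at the camera between two lines of sight, which can be computed from planar ground coordinates with a dot product. A minimal sketch (coordinates and function name are hypothetical illustrations, not taken from the paper):

```python
import math

def rotation_angle(camera, calib_obj, ref_point):
    """Angle (radians) at the camera between the line of sight to the
    calibration object and the line of sight to the reference point.
    Points are (x, y) ground-plane coordinates in meters."""
    v1 = (calib_obj[0] - camera[0], calib_obj[1] - camera[1])
    v2 = (ref_point[0] - camera[0], ref_point[1] - camera[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))

# Hypothetical layout (meters): camera at the origin, calibration object
# due east, reference point 45 degrees away as seen from the camera.
theta = rotation_angle((0.0, 0.0), (100.0, 0.0), (100.0, 100.0))
print(math.degrees(theta))  # ~45.0
```

In practice the geodetic coordinates would first be projected into a local planar frame before applying this formula.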
At present, civil aviation airports use surface surveillance radar systems to monitor and position aircraft, vehicles, and other moving objects. Surface surveillance radar can cover most of the airport surface, but because of the geometry of terminals, covered bridges, and other buildings, such systems inevitably have small blind spots. This paper presents a monocular vision imaging model for airport surface surveillance that perceives the locations of moving objects on the surface, such as aircraft, vehicles, and personnel. The new model provides an important complement to airport surface surveillance and differs from traditional surface surveillance radar techniques. It not only gives air traffic control (ATC) a clear view of object activity but also performs image recognition and positioning of moving targets in the area, thereby improving the efficiency of airport operations and helping to avoid conflicts between aircraft and vehicles. The paper first introduces the monocular vision imaging model applied to airport surface surveillance and then analyzes the measurement accuracy of the model. The monocular vision imaging model is simple, low cost, and highly efficient; it is an advanced monitoring technique that can cover the blind spots of surface surveillance radar monitoring and positioning systems.
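The abstract does not spell out the positioning geometry, but monocular surface positioning typically relies on a flat-ground pinhole model. The following sketch illustrates that idea under an assumed horizontal optical axis; the function name and all numbers are hypothetical, not taken from the paper:

```python
def ground_distance(cam_height_m, focal_px, dv_px):
    """Flat-ground pinhole sketch with a horizontal optical axis:
    dv_px is the pixel offset of the object's ground-contact point
    below the principal point. Returns horizontal distance in meters,
    from similar triangles: d = H * f / dv."""
    if dv_px <= 0:
        raise ValueError("contact point must lie below the horizon line")
    return cam_height_m * focal_px / dv_px

# Hypothetical setup: camera on a 20 m mast, 1500 px focal length,
# ground-contact point 60 px below the image center.
print(ground_distance(20.0, 1500.0, 60.0))  # 500.0
```

The accuracy analysis in such a model follows directly from this relation: a one-pixel error in dv_px translates into a range error that grows quadratically with distance.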
Raman spectra and infrared imaging systems are used to study the internal temperature of polymer light-emitting diodes (PLEDs). The aim is to investigate the thermal degradation of PLEDs at different current densities. The anti-Stokes Raman intensity is proportional to the number of molecules in the excited vibrational energy level, so the internal temperature of a PLED at thermal equilibrium can be calculated from the ratio of anti-Stokes to Stokes Raman intensity using the Boltzmann equation. As the current density of the PLED increases from 0 mA/cm2 to 169 mA/cm2, the internal temperature of the PLED is found to increase accordingly. When the temperature reaches the glass transition temperature (Tg) of the emission layer, a phase change occurs and the layer enters an unstable, liquid-like free state. Local defects in the emission layer then cause a short circuit between the cathode and the anode of the PLED, and its luminescence fails. Raman spectroscopy is therefore considered a good method for measuring the temperature of thin-film semiconductor devices.
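The Boltzmann relation referred to above can be inverted for temperature. A sketch assuming the common form I_aS/I_S = ((v0+vm)/(v0-vm))^4 * exp(-h*c*vm/(kB*T)), where v0 is the excitation wavenumber and vm the Raman shift; the excitation line and shift below are hypothetical values, not the paper's measurements:

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e10    # speed of light in cm/s (Raman shifts in cm^-1)
KB = 1.380649e-23    # Boltzmann constant, J/K

def raman_temperature(ratio_as_s, shift_cm1, laser_cm1):
    """Invert the Boltzmann relation
    I_aS/I_S = ((v0+vm)/(v0-vm))**4 * exp(-h*c*vm/(kB*T))
    for the temperature T in kelvin."""
    scatter = ((laser_cm1 + shift_cm1) / (laser_cm1 - shift_cm1)) ** 4
    return H * C * shift_cm1 / (KB * math.log(scatter / ratio_as_s))

# Hypothetical reading: a 1350 cm^-1 mode under 532 nm excitation
# (~18797 cm^-1); the ratio below is what a 300 K sample would produce.
r300 = ((18797 + 1350) / (18797 - 1350)) ** 4 * math.exp(-H * C * 1350 / (KB * 300.0))
print(raman_temperature(r300, 1350.0, 18797.0))  # ~300 K
```

Because the exponential term decays rapidly, the anti-Stokes line is weak at room temperature, which is why this method works best for the elevated temperatures reached during device degradation.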