Digital video cameras are the main component of both visual and measuring optoelectronic devices. The parameters and characteristics of video cameras can vary significantly from one unit to another. Since the characteristics of a video camera largely determine the characteristics of the entire device, it is important to monitor them. This ensures the stability of the camera characteristics and, consequently, image stability and improved measurement accuracy when video cameras are used in optoelectronic measuring devices. This paper presents an experimental test bench designed to study the parameters of mass-produced video cameras based on CMOS matrix photodetectors. Methods are proposed for determining such camera parameters as the non-uniformity of photosensitivity across the sensor area, as well as the change in signal-to-noise ratio with variations in exposure level, exposure time and amplifier gain.
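The abstract does not specify the estimation procedure, but the two quantities it names can be illustrated with a minimal sketch: given a stack of frames captured under constant, uniform illumination, temporal statistics give the signal-to-noise ratio and the spatial variation of the mean frame gives the photosensitivity non-uniformity. The function name and estimators below are our assumptions, not the paper's method:

```python
import numpy as np

def snr_and_prnu(frames):
    """Estimate SNR and photosensitivity non-uniformity from a stack of
    frames (shape (N, H, W)) taken under constant uniform illumination.
    These estimators are illustrative assumptions, not the paper's method.
    """
    frames = np.asarray(frames, dtype=np.float64)
    temporal_mean = frames.mean(axis=0)   # per-pixel mean signal level
    temporal_std = frames.std(axis=0)     # per-pixel temporal noise
    # SNR: average signal over average temporal noise
    snr = temporal_mean.mean() / max(temporal_std.mean(), 1e-12)
    # Non-uniformity: relative spatial spread of the per-pixel mean response
    prnu = temporal_mean.std() / temporal_mean.mean()
    return snr, prnu
```

Repeating this measurement at several exposure levels, exposure times and gain settings yields the dependencies the test bench is designed to record.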
Position control of multiple objects is one of the most pressing problems in many areas of technology. In construction, it takes the form of multi-point deformation control of load-bearing structures to prevent collapse; in mining, deformation control of lining structures; in rescue operations, locating potential victims and sources of ignition; in transport, traffic control and detection of traffic violations; in robotics, motion control of an organized group of robots; and many other problems in other areas. Stationary devices are poorly suited to these problems because of the complex and variable geometry of the controlled areas. In such cases a self-organized system of moving visual sensors is the best solution. This paper presents the concept of a scalable visual sensor network with swarm architecture for multiple-object pose estimation and real-time tracking. Recent developments in distributed measuring systems are reviewed and the advantages and disadvantages of existing systems are investigated, after which the theoretical principles for the design of a swarming visual sensor network (SVSN) are stated. To measure object coordinates in the world coordinate system with a TV camera, its intrinsic (focal length, pixel size, principal point position, distortion) and extrinsic (rotation matrix, translation vector) calibration parameters must be determined. Robust camera calibration is too resource-intensive for a moving camera; in this situation the camera position is usually estimated using a visual mark with known parameters, and all measurements are performed in mark-centered coordinate systems. A general adaptive algorithm for coordinate conversion between devices with different intrinsic parameters is developed, and various network topologies are reviewed.
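The role of the intrinsic and extrinsic parameters mentioned above can be sketched with the standard pinhole projection: a point given in the mark-centered coordinate system is rotated and translated into camera coordinates, then projected onto the image plane. This is a minimal illustration (distortion is omitted, and the function name and parameters are our assumptions, not the paper's algorithm):

```python
import numpy as np

def project_to_pixel(X_mark, R, t, fx, fy, cx, cy):
    """Project a 3D point in the mark-centered coordinate system onto the
    image plane using extrinsics (R, t) and pinhole intrinsics (fx, fy,
    cx, cy). Lens distortion is omitted for brevity.
    """
    Xc = R @ X_mark + t                 # mark frame -> camera frame
    u = fx * Xc[0] / Xc[2] + cx         # perspective division + principal point
    v = fy * Xc[1] / Xc[2] + cy
    return np.array([u, v])
```

Inverting this chain for a visual mark with known geometry is what lets each moving sensor estimate its own pose without a full calibration at every step.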
The minimum object-tracking error is achieved by finding the shortest path between the tracked object and the bearing sensor that defines the global coordinate system. The weight coefficients were determined by experimental studies of the system sensors, which are presented in this article. The conclusions of this work form the basis for the production of SVSN prototypes and their future study.
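The shortest-path search described above can be sketched with Dijkstra's algorithm over a graph whose nodes are sensors (plus the tracked object) and whose edge weights are the experimentally determined error coefficients; the path of minimum total weight gives the coordinate-conversion chain with the smallest accumulated error. The graph representation and function name below are our assumptions:

```python
import heapq

def min_error_path(graph, source, target):
    """Dijkstra's shortest path over a sensor graph.

    graph: dict mapping a node to a list of (neighbor, weight) pairs,
    where each weight is an error coefficient for that sensor-to-sensor
    coordinate conversion (an illustrative assumption).
    """
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    done = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in done:
            continue
        done.add(node)
        if node == target:
            break
        for nb, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nb, float("inf")):
                dist[nb] = nd
                prev[nb] = node
                heapq.heappush(heap, (nd, nb))
    # Reconstruct the minimum-error conversion chain
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return path[::-1], dist[target]
```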
Video data require a very large memory capacity. Finding a video encoding method with an optimal quality/volume ratio is one of the most pressing problems, given the urgent need to transfer large amounts of video over various networks. Digital TV signal compression technology reduces the amount of data used to represent the video stream. Video compression effectively reduces the stream required for transmission and storage. When television measuring systems are used, it is important to take into account the uncertainties caused by compression of the video signal. There are many digital compression methods. The aim of this work is to study the influence of video compression on the measurement error of television systems. The measurement error of an object parameter is the main characteristic of television measuring systems: accuracy characterizes the difference between the measured value and the actual parameter value. The optical system can be identified as one source of error in television system measurements; the method of processing the received video signal is another. In the case of compression with a constant data stream rate, errors lead to large distortions; in the case of constant quality, errors increase the amount of data required to transmit or record an image frame. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of the television image. This redundancy is caused by the strong correlation between image elements. If a suitable orthogonal transformation can be found, an array of image samples can be converted into a matrix of coefficients that are not correlated with each other. Entropy coding can then be applied to these uncorrelated coefficients to reduce the digital stream. A transformation can be chosen such that, for typical images, most of the matrix coefficients are almost zero.
Excluding these near-zero coefficients further reduces the digital stream. The discrete cosine transformation is the most widely used of the possible orthogonal transformations. This paper analyzes the errors of television measuring systems and data compression protocols. The main characteristics of measuring systems are given and the sources of their errors are identified. The most effective methods of video compression are determined, and the influence of video compression error on television measuring systems is studied. The results obtained will increase the accuracy of measuring systems. In television measuring systems, image quality is reduced both by distortions identical to those in analog systems and by specific distortions arising from encoding/decoding of the digital video signal and from errors in the transmission channel. The distortions associated with signal encoding/decoding include quantization noise, reduced resolution, the mosaic effect, the "mosquito" effect (edging at sharp brightness transitions), color blur, false patterns, the "dirty window" effect and other defects. The video compression algorithms used in television measuring systems are based on image encoding with intra- and inter-prediction of individual fragments. The encoding/decoding process is non-linear in both space and time, because the playback quality of a frame at the receiver depends on its pre- and post-history, i.e. on the preceding and succeeding frames, which can lead to inadequate distortion of a sub-picture and of the corresponding measuring signal.
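The energy compaction that makes the discrete cosine transformation useful for intra-coding can be shown in a few lines: for a smooth image block, almost all DCT-II coefficients are close to zero, so they can be dropped or entropy-coded cheaply. This is a minimal numerical sketch, not the coding pipeline of any particular standard:

```python
import numpy as np

def dct2(block):
    """Orthonormal 2D DCT-II of a square block, as used in intra-frame
    coding of television images."""
    N = block.shape[0]
    n = np.arange(N)
    k = n.reshape(-1, 1)
    # DCT-II basis matrix with orthonormal scaling
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)
    return C @ block @ C.T

# A smooth 8x8 block (horizontal brightness ramp): after the DCT, only a
# handful of the 64 coefficients are significantly different from zero.
block = np.tile(np.linspace(0.0, 255.0, 8), (8, 1))
coef = dct2(block)
```

For this ramp block at most one row of the coefficient matrix is non-zero, which is exactly the redundancy reduction the intra-coder exploits before entropy coding.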
Omnidirectional cameras are used in areas where a large field of view is important. An omnidirectional camera can give a complete 360° view around one direction. However, the distortion of omnidirectional cameras is large, which makes the raw omnidirectional image hard to interpret. One way to view omnidirectional images in a readable form is to generate panoramic images from them; the panorama keeps the main advantage of the omnidirectional image, namely its large field of view. The algorithm for generating panoramas from omnidirectional images consists of several steps. Panoramas can be described as projections onto cylinders, spheres, cubes or other surfaces that surround a viewing point; in practice, cylindrical, spherical and cubic panoramas are most commonly used. In the first step, we describe the panorama's field of view by creating a virtual surface (cylinder, sphere or cube) as a matrix of 3D points in virtual object space. We then create a mapping table by finding, via the projection function, the coordinates of the image points on the omnidirectional image that correspond to those 3D points. In the last step, we generate the panorama pixel by pixel from the original omnidirectional image using the mapping table. To find the projection function of the omnidirectional camera we used the calibration procedure developed by Davide Scaramuzza, the Omnidirectional Camera Calibration Toolbox for Matlab. After calibration, the toolbox provides two functions that express the relation between a given pixel and its projection onto the unit sphere. After the first run of the algorithm we obtain the mapping table, which can then be used for real-time generation of panoramic images at minimal CPU cost.
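The three steps above can be sketched for a cylindrical panorama: sample rays on a virtual cylinder, project each ray into the omnidirectional image with the camera's projection function, store the resulting coordinates in a lookup table, then remap. The `project` callable below stands in for the calibrated projection function; its exact form (and the field-of-view parameter) are our assumptions, not the Scaramuzza toolbox API:

```python
import numpy as np

def build_mapping_table(pano_w, pano_h, project, fov_v=np.pi / 3):
    """Build a (pano_h, pano_w, 2) table mapping each panorama pixel to
    (u, v) coordinates in the omnidirectional image. `project` takes a
    3D ray and returns its image point (a stand-in for the calibrated
    projection function)."""
    table = np.empty((pano_h, pano_w, 2), dtype=np.float32)
    for row in range(pano_h):
        elev = (0.5 - row / pano_h) * fov_v        # elevation on the cylinder
        for col in range(pano_w):
            azim = 2 * np.pi * col / pano_w        # full 360 degree sweep
            ray = np.array([np.cos(elev) * np.cos(azim),
                            np.cos(elev) * np.sin(azim),
                            np.sin(elev)])
            table[row, col] = project(ray)
    return table

def remap(omni_img, table):
    """Generate the panorama pixel by pixel via nearest-neighbour lookup."""
    uv = np.rint(table).astype(int)
    return omni_img[uv[..., 1], uv[..., 0]]
```

Because the table depends only on the camera model and the chosen surface, it is computed once; each subsequent frame is converted by the cheap `remap` step, which is what makes real-time panorama generation possible.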
This paper investigates the non-excluded error component introduced by the air path in optoelectronic systems for controlling the spatial position of distant objects, based on the dispersion method. The influence of the air path on the direction of an optical beam is considered, and several methods for determining this non-excluded air-path error component are presented.