A distributed reconfigurable remote sensing satellite system can realize a flexible and robust remote sensing network. An imaging system for such a system should consume less power and weigh less so that it can be installed on small or micro satellites. In this paper a CMOS-based imaging system for a distributed reconfigurable remote sensing satellite system is put forward, and the CMOS sensor driving and data processing circuits are detailed. To improve signal integrity, the circuits were divided into two sections connected by a flexible printed circuit: the CMOS sensor and its power circuits were placed on the focal plane printed circuit board, while the imaging control box contained the control and data processing circuits of the CMOS sensor as well as the communication and image data transfer circuits. Impedance matching was used to improve the integrity of the signals transmitted through the flexible printed circuit between the focal plane board and the imaging control box. Results of experiments and analysis indicate that the power dissipation of the imaging system is 4.05 W and the weight of the imaging control box is 0.435 kg. The SNR of each band of the imaging system exceeds 41.28 dB at a solar zenith angle of 20° and an earth reflectance of 0.65.
Time delay and integration (TDI) CCD or CMOS sensors are widely used in space cameras to achieve high resolution. A fixed number of TDI stages is often used during operation to avoid image saturation, so the images obtained can suffer from problems such as a shortage of digital number levels. In this paper an automatic on-orbit method for adjusting the TDI stages of a space camera based on solar zenith angles is put forward. In this method the radiance at the entrance pupil of the camera is calculated in real time by nonlinear function fitting according to the solar zenith angles of the ground targets, from which the optimal exposure time is obtained. The line transfer time for image motion compensation is calculated at the same time. Finally the TDI stage count is derived from the optimal exposure time and the line transfer time, realizing real-time calculation and adjustment of TDI stages. Comparison of STK simulations with MODTRAN computations indicates that the standard deviation of the radiance fitting error is about 0.175 W/(m²·sr) and the relative radiance fitting error is no more than 0.93% for solar zenith angles in [5°, 65°]. The dynamic changes of SNR with time and solar zenith angle were compared and analyzed for the adjusting-TDI-stages method and the fixed-TDI-stages method. The analysis indicates that at a solar zenith angle of 88.2° the SNR is improved from 23 dB to 39 dB when the automatic on-orbit adjusting method is used instead of fixed TDI stages.
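The final step described above, deriving the TDI stage count from the optimal exposure time and the line transfer time, can be sketched as follows. The stage limit and the timing values are illustrative assumptions, not parameters from the paper.

```python
# Hedged sketch: number of TDI stages = optimal exposure time / line
# transfer time, rounded and clamped to the sensor's supported range.
# max_stages and the example timings are assumed values.

def tdi_stages(optimal_exposure_s, line_transfer_s, max_stages=96):
    """Return a TDI stage count from exposure and line transfer times."""
    stages = round(optimal_exposure_s / line_transfer_s)
    return max(1, min(stages, max_stages))

print(tdi_stages(4.8e-3, 0.3e-3))  # 16 stages
print(tdi_stages(60e-3, 0.3e-3))   # clamped to the 96-stage limit
```

Clamping keeps the commanded stage count inside the sensor's hardware range even when the fitted radiance calls for a longer effective exposure.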
In order to improve the efficiency of remote sensing image data storage and transmission, we present an image compression method based on the lifting scheme and a modified SPIHT (set partitioning in hierarchical trees) algorithm, implemented as an FPGA design that improves SPIHT and enhances wavelet-transform image compression. The lifting Discrete Wavelet Transform (DWT) architecture was selected to exploit the correlation among image pixels. In addition, we provide a study of the storage elements required for the wavelet coefficients. We demonstrate the method on the Lena test image using the 5/3 lifting scheme.
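The 5/3 lifting scheme mentioned above can be sketched in a few lines. This is a generic 1-D integer lifting step, not the paper's FPGA implementation, and the symmetric edge handling is a simplifying assumption.

```python
# One level of the integer 5/3 (LeGall) lifting DWT for an even-length
# signal: a predict step produces detail coefficients d, an update step
# produces approximation coefficients s. Integer lifting is exactly
# invertible, which is what makes it attractive for lossless compression.

def dwt53(x):
    """Forward 5/3 lifting transform; returns (approx, detail) lists."""
    n = len(x)
    d = []
    for i in range(n // 2):
        left, right = x[2 * i], x[min(2 * i + 2, n - 2)]  # symmetric edge
        d.append(x[2 * i + 1] - (left + right) // 2)      # predict step
    s = []
    for i in range(n // 2):
        dl = d[max(i - 1, 0)]                             # symmetric edge
        s.append(x[2 * i] + (dl + d[i] + 2) // 4)         # update step
    return s, d

def idwt53(s, d):
    """Inverse lifting: undo the update step, then undo the predict step."""
    n = 2 * len(s)
    x = [0] * n
    for i in range(len(s)):
        dl = d[max(i - 1, 0)]
        x[2 * i] = s[i] - (dl + d[i] + 2) // 4
    for i in range(len(d)):
        left, right = x[2 * i], x[min(2 * i + 2, n - 2)]
        x[2 * i + 1] = d[i] + (left + right) // 2
    return x

sig = [12, 14, 13, 90, 91, 89, 40, 41]
s, d = dwt53(sig)
assert idwt53(s, d) == sig  # perfect reconstruction
```

A 2-D transform applies the same step to rows and then columns; the SPIHT coder then sorts the resulting coefficient trees by significance.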
The Modulation Transfer Function (MTF) is an important imaging quality indicator of an optical system. In order to evaluate the imaging quality of a space remote sensing TDICCD camera, an MTF testing system, including hardware and software components, was established based on the refocusing system. Firstly, the design of the Three Mirror Anastigmatic (TMA) optical system and the long interleaved-assembly TDICCD focal plane are introduced. Secondly, the construction and schematic diagram of the refocusing system are presented in detail, as well as the relationship between the refocusing range and the position encoder for the interleaved TDICCD focal plane. Thirdly, the schematic diagram of the MTF testing system is given, and the relationship between MTF and the contrast transfer function (CTF) is discussed. Finally, on the basis of the CTF schematic, the transfer process of a square-wave target is investigated. The testing results indicate that the MTF testing system can measure the CTF of every TDICCD on the focal plane within the focusing range, which meets the requirements for evaluating the imaging quality of a space TDICCD camera.
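The MTF–CTF relationship for a square-wave target is commonly evaluated with Coltman's series; the sketch below truncates it at four terms, which suffices when the CTF already vanishes at a few times the test frequency. The sample CTF curve and cutoff value are illustrative assumptions, not the camera's measured response.

```python
# Coltman series (truncated): recover sine-wave MTF from the measured
# square-wave response CTF. Terms beyond 7f are dropped here, which is
# exact whenever CTF is zero above 3x the evaluated frequency.

from math import pi

def mtf_from_ctf(ctf, f):
    """Approximate MTF at frequency f from a square-wave response ctf(f)."""
    return (pi / 4) * (ctf(f) + ctf(3 * f) / 3 - ctf(5 * f) / 5 + ctf(7 * f) / 7)

# Assumed example: a CTF falling linearly to zero at cutoff fc (lp/mm).
fc = 100.0
ctf = lambda f: max(0.0, 1.0 - f / fc)

print(round(mtf_from_ctf(ctf, 50.0), 3))
```

At 50 lp/mm only the first term survives in this example, so the conversion reduces to the familiar factor of pi/4 applied to the measured contrast.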
Proc. SPIE. 9674, AOPC 2015: Optical and Optoelectronic Sensing and Imaging Technology
KEYWORDS: Cooling systems, Signal to noise ratio, CMOS sensors, Mechanics, Imaging systems, Cameras, Quantum efficiency, Electron multiplying charge coupled devices, Back illuminated sensors, Systems modeling
An Electron Multiplying Charge Coupled Device (EMCCD) can achieve readout noise of less than 1 e⁻ by amplifying charge through the charge multiplication principle, and is therefore suitable for low-light imaging. With the development of back-illuminated CMOS technology, CMOS sensors with high quantum efficiency and readout noise of less than 1.5 e⁻ have been developed by the Changchun Institute of Optics, Fine Mechanics and Physics (CIOMP). Spaceborne low-light detection cameras based on the EMCCD CCD201 and on CMOS were established, and system noise models were founded for each. The low-light detection performance and operating principles of the EMCCD-based and CMOS-based spaceborne cameras were compared and analyzed. The results indicate that the signal-to-noise ratio (SNR) of the EMCCD-based camera would be 23.78 at a focal plane temperature of 20°C when the radiance at the entrance pupil is as low as 10⁻⁹ W/cm²/sr/μm, with the camera working in staring mode with an integration time of 2 seconds. Under the same conditions the SNR of the CMOS-based camera would be 27.42. If cooling systems are used and the temperature is lowered from 20°C to −20°C, the SNR of the EMCCD-based camera improves to 27.533 while that of the CMOS-based camera improves to 27.79.
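The kind of system noise model compared above can be sketched as follows for a single pixel: in an EMCCD, shot noise is amplified by the multiplication excess noise factor F (about the square root of 2 at high gain) while readout noise is divided by the EM gain, whereas a conventional CMOS pixel has no excess noise but pays its full readout noise. All numeric parameters here are illustrative assumptions, not the paper's camera parameters.

```python
# Hedged per-pixel SNR sketch for the EMCCD-vs-CMOS comparison.
from math import sqrt

def emccd_snr(signal_e, dark_e, read_noise_e, em_gain, F=sqrt(2)):
    """SNR of an EMCCD pixel; signal and dark charge in electrons."""
    noise = sqrt(F**2 * (signal_e + dark_e) + (read_noise_e / em_gain) ** 2)
    return signal_e / noise

def cmos_snr(signal_e, dark_e, read_noise_e):
    """SNR of a conventional (no charge multiplication) CMOS pixel."""
    return signal_e / sqrt(signal_e + dark_e + read_noise_e**2)

# With few photoelectrons, the EMCCD's excess noise factor costs it
# roughly sqrt(2) in SNR against a sub-2 e- readout-noise CMOS.
print(emccd_snr(100, 5, read_noise_e=50, em_gain=1000))
print(cmos_snr(100, 5, read_noise_e=1.5))
```

This is the mechanism behind the comparison in the abstract: once CMOS readout noise drops below about 2 e⁻, the EMCCD's excess noise factor dominates at low signal levels.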
This paper describes the design and realization of a refocusing system for a space TDICCD camera with a 2.2 m focal length, which features a three mirror anastigmatic (TMA) optical system with 8 TDICCDs assembled at the focal plane and offers high resolution and a wide field of view. Assembling multiple TDICCDs is a major method of acquiring a wide field of view for a space camera; in this way the swath width reaches 60 km. First, the design of the TMA optical system and its advantages for this space TDICCD camera are introduced. Then the refocusing system, as well as the technique of mechanical interleaved assembly for the TDICCD focal plane, is discussed in detail. Finally, the refocusing system was measured. Experimental results indicate that the precision of the refocusing system is ±3.12 μm (3σ), which satisfies the refocusing control system's requirements for high precision and stability.
As focusing is a key technology for improving the imaging quality of a remote sensing multispectral CCD camera, the performance of a focusing system for such a camera is presented in detail in this paper. Firstly, the required focusing precision was calculated from the optical system, and a method of directly adjusting the multispectral CCD focal plane, well suited to this camera's optical system, was proposed. Secondly, we developed a focusing system with the advantages of low constructional complexity, easy hardware implementation and high focusing sensitivity. Finally, an experimental test was constructed to evaluate the focusing precision of the system. The measured focusing precision is 3.62 μm (3σ) over a focusing range of ±2.5 mm. The experimental result shows that the proposed focusing system is reasonable, reliable and stable, and meets the focusing precision requirements of the multispectral CCD camera.
The mapping precision of a space-borne stereo mapping camera is primarily determined by the attitude angle errors of the satellite, and time synchronization errors of the camera introduce extra attitude angle errors. In this paper a model of the space-borne stereo mapping camera was established in Satellite Tool Kit (STK) to obtain the regularity with which the attitude angles change over time. The influence of the camera's time synchronization precision on attitude angle errors was analyzed by combining this regularity with sampling theory, yielding a digital model relating extra attitude angle errors to time synchronization errors. In validation experiments, real attitude angle data of a stereo mapping satellite were collected and the extra attitude angle errors caused by specific time synchronization errors were obtained. The results of the experiments and analysis indicate that the extra attitude angle error caused by a given time synchronization error can be reduced from 0.01939 arc seconds to 0.00003879 arc seconds as the time synchronization precision is improved from 1 ms to 20 μs.
Multilinear CCD sensors are often used on space cameras to obtain multispectral images, with each line representing a different band channel. However, the images of the different band channels obtained at the same time do not coincide, because of the spaces between the lines. The number of pixels to be adjusted between the images of different channels varies when the space camera works by swaying forward and backward, or when it adjusts the row transfer period to compensate image motion. Based on an analysis of this phenomenon, an automatic multispectral image synthesis algorithm for space cameras is put forward. In this algorithm a new evaluation function is used to determine the number of pixels to be adjusted and the image regions of each band channel to be clipped, so that the images of the different band channels can be synthesized automatically into an accurate color image. The algorithm can process a large amount of imagery from the space camera directly, without any manual intervention, so that efficiency is improved remarkably. In validation experiments, the algorithm was applied to synthesize images obtained in an outdoor scene experiment of a multispectral space camera. The results prove that the automatic multispectral image synthesis algorithm realizes accurate multispectral image synthesis for the space camera while improving efficiency markedly.
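The evaluation function used above is not specified here; as an illustrative stand-in, the sketch below scores candidate row shifts between two band images with a sum of absolute differences and picks the best, which is the general shape of such an inter-band alignment step.

```python
# Hedged sketch of inter-band alignment: find the row shift of band_b
# that best matches band_a by minimizing the mean absolute difference
# over the overlapping rows. The SAD metric is an assumption, not the
# paper's evaluation function.

def best_row_shift(band_a, band_b, max_shift):
    """Return the row shift of band_b that best matches band_a."""
    rows = len(band_a)
    best = (float("inf"), 0)
    for s in range(-max_shift, max_shift + 1):
        score, count = 0, 0
        for r in range(rows):
            if 0 <= r + s < rows:
                score += sum(abs(a - b) for a, b in zip(band_a[r], band_b[r + s]))
                count += 1
        best = min(best, (score / count, s))
    return best[1]

# Toy example: band_b is band_a delayed by two rows.
band_a = [[r * 10 + c for c in range(4)] for r in range(8)]
band_b = [[0] * 4, [0] * 4] + band_a[:-2]
print(best_row_shift(band_a, band_b, max_shift=3))  # 2
```

The recovered shift tells the synthesis step how many rows of each channel to clip before stacking the bands into one color image.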
Detection sensitivity is an important parameter of a star tracker, expressing the detection limit as the weakest detectable visual star magnitude. The detection sensitivity of an APS star tracker depends on the APS parameters, the optical system aperture and lens transmittance, the SNR, and so on. In this paper, expressions for the APS star signal and APS noise are given, from which the signal-to-noise ratio (SNR) of the star tracker is obtained. Based on the theory of detecting a signal in noise and the optimal SNR threshold detection principle, a detection sensitivity model is derived. The corresponding detection limit of an APS star tracker is then obtained for the APS IBIS5 and typical optical system design parameters. With an SNR threshold of 8.1, which yields a 99.9% detection probability, the calculated detection sensitivity is visual magnitude 6.5.
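The detection-sensitivity idea above can be sketched numerically: the star signal scales as 10^(-0.4 m) with visual magnitude m, so the limiting magnitude is the faintest m whose SNR still clears the threshold. The signal and noise parameter values below are illustrative assumptions, not the IBIS5 figures from the paper.

```python
# Hedged sketch: limiting magnitude from an SNR threshold.
from math import sqrt

def star_snr(m, s0_e=200000.0, background_e=50.0, read_noise_e=40.0):
    """SNR for a magnitude-m star; s0_e = electrons from an m=0 star."""
    signal = s0_e * 10 ** (-0.4 * m)
    return signal / sqrt(signal + background_e + read_noise_e**2)

def limiting_magnitude(threshold=8.1, step=0.1):
    """Faintest magnitude (to one decimal) whose SNR meets the threshold."""
    m = 0.0
    while star_snr(m + step) >= threshold:
        m += step
    return round(m, 1)

print(limiting_magnitude())
```

Because SNR falls monotonically with magnitude, a simple step-down search suffices; a real design would plug in the sensor's measured quantum efficiency, aperture and transmittance instead of the assumed constants.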
A transmission array photogrammetric camera can obtain images of high geometric fidelity and high photogrammetric quality. However, a camera with a single array CCD image sensor cannot meet the required measuring precision and photogrammetric coverage area. In order to obtain a large amount of information over an extensive coverage area, the field-of-view angle must be increased, and this can be realized by an exterior-field-of-view assembled photogrammetric camera. Two pieces of work must be done before the images obtained by such a camera can be used in photogrammetry. First, all the assembled camera focal planes must be converted to a benchmark coordinate focal plane to realize digital camera assembly. Second, the images must be re-sampled and processed. Through the coordinate conversion, a functional relation can be established between two images from different assembled cameras, in which a pixel of the image from one camera corresponds to a pixel of the image from another camera. After this conversion, however, some pixels may be squeezed together while others are spread apart within an image area, so interpolation direction finding (IDF) is used to obtain these pixels and realize image re-sampling. In this paper, the structure of the exterior-field-of-view assembled photogrammetric camera is analyzed, and the coordinate conversion method and the image gray re-sampling method are discussed. All this work forms the basis of data pretreatment for the exterior-field-of-view assembled photogrammetric camera.
The geometrical Modulation Transfer Function (MTF) of the CMOS APS (active pixel sensor) is analyzed in this paper. Advanced APS devices have been designed and fabricated with different pixel shapes, such as square, rectangular and L-shaped, because the amplifier circuit and other in-pixel function circuits take up some of the pixel area. MTF is an important figure of merit of focal plane array imaging sensors, and research on analyzing the MTF for the proper pixel shape is in progress for a centroidal configuration of the target position. The MTF gives a more complete understanding of the tradeoffs imposed by the different pixel designs and by the signal processing conditions. Based on an image sensor sampling and reconstruction model, the MTF expression for an arbitrary active pixel shape is deduced in this paper. According to actual pixel shapes, three different active-area pixels were analyzed: square, rectangular and L-shaped, with fill factors (FF) of 30%, 44% and 55%, respectively. Simulation results indicate that the pixel geometry contributes significantly to the shape of the MTF: both the geometrical shape of the active sensitive area and its position within the pixel influence the MTF curves. These results are important for designing better APS pixels and, more importantly, for analyzing the imaging performance of APS subpixel precision systems.
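For the square-pixel case, the geometrical MTF analyzed above has the classic closed form |sin(x)/x| for an active aperture of width a; the sketch below compares two fill factors at the Nyquist frequency. The pixel pitch is an assumed value, and the general-shape and L-shaped expressions from the paper are not reproduced here.

```python
# Geometrical MTF of a square active area: MTF(f) = |sinc(pi * f * a)|,
# where a = sqrt(fill_factor) * pitch for a centered square aperture.
from math import sin, pi, sqrt

def square_pixel_mtf(f, pitch_mm, fill_factor):
    """Geometrical MTF at spatial frequency f (lp/mm) of a square pixel."""
    a = sqrt(fill_factor) * pitch_mm
    x = pi * f * a
    return 1.0 if x == 0 else abs(sin(x) / x)

pitch = 0.010            # 10 um pitch, assumed
f_nyq = 1 / (2 * pitch)  # Nyquist frequency, 50 lp/mm
print(square_pixel_mtf(f_nyq, pitch, 1.00))  # full fill factor
print(square_pixel_mtf(f_nyq, pitch, 0.30))  # 30% fill factor
```

The smaller aperture of a low-fill-factor pixel transfers more contrast at Nyquist, which is one side of the tradeoff against sensitivity and aliasing discussed in the abstract.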
The active pixel sensor (APS) star tracker has become a research hotspot because of its technical advantages, and the centroid algorithm is a subpixel method well suited to star position calculation because of its high accuracy and simplicity. When the centroid algorithm is applied to an APS star tracker, the pixel geometrical characteristics of the APS may affect the star image position accuracy. Because the amplifier circuit and other in-pixel function circuits take up some of the pixel area, the fill factor is less than 100%; moreover, the active sensitive area has a particular geometrical shape, such as square, rectangular or L-shaped. The fill factor of the pixel influences the subpixel locating accuracy of the star image when the centroid algorithm is used. In this paper we analyze the influence of all these pixel geometrical characteristics on star position accuracy. From simulation experiments we conclude that both fill factor and pixel geometric shape affect star position accuracy: the star locating error increases as the fill factor decreases, different geometrical shapes of the active sensitive area influence the location accuracy differently, and a sensitive area that is symmetrical along the x or y axis has a symmetrical location error along the same axis.
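The centroid algorithm the analysis above applies to can be sketched minimally: the subpixel star position is the intensity-weighted mean over a small window, here after subtracting an assumed background level (the background handling is a simplifying assumption, not the paper's procedure).

```python
# Minimal center-of-mass (centroid) star locator over a pixel window.

def centroid(window, background=0.0):
    """Return the (x, y) intensity-weighted centroid of a 2-D window."""
    total = sx = sy = 0.0
    for y, row in enumerate(window):
        for x, v in enumerate(row):
            w = max(v - background, 0.0)  # clip negative residuals
            total += w
            sx += w * x
            sy += w * y
    return sx / total, sy / total

# A symmetric 3x3 star spot centroids to the central pixel (1, 1).
spot = [[1, 2, 1],
        [2, 8, 2],
        [1, 2, 1]]
print(centroid(spot))  # (1.0, 1.0)
```

The paper's point is that when the active area inside each pixel is asymmetric or small, the effective weights differ from the ideal ones above, biasing the recovered position.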
The three-line CCD camera is a multi-channel solid-state mapping camera: there are three signal channels in one camera, and twelve channels across the three cameras of the three-line CCD camera system. Because there are small differences between channels, the image gray levels of different camera channels differ slightly for the same region. These differences have two aspects: the differences between cameras and the differences between channels of the same camera, and they cause errors when the images of different channels are matched. In this paper, the camera response curves are characterized through ground radiometric calibration of the multi-channel camera, and the channel calibration coefficients are obtained after fixing a reference output channel. The differences between channels are thus reduced by radiometrically calibrating each channel, the outputs of all channels become almost the same, and the accuracy of image matching is improved.
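The inter-channel relative calibration described above can be sketched as a linear correction: from two calibration radiance levels, derive a per-channel gain and offset that map each channel's output onto the reference channel. The DN values below are illustrative assumptions, not calibration data from the paper.

```python
# Hedged sketch of two-point relative radiometric calibration.

def channel_coeffs(dn_lo, dn_hi, ref_lo, ref_hi):
    """Gain/offset such that gain * dn + offset reproduces the reference DN."""
    gain = (ref_hi - ref_lo) / (dn_hi - dn_lo)
    offset = ref_lo - gain * dn_lo
    return gain, offset

# Assumed example: the reference channel reads 100/900 DN at the two
# calibration radiance levels; another channel reads 120/840 DN.
g, b = channel_coeffs(120, 840, 100, 900)
print(round(g * 120 + b, 6), round(g * 840 + b, 6))
```

With every channel corrected onto the same reference response, the gray levels of corresponding regions agree across channels and the image matching step sees consistent inputs.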
Small satellites are capable of performing space exploration missions that require accurate attitude determination and control. However, low weight, size, power and cost requirements limit the types of attitude sensor a small spacecraft can carry; sensors such as CCDs are not practical for small satellites. The CMOS APS is a good substitute for attitude sensing on small spacecraft. Its technical advantages include no blooming, a single power supply, low power consumption, small size with little support circuitry, direct digital output and simple system design, and in particular better radiation hardness compared with CCDs. This paper discusses the feasibility of applying a CMOS APS in a star tracker for small satellites; furthermore, a prototype ground-based star camera based on the STAR250 CMOS image sensor has been built. In order to extract star position coordinates, a subpixel-accuracy centroiding algorithm was developed and tested on ground-based images. Moreover, the camera system's star sensitivity and noise model are analyzed, and the system accuracy is evaluated. Experimental results indicate that a star camera based on a CMOS APS is a viable and practical attitude sensor for small satellites.
Tungsten oxide nanorods can be fabricated on a large scale at low temperatures on planar substrates. The structure and optical properties of the nanorods are investigated by SEM, TEM, X-ray diffraction and optical spectroscopy. The dependence of the orientation preference of the nanorods on the growth conditions is also investigated.