Proc. SPIE. 8713, Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications X
KEYWORDS: Signal to noise ratio, Image compression, Cameras, Image processing, Video, Field programmable gate arrays, High dynamic range imaging, Video processing, Modulation transfer functions, Commercial off the shelf technology
Two of the biggest challenges in designing U×V vision systems are properly representing high-dynamic-range scene content using low-dynamic-range components and reducing camera motion blur. SRI’s MASI-HDR (Motion Adaptive Signal Integration-High Dynamic Range) is a novel technique for generating blur-reduced video using multiple captures for each displayed frame while increasing the effective camera dynamic range by four bits or more. MASI-HDR processing thus provides high-performance video from rapidly moving platforms in real-world conditions, in real time with low latency, enabling even the most demanding applications in the air, on the ground, and on the water.
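The core of motion-adaptive signal integration can be sketched as aligning several short-exposure captures and accumulating them digitally. The sketch below is an illustration of the general principle only, not SRI's implementation; the function name and the use of integer global shifts are assumptions for clarity.

```python
import numpy as np

def masi_integrate(frames, shifts):
    """Align short-exposure frames by their estimated global camera
    shifts and average them: N short captures stand in for one long
    exposure, without the long exposure's motion blur.
    frames: list of 2-D arrays; shifts: per-frame (dy, dx) integers."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for frame, (dy, dx) in zip(frames, shifts):
        acc += np.roll(frame, (-dy, -dx), axis=(0, 1))  # undo camera motion
    return acc / len(frames)

# Synthetic test: a bright point drifting one pixel per frame.
base = np.zeros((8, 8))
base[4, 4] = 1.0
frames = [np.roll(base, (k, k), axis=(0, 1)) for k in range(4)]
shifts = [(k, k) for k in range(4)]
out = masi_integrate(frames, shifts)
```

After alignment the point stays on a single pixel instead of smearing along the motion path, and averaging N aligned frames suppresses uncorrelated noise by roughly √N, which is how the digital-domain integration recovers SNR.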
Traditionally, daylight and night vision imaging systems have required both image intensifiers and daytime cameras. But SRI’s new NV-CMOS™ image sensor technology is designed to capture images over the full range of illumination, from bright sunlight to starlight. SRI’s NV-CMOS image sensors provide low-light sensitivity approaching that of an analog image intensifier tube with the cost, power, ruggedness, flexibility, and convenience of a digital CMOS imager chip. NV-CMOS provides multiple megapixels at video frame rates with low read noise (<2 e−), high sensitivity across the visible and near-infrared (NIR) bands (peak QE >85%), high resolution (MTF at Nyquist >50% @ 650 nm), and extended dynamic range (>75 dB). The latest test data from the NV-CMOS imager technology will be presented.
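The quoted dynamic range relates a sensor's full-well capacity to its read-noise floor. As a quick consistency check (the full-well figure below is an illustrative assumption, not a published NV-CMOS specification):

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Intra-scene dynamic range: ratio of the largest signal a pixel
    can hold to the smallest resolvable signal, in decibels."""
    return 20.0 * math.log10(full_well_e / read_noise_e)

# A hypothetical full well of 12,000 e- paired with the 2 e- read
# noise quoted above lands near the >75 dB figure.
dr = dynamic_range_db(12_000, 2.0)
```

This is why sub-2-electron read noise matters: every halving of the noise floor buys about 6 dB of dynamic range without changing the pixel's full-well capacity.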
Unlike conventional image intensifiers, the NV-CMOS image sensor outputs a digital signal, which is ideal for recording or sharing video as well as for fusion with thermal imagery. The result is a substantial reduction in size and weight, well suited to SWaP-constrained missions such as UAV and mobile operations. SRI’s motion-adaptive noise reduction processing further increases sensitivity and reduces image smear. Enhancing moving targets in imagery captured under extreme low-light conditions poses difficult challenges. SRI has demonstrated that image registration provides a robust solution for enhancing global scene contrast under very low SNR conditions.
Fast-moving cameras often generate distorted and blurred images characterized by reduced sharpness (due to motion
blur) and insufficient dynamic range. Reducing the sensor integration time to minimize blur is a common remedy, but it
reduces the collected light and therefore the image signal-to-noise ratio (SNR) as well. We propose a Motion Adaptive
Signal Integration (MASI) algorithm that operates the sensor at a high frame rate and aligns the individual image
frames in real time to form an enhanced-quality video output. This technique enables signal integration in the digital
domain, achieving high SNR while minimizing the blur induced by camera motion. We also show, in an
Extended MASI (EMASI) algorithm, that high dynamic range can be achieved by combining high-frame-rate images of
varying exposures. EMASI broadens the dynamic range of the sensor and extends its sensitivity to low-light,
noisy conditions. On a moving platform, it also reduces static (fixed-pattern) noise in the sensor. This technology can be used in
aerial surveillance, satellite imaging, border security, wearable sensing, video conferencing, and camera phones.
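The EMASI idea of merging aligned, bracketed exposures can be sketched as follows. This is a generic exposure-fusion illustration under stated assumptions (frames already aligned, linear sensor response, a chosen saturation threshold), not the paper's actual algorithm; the function name and parameters are hypothetical.

```python
import numpy as np

def emasi_merge(frames, exposures, sat_level=0.95):
    """Merge bracketed high-frame-rate captures (assumed already
    aligned) into one radiance estimate. Each frame is scaled back
    to a common exposure; saturated (clipped) pixels are excluded
    so the long exposures extend sensitivity without blooming."""
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for frame, t in zip(frames, exposures):
        valid = frame < sat_level            # ignore clipped pixels
        num += np.where(valid, frame / t, 0.0)
        den += valid
    return num / np.maximum(den, 1)          # per-pixel radiance

# A two-pixel scene with radiances 0.4 and 3.0 (arbitrary units),
# captured at exposure times 0.25 and 1.0:
scene = np.array([0.4, 3.0])
short = np.clip(scene * 0.25, 0, 1.0)        # [0.1, 0.75]
long_ = np.clip(scene * 1.0, 0, 1.0)         # [0.4, 1.0] (pixel 2 clipped)
hdr = emasi_merge([short, long_], [0.25, 1.0])
```

The bright pixel is recovered from the short exposure alone, while both captures contribute to the dim pixel, which is the mechanism by which varying exposures broaden the effective dynamic range.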
When a conventional fixed smoothing factor is used to display stabilized video, large undefined
black border regions (BBRs) appear when the camera pans or zooms quickly. To minimize the size of the BBR and provide
smooth visualization on the display, this paper discusses several novel methods that have been demonstrated on a real-time
platform. These methods include an IIR filter, a single Kalman filter, and an interacting multiple-model filter. The
common idea behind these methods is to adapt the smoothing factor to the motion change over time to ensure a small
BBR and minimal jitter. To further remove the residual BBR, the pixels inside it are composited from previous
frames. To do so, we first store the previous images and their corresponding frame-to-frame (F2F) motions in a FIFO
queue, and then fill each black pixel from a valid pixel in the nearest-neighbor frame based on the accumulated F2F motion. If a
match is found, the search stops and processing continues with the next pixel. If the search is exhausted, the pixel remains
black. These algorithms have been implemented and tested on a TI DM6437 processor.
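The BBR fill step described above can be sketched as a nearest-frame-first search through the FIFO. This is a simplified illustration assuming pure translational F2F motion stored as integer pixel offsets; the function name and data layout are assumptions, not the paper's implementation.

```python
import numpy as np

def fill_bbr(current, fifo, black=0):
    """Fill undefined black-border pixels of a stabilized frame from
    earlier frames. fifo holds (frame, (dy, dx)) pairs, nearest frame
    first, where (dy, dx) is the accumulated F2F motion mapping a
    current-frame pixel into that earlier frame."""
    out = current.copy()
    h, w = out.shape
    for y, x in zip(*np.where(out == black)):    # each undefined pixel
        for frame, (dy, dx) in fifo:             # nearest neighbor first
            sy, sx = y + dy, x + dx              # map into earlier frame
            if 0 <= sy < h and 0 <= sx < w and frame[sy, sx] != black:
                out[y, x] = frame[sy, sx]        # match found: stop search
                break                            # else: pixel stays black
    return out

# A one-pixel black border on the left, recoverable from the previous
# frame via a one-pixel horizontal F2F motion.
cur = np.array([[0, 5, 5],
                [0, 5, 5]])
prev = np.full((2, 3), 7)
filled = fill_bbr(cur, [(prev, (0, 1))])
```

Because the FIFO is searched nearest-frame-first, the composited pixels come from the temporally closest valid source, which keeps the filled border consistent with the current scene content.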
Image fusion is a process that combines regions of images from different sources into a single fused image according to a
salience selection rule for each region. In this paper, we propose an algorithmic approach that uses a mask pyramid to
better localize the selection process. A mask pyramid operates at different scales of the image to improve the fused
image quality beyond what a global selection rule achieves. The proposed approach offers a generic methodology for applications in
image enhancement, high dynamic range compression, depth-of-field extension, and image blending. The mask pyramid
can also be encoded for intelligent analysis of the source imagery. Several examples of the mask pyramid method are
provided to demonstrate its performance in a variety of applications. A new embedded system architecture that builds
upon the Acadia® II Vision Processor is also proposed.
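A minimal sketch of pyramid-based fusion with per-level selection masks is given below, assuming a simple box-filter Laplacian pyramid and a largest-magnitude salience rule. The function names and the specific pyramid construction are illustrative assumptions, not the paper's method.

```python
import numpy as np

def down(img):
    """2x2 box-filter downsample (one pyramid level)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img):
    """Nearest-neighbor upsample back to the finer level."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    pyr = []
    for _ in range(levels):
        low = down(img)
        pyr.append(img - up(low))    # detail (band-pass) level
        img = low
    pyr.append(img)                  # low-pass residual
    return pyr

def fuse_with_mask_pyramid(a, b, levels=2):
    """At each pyramid level, keep per pixel the coefficient with the
    larger magnitude; the binary choices form the mask pyramid, which
    localizes the selection rule by scale and position."""
    pa = laplacian_pyramid(a, levels)
    pb = laplacian_pyramid(b, levels)
    masks = [np.abs(la) >= np.abs(lb) for la, lb in zip(pa, pb)]
    fused = [np.where(m, la, lb) for m, la, lb in zip(masks, pa, pb)]
    img = fused[-1]                  # collapse the fused pyramid
    for band in reversed(fused[:-1]):
        img = up(img) + band
    return img, masks

a = np.arange(16, dtype=float).reshape(4, 4)   # all detail in source A
b = np.zeros((4, 4))                           # featureless source B
fused, masks = fuse_with_mask_pyramid(a, b)
```

With one featureless source, every mask selects source A and the collapsed pyramid reconstructs it exactly; in realistic use the masks vary by region and scale, which is what lets the selection rule act locally rather than globally. The per-level masks are also the structure that can be encoded for downstream analysis of the source imagery.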