An experimental investigation of super-resolution imaging from measurements of projections onto a random
basis is presented. In particular, a laboratory imaging system was constructed following an architecture that
has become familiar from the theory of compressive sensing. The system uses a digital micromirror array
located at an intermediate image plane to introduce binary matrices that represent members of a basis set.
The system model was developed from experimentally acquired calibration data which characterizes the system
output corresponding to each individual mirror in the array. Images are reconstructed at a resolution limited
by that of the micromirror array using the split Bregman approach to total-variation regularized optimization.
System performance is evaluated qualitatively as a function of the size of the basis set, or equivalently, the
number of snapshots applied in the reconstruction.
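The measurement model behind the architecture above is easy to sketch. The following toy is NOT the paper's split-Bregman total-variation solver; as a minimal stand-in it reconstructs with smoothness-regularized (Tikhonov) least squares, which has a closed form, and all dimensions and parameters are illustrative assumptions.

```python
import numpy as np

# Each snapshot is the projection of the scene onto one binary (micromirror)
# pattern; the patterns form the rows of the sensing matrix.
rng = np.random.default_rng(0)

n = 64                                            # scene samples (1-D toy)
m = 32                                            # snapshots / basis members
x_true = np.sin(2 * np.pi * np.arange(n) / n)     # smooth test scene

Phi = rng.integers(0, 2, size=(m, n)).astype(float)  # binary DMD patterns
y = Phi @ x_true                                  # one scalar per snapshot

# First-difference operator penalizes rough solutions (a crude surrogate for
# the TV regularizer used in the paper).
D = np.diff(np.eye(n), axis=0)
lam = 0.1
x_hat = np.linalg.solve(Phi.T @ Phi + lam * D.T @ D, Phi.T @ y)

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Even with half as many measurements as unknowns, the smoothness prior recovers the scene approximately, which is the qualitative behavior the snapshot-count study probes.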
The MONTAGE program sponsored by the Microsystems Technology Office of the Defense Advanced Research
Projects Agency (DARPA) resulted in the demonstration of a novel approach to designing compact imaging systems.
This approach was enabled by an unusual four-fold annular lens originally designed and demonstrated for operation
exclusively in the visible spectral band. To accomplish DARPA's goal of an ultra-thin imaging system, the folded optic
was fabricated by diamond-turning concentric aspheric annular zones on both sides of a CaF2 core. The optical
properties of the core material ultimately limit the operating bandwidth of such a design. We present the latest results of
an effort to re-engineer and demonstrate the MONTAGE folded optics for imaging across a broad spectral band. The
broadband capability is achieved by taking advantage of a new design that substitutes a hollow core configuration for the solid core. Along with enabling additional applications for the folded optics, the hollow-core design offers the potential of reducing weight and cost in comparison to an alternative solid-core design. We present new results characterizing the performance of a lens based on the new design and applied to long-wave infrared imaging.
Light field cameras can simultaneously capture the spatial location and angular direction of light rays emanating from a
scene. By placing a variable bandpass filter in the aperture of a light field camera, we demonstrate the ability to
multiplex the visible spectrum over this captured angular dimension. The result is a novel design for a single-snapshot
multispectral imager, with digitally reconstructed images exhibiting spatial resolution reduced in proportion to the
number of captured spectral channels. This paper explores the effect of this spatial-spectral resolution tradeoff on
camera design. It also examines the concept of utilizing a non-uniform pinhole array to achieve varying spectral and
spatial capture over the extent of the sensor. Images are presented from several different combinations of light field
camera and variable bandpass filter design, and limitations and sources of error are discussed.
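The spatial-spectral tradeoff described above is simple arithmetic: angular samples under each microlens are traded for spectral channels. A sketch with hypothetical sensor numbers (not taken from the paper):

```python
# Illustrative numbers only: a 2000 x 2000 pixel sensor with microlenses that
# each span 5 x 5 pixels.  With a variable bandpass filter in the aperture,
# the angular samples under each microlens become spectral channels.
sensor_px = (2000, 2000)          # assumed sensor resolution
microlens_pitch_px = 5            # assumed pixels per microlens, each axis

spatial_res = (sensor_px[0] // microlens_pitch_px,
               sensor_px[1] // microlens_pitch_px)   # 400 x 400 spatial samples
spectral_channels = microlens_pitch_px ** 2          # 25 spectral channels
```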
Many applications require the ability to image a scene in several different narrow spectral bands simultaneously.
Absorption filters of the kind commonly used to produce RGB color mosaics lack the flexibility and narrowband
filtering ability required. Conventional multi-layer dielectric filters require control of film thickness to change the
resonant wavelength, which makes it difficult to fabricate a mosaic of multiple narrow spectral band transmission
filters monolithically. This paper extends previous work on adjusting the spectral transmission of a multi-layer dielectric filter by drilling a periodic
array of subwavelength holes through the stack. Multi-band photonic crystal filters were modeled and optimized for a
specific case of filtering six optical bands on a single substrate. Numerical simulations showed that there exists a
particular air hole periodicity which maximizes the minimum hole diameter. Specifically for a stack of SiO2 and Si3N4 with the set of filtered wavelengths (nm): 560, 576, 600, 630, 650, and 660, the optimal hole periodicity was 282 nm. This resulted in a minimum hole diameter of 90 nm and a maximum diameter of 226 nm. Realistic fabrication tolerances
were considered such as dielectric layer thickness and refractive index fluctuations, as well as vertical air hole taper. It
was found that individual layer fluctuations have a minor impact on filter performance, whereas hole taper produces a
large peak shift. The results in this paper provide a reproducible methodology for similar multi-band monolithic filters in
either the optical or infrared regimes.
We describe an approach to polarimetric imaging based on a unique folded imaging system with an annular aperture.
The novelty of this approach lies in the system's collection architecture, which segments the pupil plane to measure the
individual polarimetric components contributing to the Stokes vectors. Conventional approaches rely on time sequential
measurements (time-multiplexed) using a conventional imaging architecture with a reconfigurable polarization filter, or
measurements that segment the focal plane array (spatial multiplexing) by superimposing an array of polarizers. Our
approach achieves spatial multiplexing within the aperture in a compact, lightweight design. The aperture can be
configured for sequential collection of the four polarization components required for Stokes vector calculation or in any
linear combination of those components on a common focal plane array. Errors in calculating the degree of polarization
caused by the manner in which the aperture is partitioned are analyzed, and approaches for reducing that error are
investigated. It is shown that reconstructing individual polarization filtered images prior to calculating the Stokes
parameters can reduce the error significantly.
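The Stokes arithmetic referenced above is standard and worth stating concretely. A minimal sketch for the linear Stokes components, given four polarization-filtered intensity images (S3, the circular component, would additionally require a retarder measurement):

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from four polarizer-filtered intensities."""
    s0 = i0 + i90           # total intensity
    s1 = i0 - i90           # horizontal vs. vertical
    s2 = i45 - i135         # +45 deg vs. -45 deg
    return s0, s1, s2

def degree_of_linear_polarization(s0, s1, s2):
    # Guard against division by zero in dark pixels.
    return np.sqrt(s1**2 + s2**2) / np.where(s0 == 0, 1, s0)

# Fully horizontally polarized light: I0 = 1, I90 = 0, I45 = I135 = 0.5.
s0, s1, s2 = linear_stokes(np.array(1.0), np.array(0.5),
                           np.array(0.0), np.array(0.5))
dolp = degree_of_linear_polarization(s0, s1, s2)
```

Because the four components come from different aperture segments here, errors in how the segments sample the pupil propagate directly into these differences, which is why the partition-induced DoLP error analyzed in the paper matters.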
An investigation of power and resolution for laser ranging sensors is performed in relation to sense and avoid
requirements of miniature unmanned aircraft systems (UAS). Laser rangefinders can be useful if not essential
complements to video or other sensing modalities in a sense and avoid sensor suite, particularly when applied to
miniature UAS. However, previous studies addressing sensor performance requirements for sense and avoid on UAS
have largely concentrated, either explicitly or implicitly, on passive imaging techniques. These requirements are then
commonly provided in terms of an angular resolution defined by a detection threshold. By means of a simple geometric
model, it is assumed that an imaging system cannot distinguish an object that subtends less than a minimum number of
detector pixels. But for sensors based on active ranging, such as laser rangefinders and LADAR, detection probability is
coupled to the optical power of the laser transmitter. This coupling enables the sensors to achieve sub-pixel resolution,
or resolution better than the instantaneous field-of-view, and to compensate for insufficient angular resolution by
increasing transmitter power. Consequently, when considering sense and avoid detection requirements for laser
rangefinders or LADAR, a tradeoff emerges between resolution and power, which, owing to the inherent relationship of
size and weight to system resolution, translates to a tradeoff between resolution and sensor size, weight, and power. In
this presentation, we investigate the existence of an optimum compromise between sensor resolution and power,
concentrating on platforms with particularly challenging payload limitations.
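The power-resolution coupling above can be made concrete with a simplified lidar link budget. The model below (extended Lambertian target, no atmospheric loss) and all numbers are illustrative assumptions, not values from the presentation:

```python
import math

def required_tx_power(range_m, rx_aperture_m, target_reflectivity, p_min_w):
    """Transmitter power needed to put p_min_w on the detector.

    Received fraction for an extended Lambertian target:
        P_r = P_t * rho * A_r / (pi * R^2)
    """
    a_r = math.pi * (rx_aperture_m / 2.0) ** 2      # receiver aperture area
    return p_min_w * math.pi * range_m ** 2 / (target_reflectivity * a_r)

# Doubling the range quadruples the required power; doubling the receiver
# aperture diameter cuts it by four -- power can buy back angular resolution,
# and vice versa.
p_near = required_tx_power(500.0, 0.025, 0.1, 1e-9)
p_far = required_tx_power(1000.0, 0.025, 0.1, 1e-9)
```

The tradeoff the presentation investigates lives in exactly this exchange: aperture (size, weight) against transmitter power for a fixed detection threshold.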
Computational imaging systems are characterized by a joint design and optimization of front end optics, focal plane
arrays and post-detection processing. Each constituent technology is characterized by its unique scaling laws. In this
paper we will attempt a synthesis of the behavior of individual components and develop scaling analysis of the jointly
designed and optimized imaging systems.
Many visible and infrared sampled imaging systems suffer from moderate to severe amounts of aliasing. The problem arises because the large optical apertures required for sufficient light-gathering ability result in large spatial cutoff frequencies. In consumer-grade cameras, images are often undersampled by a factor of twenty relative to the suggested Nyquist rate. Most consumer cameras employ birefringent blur filters that purposely blur the image prior to detection to reduce the Moire artifacts produced by aliasing. In addition to the obvious Moire artifacts, aliasing introduces other pixel-level errors that can cause artificial jagged edges and erroneous intensity values. These types of errors have led some investigators to treat the aliased signal as noise in imaging system design and analysis. The importance of aliasing depends on the nature of the imagery and the definition of the assessment task. In this study, we employ a laboratory experiment to characterize the nature of aliasing noise for a variety of object classes. We acquire both raw and blurred imagery to explore the impact of pre-detection anti-aliasing. We also consider the post-detection image restoration requirements to remove the in-band blur produced by the anti-aliasing schemes.
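The mismatch between optical cutoff and detector sampling is a back-of-envelope calculation. With hypothetical (but typical) numbers, not values from the paper:

```python
# Incoherent diffraction cutoff: f_c = 1 / (lambda * F#)
# Detector Nyquist frequency:    f_Ny = 1 / (2 * pixel pitch)
pixel_pitch_mm = 0.005       # 5 um pixels (assumed)
wavelength_mm = 0.00055      # 550 nm
f_number = 2.0               # fast aperture for light gathering (assumed)

f_nyquist = 1.0 / (2.0 * pixel_pitch_mm)       # 100 cycles/mm
f_cutoff = 1.0 / (wavelength_mm * f_number)    # ~909 cycles/mm
undersampling_factor = f_cutoff / f_nyquist    # ~9x: everything between
                                               # f_Ny and f_c can alias
```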
Proc. SPIE 4736, Visual Information Processing XI
KEYWORDS: Signal to noise ratio, Optical transfer functions, Visual information processing, Imaging systems, Spatial frequencies, Sensors, Signal attenuation, Optical design, Optical filters, Phase only filters
Aliasing is introduced in sampled imaging systems when light level requirements dictate using a numerical aperture that passes spatial frequencies higher than the Nyquist frequency set by the detector. One method to reduce the effects of aliasing is to modify the optical transfer function so that frequencies that might otherwise be aliased are removed. This is equivalent to blurring the image prior to detection. However, blurring the image introduces a loss in spatial detail and, in some instances, a decrease in the image signal-to-noise ratio. The tradeoff between aliasing and blurring can be analyzed by treating aliasing as additive noise and using information density to assess the imaging quality. In this work we use information density as a metric in the design of an optical phase-only anti-aliasing filter. We used simulated annealing to determine a pupil phase that modifies the system optical transfer function so that the information density is maximized. Preliminary results indicate that maximization of the information density is possible. The increase in information density appears to be proportional to the logarithm of the electronic signal-to-noise ratio and insensitive to the number of phase levels in the pupil phase. We constrained the pupil phase to 2, 4, 8, and 256 phase quantization levels and found little change in the information density of the optical system. Random and zero initial-phase inputs also generated results with little difference in their final information densities.
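The optimization loop described above can be sketched in miniature. This 1-D toy uses a crude information-density surrogate, sum(log2(1 + SNR * |OTF|^2)) over in-band frequencies, and illustrative parameters; neither the metric details nor the numbers are the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
N, PAD, LEVELS, SNR = 32, 4, 4, 100.0   # pupil samples, zero-pad, phase levels

def otf(phase_levels):
    # Phase-only pupil -> PSF -> OTF, with DC normalized to 1.
    pupil = np.zeros(N * PAD, dtype=complex)
    pupil[:N] = np.exp(2j * np.pi * phase_levels / LEVELS)
    psf = np.abs(np.fft.ifft(pupil)) ** 2
    h = np.fft.fft(psf)
    return h / h[0]

def info_density(phase_levels):
    band = np.abs(otf(phase_levels)[: N // 2])   # sub-Nyquist frequencies
    return np.sum(np.log2(1.0 + SNR * band ** 2))

phases = rng.integers(0, LEVELS, N)
best, best_score = phases.copy(), info_density(phases)
score, temp = best_score, 1.0
for step in range(2000):
    trial = phases.copy()
    trial[rng.integers(N)] = rng.integers(LEVELS)   # perturb one phase sample
    s = info_density(trial)
    if s > score or rng.random() < np.exp((s - score) / temp):
        phases, score = trial, s                    # annealing acceptance rule
        if s > best_score:
            best, best_score = trial.copy(), s
    temp *= 0.995                                   # geometric cooling
```

Keeping the best-seen pupil alongside the annealing state guarantees the reported phase never scores worse than the starting one.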
Classical optical design techniques are oriented toward optimizing imaging system response at a single image plane. Recently, researchers have proposed to greatly extend the imaging system depth of field, by introducing large deformations of the optical wavefront, coupled with subsequent post-detection image restoration. In one case, a spatially separable cubic phase plate is placed at the pupil plane of an imaging system to create an extremely large effective depth of field. The price for this extended performance is noise amplification in the restored imagery relative to a perfectly focused image. In this paper we perform a series of numerical design studies based on information theoretic analyses to determine when a cubic phase system is preferable to a standard optical imaging system. The amount of optical path difference (OPD) associated with the cubic phase plate is directly related to the amount of achievable depth of field. A large OPD allows greater depth of field at the expense of greater noise in the restored image. The information theory approach allows the designer to study the effect of the cubic phase OPD for a given depth of field requirement.
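The defocus insensitivity that motivates the cubic phase plate is easy to see numerically. A 1-D sketch with illustrative phase strengths (not the paper's design values): the MTF of a pupil carrying a strong cubic term changes far less with defocus than a clear aperture's does.

```python
import numpy as np

N, PAD = 256, 4
x = np.linspace(-1, 1, N)               # normalized pupil coordinate

def mtf(pupil_phase):
    pupil = np.zeros(N * PAD, dtype=complex)
    pupil[:N] = np.exp(1j * pupil_phase)
    psf = np.abs(np.fft.ifft(pupil)) ** 2
    h = np.abs(np.fft.fft(psf))
    return h / h[0]                     # normalize DC to 1

alpha, psi = 50.0, 8.0                  # cubic and defocus strengths (radians)
cubic = alpha * x ** 3                  # separable cubic phase, one dimension
defocus = psi * x ** 2

clear_change = np.abs(mtf(defocus) - mtf(0 * x)).mean()
cubic_change = np.abs(mtf(cubic + defocus) - mtf(cubic)).mean()
# cubic_change << clear_change: one fixed restoration filter serves the whole
# depth of field, at the cost of a low (noise-amplifying) unrestored MTF.
```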
Charge coupled-device imaging systems are often designed so that the image of the object field is sampled well below the Nyquist limit. Undersampled designs frequently occur because of the need for optical apertures that are large enough to satisfy the detector sensitivity requirements. Consequently, the cutoff frequency of the aperture is well beyond the sampling limits of the detector array, and aliasing artifacts degrade the resulting image. A common antialiasing technique in such imaging systems is to use birefringent plates as a blur filter. The blur filter produces a point spread function (PSF) that resembles multiple replicas of the optical system PSF, with the separation between replicas determined by the thickness of the plates. When the altered PSF is convolved with the PSF of the detector, an effective pixel is produced that is larger than the physical pixel and thus, the higher spatial frequency components and the associated aliasing are suppressed. Previously, we have shown how information theory can be used in designing birefringent blur filters by maximizing the information density of the image. In this paper, we investigate the effects of spherical aberration and defocus on the information density of an imaging system containing a birefringent blur filter.
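The replicated-PSF suppression described above reduces to a cosine factor on the MTF. In the simplest two-spot case (real filters typically produce a 2x2 spot pattern), spots separated by one pixel pitch place an MTF null exactly at the Nyquist frequency; the pitch value below is illustrative.

```python
import numpy as np

p = 1.0                                    # pixel pitch (arbitrary units)
f = np.linspace(0.0, 1.0, 201)             # spatial frequency, cycles/unit

pixel_mtf = np.abs(np.sinc(f * p))         # box detector footprint of width p
blur_mtf = np.abs(np.cos(np.pi * f * p))   # two spots separated by p
effective_mtf = pixel_mtf * blur_mtf       # "effective pixel" response

# cos(pi * f * p) = 0 at the Nyquist frequency f = 1 / (2p).
nyquist_idx = np.argmin(np.abs(f - 1.0 / (2.0 * p)))
```

Aberrations such as spherical and defocus multiply further terms onto this product, which is how they enter the information-density analysis in the paper.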
An optical outer product architecture is presented which performs residue arithmetic operations via position-coded look-up tables. The architecture can implement arbitrary integer-valued functions of two independent variables in a single gate delay. The outer product configuration possesses spatial complexity (gate count) which grows linearly with the size of the modulus, and therefore with the system dynamic range, in contrast to traditional residue look-up tables which have quadratic growth in spatial complexity. The use of linear arrays of sources and modulators leads to power requirements that also grow linearly with the size of the modulus. Design and demonstration of a proof-of-concept experiment are also presented.
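Residue arithmetic, the number system the look-up-table architecture implements optically, is compact to demonstrate: pick pairwise-coprime moduli, operate componentwise on the residues (each component is an independent small-modulus table lookup with no carries), and recover the integer via the Chinese Remainder Theorem. The moduli below are illustrative.

```python
MODULI = (5, 7, 9)            # pairwise coprime; dynamic range 5*7*9 = 315
RANGE = 5 * 7 * 9

def encode(x):
    return tuple(x % m for m in MODULI)

def mul(a, b):
    # Componentwise: exactly the kind of small table an optical LUT realizes.
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def decode(residues):
    # Chinese Remainder Theorem reconstruction.
    total = 0
    for r, m in zip(residues, MODULI):
        q = RANGE // m
        total += r * q * pow(q, -1, m)   # pow(q, -1, m): inverse of q mod m
    return total % RANGE

# 123 * 45 = 5535, which equals 180 modulo the 315 dynamic range.
product = decode(mul(encode(123), encode(45)))
```

The linear (rather than quadratic) gate-count growth claimed above comes from realizing each modulus channel with a linear source/modulator array instead of a full two-dimensional table.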
We have investigated several nanosecond nonlinear switching mechanisms in carbon microparticle suspensions. These switching mechanisms are based on combinations of effects such as plasma scattering and cavitation-induced total internal reflection (TIR). The contributions from each of these effects are studied. The dominant nonlinear switching mechanism in the majority of these samples is laser-induced cavitation, which leads to TIR switching. This occurs when the incident laser energy absorbed by a carbon particle is sufficient to heat and vaporize a small volume of the suspending liquid, forming a microbubble. TIR switching is observed when these vapor bubbles expand to dimensions similar to the transverse dimensions of the incident beam and form a glass-vapor interface at the front substrate surface. Using this mechanism, nonlinear refractive index changes as large as 0.3 have been obtained experimentally on a nanosecond time scale using low-power, Q-switched, frequency-doubled (λ = 0.532 µm) Nd:YAG laser pulses.