Fourier interferometry (FI) is a robust and simple method for the reconstruction of phase-modulated objects. Its main
advantage is that it only requires the recording of one interference pattern, avoiding the problems caused by mechanical
vibrations in the optical setup. Unfortunately, the conventional FI method can be employed in digital holography (DH)
only for samples with reduced frequency bandwidth, because of the low spatial resolution of available electronic image
recording devices (CCDs). We report two methods that partially overcome this bandwidth limitation of the FI technique
implemented in DH. The first of these methods consists of a modified version of the conventional FI approach where,
instead of processing the Fourier transform of the hologram, we calculate its Fresnel back propagation to the object
plane. Although this modified FI approach provides appropriate reconstruction of objects with increased bandwidth, the
local SNR tends to be low at thin phase-modulated features of the object. To increase this SNR, we employ the
sample reconstructed with the modified FI method as the input to a Gerchberg-Saxton (GS) iterative method. In this GS
procedure, the constraints are the moduli of the fields at the object and hologram planes. This iterative method,
implemented by numerical simulations, provides highly accurate phase corrections with fast convergence.
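The GS loop described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: a plain FFT stands in for the Fresnel propagator, and all function and array names (`gerchberg_saxton`, `obj_amp`, `holo_amp`) are illustrative.

```python
import numpy as np

def gerchberg_saxton(obj_amp, holo_amp, n_iter=50):
    """Gerchberg-Saxton iteration with modulus constraints at the
    object and hologram planes (FFT used as a stand-in propagator)."""
    # Start from the object-plane modulus with a flat phase guess.
    field = obj_amp.astype(complex)
    for _ in range(n_iter):
        # Propagate to the hologram plane and impose its modulus.
        holo = np.fft.fft2(field)
        holo = holo_amp * np.exp(1j * np.angle(holo))
        # Back-propagate and impose the object-plane modulus.
        field = np.fft.ifft2(holo)
        field = obj_amp * np.exp(1j * np.angle(field))
    return np.angle(field)  # recovered phase estimate
```

Each pass keeps the current phase estimate while replacing the amplitude by the measured modulus in the corresponding plane; for this class of error-reduction algorithm the modulus mismatch is non-increasing from one iteration to the next.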
Proc. SPIE. 6736, Unmanned/Unattended Sensors and Sensor Networks IV
KEYWORDS: Optical transfer functions, Point spread functions, Imaging systems, Spatial frequencies, Sensors, Deconvolution, Modulation transfer functions, Imaging arrays, Integral imaging, 3D image processing
Integral imaging systems make it possible to capture and reproduce three-dimensional scenes. However, they usually suffer from a
limited depth of field and/or a limited depth of focus that severely reduces the depth range that can be used in practice. In
order for such a system to be able to capture and reproduce large-depth scenes without any adjustment, we propose to
include in the integral imaging system an array of phase masks in order to increase the depth of focus of the final three-dimensional
images. We consider both the pickup and the reconstruction stages.
The depth of field of optical systems can be extended with a phase mask placed in the pupil. We propose to design this
phase mask through a heuristic numerical optimization. We describe two different criteria to illustrate the technique. We
present the obtained phase profiles and show that they have good properties regarding the extension of the depth of field.
We propose and describe an optoelectronic system that emulates a minimum digital system, which typically consists of
a microprocessor, a memory device, an input device and an output device, with the corresponding data, control and
address buses. These devices operate according to a program stored in the memory device as coded
instructions. In our proposal, the memory device is a reconfigurable single-lens holographic memory. The instructions
to be stored are coded and decoded as binary pages by software. The software interprets the data and carries out the
instructions as a microprocessor does in a minimum digital system. We present preliminary results of the performance
of our proposal.
This paper analyzes the security of amplitude encoding for double random phase encryption. We describe several types of attack. The system is found to be resistant to brute-force attacks but vulnerable to chosen and known plaintext attacks.
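The encryption scheme analyzed above is the classic double random phase encoding (DRPE): one random phase mask in the input plane and a second in the Fourier plane. A minimal NumPy sketch, with all function names illustrative; the linearity that makes exact decryption trivial here is also what exposes the scheme to the chosen- and known-plaintext attacks discussed in the paper.

```python
import numpy as np

def drpe_encrypt(img, phase_in, phase_fourier):
    """Double random phase encoding: multiply by a random phase mask
    in the input plane, then by a second mask in the Fourier plane."""
    x = img * np.exp(2j * np.pi * phase_in)
    X = np.fft.fft2(x) * np.exp(2j * np.pi * phase_fourier)
    return np.fft.ifft2(X)  # complex-valued ciphertext

def drpe_decrypt(cipher, phase_in, phase_fourier):
    """Undo both phase masks in reverse order to recover the image."""
    X = np.fft.fft2(cipher) * np.exp(-2j * np.pi * phase_fourier)
    return np.fft.ifft2(X) * np.exp(-2j * np.pi * phase_in)
```

With both keys known, decryption is exact; an attacker who can encrypt chosen plaintexts (e.g. a delta function) can probe this linear system directly, which is the weakness the analysis exploits.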
We discuss the symmetry properties of the Ambiguity Function. Next, we use it as a design tool for increasing the depth of field of imaging systems. We present a family of anti-symmetric phase-only masks that extend the depth of field of an optical system. We compute several Optical Transfer Functions with focus errors, and we report numerical simulations of the images that can be achieved using our proposed phase-only filters.
Cameras provide only two-dimensional views of three-dimensional objects. These views are projections that change depending on the spatial orientation, or pose, of the object. In this paper we propose a technique to estimate the pose of a 3D object knowing only a 2D picture of it. The proposed technique explores both linear and nonlinear composite correlation filters in combination with a neural network. We present results in estimating two orientations: in-plane and out-of-plane rotations within an 8-degree square range.
The two-dimensional view, obtained with a camera, of a three-dimensional (3-D) object varies with the 3-D orientation of this object, complicating the recognition task. In this work we address the problem of estimating the pose of a 3-D object knowing only a 2-D projection. The proposed technique is based on a combination of synthetic-discriminant-function filters and neural networks. We succeed in estimating two orientations: in-plane and out-of-plane rotations within an 8-degree square range.
We present a method to recognize three-dimensional objects from phase-shift digital holograms. The holograms are used to reconstruct various views of the objects. These views are combined to create nonlinear composite filters in order to achieve distortion invariance. We present experiments to illustrate the recognition of a 3D object in the presence of out-of-plane rotation and longitudinal shift along the z-axis.
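The nonlinear composite filtering above can be sketched in a few lines of NumPy. This is only an illustration, not the paper's implementation: it assumes a k-th-law nonlinearity (one common choice for nonlinear correlation) and a composite filter formed by summing the conjugate spectra of the training views; all names are illustrative.

```python
import numpy as np

def kth_law(F, k=0.3):
    """k-th-law nonlinearity on a spectrum: keep the phase, raise the
    magnitude to the power k (k=1 is linear, k=0 is phase-only)."""
    return (np.abs(F) ** k) * np.exp(1j * np.angle(F))

def nonlinear_composite_correlation(scene, views, k=0.3):
    """Correlate a scene against a composite filter built from several
    training views; the nonlinearity sharpens the correlation peak."""
    H = sum(np.conj(np.fft.fft2(v)) for v in views)  # composite filter
    S = np.fft.fft2(scene)
    return np.abs(np.fft.ifft2(kth_law(S, k) * kth_law(H, k)))
```

A strong, well-localized peak in the output plane signals the presence of one of the trained views; lowering k de-emphasizes bright low frequencies and makes the peak sharper, which is why nonlinear filters tolerate distortion better than purely linear matched filtering.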
We describe a number of new optoelectronic approaches to three-dimensional (3D) image recognition. In all cases, digital holography is used to record the complex amplitude distribution of Fresnel diffraction patterns generated by 3D scenes illuminated with coherent light. This complex information is compared with that from a similar digital hologram of a 3D reference object using correlation methods. Pattern recognition techniques that are shift-variant or shift-invariant along the optical axis are described. In the latter case it is possible to detect the 3D position of the reference in the input scene with high accuracy. We also use a nonlinear composite correlation filter to achieve distortion tolerance. Experiments are presented to illustrate the recognition of a 3D object in the presence of out-of-plane rotation.
We present the design of an on-board processor that recognizes a given road sign affected by different distortions. The road sign recognition system is based on a nonlinear processor. Analysis of different filtering methods allows us to select the best techniques to overcome a variety of distortions. The proposed recognition system has been tested on real still images as well as on video sequences. The scenes were captured in real environments with cluttered backgrounds and contain many distortions simultaneously. Recognition results for various images show that the processor is able to properly detect a given road sign even if it is varying in scale, slightly tilted, or viewed under different angles. Recognition is also achieved when dealing with partially occluded road signs. In addition, the system is robust to illumination fluctuations.
We present a phase mask that substantially reduces the influence of focus error in an optical system while preserving light-gathering power and lateral resolution. Numerical simulations and first experimental results are shown.
We show that, by using a binary spatial filter and a square-law detector, all the defocused OTFs can be displayed in a single picture. This picture has spatial frequency as its horizontal coordinate and the amount of defocus as its vertical coordinate. The gray-level variations of the picture are proportional to the values of the out-of-focus MTF.
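Such a defocus-versus-frequency picture can also be emulated numerically. A minimal 1-D sketch, assuming a clear rectangular aperture with a quadratic defocus phase and computing each row's OTF as the normalized autocorrelation of the pupil function; all parameter names and values are illustrative.

```python
import numpy as np

def defocused_mtf_picture(n=128, n_defocus=64, w20_max=4.0):
    """Build an image whose rows are 1-D MTFs at increasing defocus.
    Vertical axis: defocus coefficient W20 (in waves); horizontal
    axis: spatial frequency, as in the single-picture display."""
    x = np.linspace(-1, 1, n)          # normalized pupil coordinate
    pic = np.zeros((n_defocus, n))
    for i, w20 in enumerate(np.linspace(0, w20_max, n_defocus)):
        pupil = np.exp(2j * np.pi * w20 * x**2)   # defocus aberration
        # OTF = autocorrelation of the pupil; keep non-negative lags.
        otf = np.correlate(pupil, pupil, mode='full')[n - 1:]
        pic[i] = np.abs(otf) / np.abs(otf[0])     # normalized MTF row
    return pic
```

The top row (zero defocus) is the familiar triangular MTF of a clear aperture; moving down the picture, increasing W20 progressively suppresses the mid and high spatial frequencies.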