Plenoptic cameras enable capture of directional light ray information, thus allowing applications such as digital
refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a
microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage
the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor high color fidelity
camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To
achieve high color measurement precision with this system, we perform an end-to-end optimization of a system model
that includes the light source, the object, the optical system, plenoptic image processing, and color estimation
processing. The optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype
that achieves high color measurement precision. We describe an application of our colorimetric camera to color shading
evaluation of a display and show that it achieves a color accuracy of ΔE < 0.01.
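As a minimal sketch of the kind of metric behind the ΔE < 0.01 display-shading figure (not the authors' code), the snippet below converts a reference XYZ measurement and a camera-estimated XYZ to CIELAB and computes the CIE 1976 ΔE*ab difference. The D65 white point and the choice of ΔE*ab rather than ΔE2000 are assumptions for illustration.

```python
import numpy as np

D65 = np.array([95.047, 100.0, 108.883])  # assumed D65 reference white (X, Y, Z)

def xyz_to_lab(xyz, white=D65):
    """Convert CIE XYZ to CIELAB for a given reference white."""
    t = np.asarray(xyz, dtype=float) / white
    delta = 6.0 / 29.0
    f = np.where(t > delta**3, np.cbrt(t), t / (3 * delta**2) + 4.0 / 29.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def delta_e_ab(xyz_ref, xyz_est, white=D65):
    """CIE 1976 colour difference between reference and estimated XYZ."""
    return np.linalg.norm(xyz_to_lab(xyz_ref, white) - xyz_to_lab(xyz_est, white), axis=-1)

# Example: per-patch error for two hypothetical display measurements.
ref = np.array([[41.2, 43.1, 46.0], [20.5, 21.3, 23.9]])
est = ref * 1.0001  # camera estimate with a tiny residual error
print(delta_e_ab(ref, est))
```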
KEYWORDS: Image fusion, Image processing, Digital signal processing, Wavelet transforms, Field programmable gate arrays, Embedded systems, Multispectral imaging, Logic, Wavelets, System on a chip
Combining the theory of wavelet-transform-based image fusion with the SOPC design method, the authors use an SOPC as the core device to design and implement an image fusion system. The system uses the Verilog hardware description language, DSP Builder, and the Quartus II development platform, together with macro modules, to complete the logic design and timing control of each module. The fusion system achieves simple pixel-level fusion of two registered images. This design not only builds an image fusion system based on SOPC, but also provides a hardware design principle in SOPC for the future design and implementation of more comprehensive image processing functions.
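A minimal software sketch (assuming NumPy and PyWavelets) of the pixel-level wavelet fusion rule described above: average the approximation bands of two registered images and keep the larger-magnitude detail coefficients. The paper implements such a rule in hardware on an SOPC; the single-level Haar decomposition and max-absolute selection rule here are illustrative assumptions, not the paper's exact design.

```python
import numpy as np
import pywt

def fuse_wavelet(img_a, img_b, wavelet="haar"):
    """Fuse two registered, same-sized grayscale images (float arrays)."""
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, wavelet)

    # Approximation band: average; detail bands: max-absolute selection.
    cA = 0.5 * (cA_a + cA_b)
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    details = (pick(cH_a, cH_b), pick(cV_a, cV_b), pick(cD_a, cD_b))

    return pywt.idwt2((cA, details), wavelet)

# Example with synthetic inputs.
a = np.random.rand(128, 128)
b = np.random.rand(128, 128)
fused = fuse_wavelet(a, b)
```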
Plenoptic cameras enable capture of a 4D lightfield, allowing digital refocusing and depth estimation from data
captured with a compact portable camera. Whereas most work on plenoptic camera design has been based on a
simplistic geometric-optics characterization of the optical path alone, little work has been done on optimizing
end-to-end system performance for a specific application. Such design optimization requires design tools that
include careful parameterization of the main lens elements, as well as of the microlens array and sensor
characteristics.
In this paper we are interested in evaluating the performance of a multispectral plenoptic camera, i.e. a camera
with spectral filters inserted into the aperture plane of the main lens. Such a camera enables single-snapshot
spectral data acquisition.1–3
We first describe in detail an end-to-end imaging system model for a spectrally coded plenoptic camera that we
briefly introduced in Ref. 4. Different performance metrics are defined to evaluate the spectral reconstruction quality.
We then present a prototype based on a modified DSLR camera containing a lenslet array on the sensor and a filter
array in the main lens. Finally, we evaluate the spectral reconstruction performance of
a spectral plenoptic camera based on both simulation and measurements obtained from the prototype.
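For intuition about the spectral reconstruction being evaluated, the sketch below models each filtered sub-aperture measurement as y = A s + n (A combining the aperture-filter transmittances with sensor response) and recovers the scene spectrum s by Tikhonov-regularised least squares. This is an illustrative stand-in, not the paper's implementation; the Gaussian filter shapes, nine-filter count, wavelength sampling, and regularisation weight are all assumptions.

```python
import numpy as np

wavelengths = np.linspace(400e-9, 700e-9, 31)   # 10 nm sampling (assumed)
centers = np.linspace(420e-9, 680e-9, 9)        # 9 aperture filters (assumed)
# Forward model: rows are filters, columns are wavelength samples.
A = np.exp(-0.5 * ((wavelengths[None, :] - centers[:, None]) / 20e-9) ** 2)

def reconstruct_spectrum(y, A, lam=1e-3):
    """Tikhonov-regularised estimate of the spectrum from filtered measurements."""
    AtA = A.T @ A + lam * np.eye(A.shape[1])
    return np.linalg.solve(AtA, A.T @ y)

# Simulated measurement of a smooth test spectrum, plus an RMSE-style metric.
s_true = 0.5 + 0.5 * np.sin(np.linspace(0, np.pi, wavelengths.size))
y = A @ s_true + 1e-3 * np.random.randn(centers.size)
s_hat = reconstruct_spectrum(y, A)
rmse = np.sqrt(np.mean((s_hat - s_true) ** 2))
print(f"spectral RMSE: {rmse:.4f}")
```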
An optical tracking sensor that produces images containing the state of polarization of each pixel can be implemented using individual wire-grid micropolarizers on each detector element of a solid-state focal plane array. These sensors can significantly improve identification and tracking of various man-made targets in cluttered, dynamic scenes such as urban and suburban environments. We present electromagnetic simulation results for wire-grid polarizers that can be fabricated on standard imaging arrays at three different technology nodes (an 80-, 250-, and 500-nm pitch) for use in polarization-sensitive detector arrays. The degradation in polarizer performance with the larger pitch grids is quantified. We also present results suggesting the performance degradation is not significant enough to affect performance in a man-made vehicle-tracking application.
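As a brief sketch of how such a division-of-focal-plane sensor yields per-pixel polarization, the snippet below converts the four micropolarizer orientations (0, 45, 90, and 135 degrees, tiled in 2x2 super-pixels, which is an assumed layout) into linear Stokes parameters, degree of linear polarization, and angle of linear polarization. This is illustrative processing, not the authors' pipeline.

```python
import numpy as np

def stokes_from_dofp(frame):
    """frame: 2D raw array with 0/45/90/135-degree pixels tiled in 2x2 blocks."""
    i0   = frame[0::2, 0::2].astype(float)
    i45  = frame[0::2, 1::2].astype(float)
    i90  = frame[1::2, 1::2].astype(float)
    i135 = frame[1::2, 0::2].astype(float)

    s0 = 0.5 * (i0 + i45 + i90 + i135)      # total intensity
    s1 = i0 - i90                           # horizontal vs. vertical
    s2 = i45 - i135                         # +45 vs. -45 degrees
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
    aolp = 0.5 * np.arctan2(s2, s1)
    return s0, s1, s2, dolp, aolp

# Example on a synthetic 12-bit raw frame.
raw = np.random.randint(0, 4096, size=(512, 512), dtype=np.uint16)
s0, s1, s2, dolp, aolp = stokes_from_dofp(raw)
```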
Conference Committee Involvement (2)
Digital Photography and Mobile Imaging XI
9 February 2015 | San Francisco, California, United States
Digital Photography X
3 February 2014 | San Francisco, California, United States