Hyperspectral imaging allows the collection of both spectral and spatial information. This modality is naturally suited to object and material identification and detection, and has found wide success in the agriculture and food industries, among others.
In snapshot spectral imaging, the 3D cube of images is taken in one shot, with the advantage that dynamic scenes can be analyzed. The simplest way to make a hyperspectral camera is to place an array of wavelength filters on the detector and then integrate this detector with standard camera optics. The technical challenge is to fabricate arrays of N wavelength filters and repeat this sequence up to 100,000 times across the detector array, where each individual filter is matched to the pixel size and can be as small as a few microns.
In this work, we generate the same effect with a single N-wavelength filter array, which is then optically replicated and imaged onto the detector to achieve the same effective filter array. This approach was first outlined by Levoy and Horstmeyer using microlens arrays in a light field camera (plenoptics 1.0). Instead of building our own light field camera, we used an existing commercial camera, the Lytro™, as the engine for our telecentric hyperspectral camera. In addition, we developed the tools to extract and rebuild the raw data from the Lytro™ camera.
We demonstrate reconstructed hyperspectral images with 9 spectral channels and show how this can be increased to achieve 81 spectral channels in a single snapshot.
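As an illustration of the effective filter-array sampling described above, the following is a minimal sketch of how a raw sensor image carrying a repeating 3 × 3 spectral mosaic could be demultiplexed into its 9 channel images by strided slicing. The function name `demosaic_channels` and the strided layout are illustrative assumptions, not the authors' actual reconstruction pipeline, whose details are not given here:

```python
import numpy as np

def demosaic_channels(raw, n=3):
    """Split a raw image carrying a repeating n x n spectral filter
    mosaic into n*n single-channel images (one per filter)."""
    h, w = raw.shape
    h, w = h - h % n, w - w % n      # crop to a whole number of tiles
    tiles = raw[:h, :w]
    # channel (i, j) lives at rows i, i+n, ... and cols j, j+n, ...
    return [tiles[i::n, j::n] for i in range(n) for j in range(n)]

# synthetic raw frame where pixel value equals its filter index
raw = (np.arange(6)[:, None] % 3) * 3 + (np.arange(6)[None, :] % 3)
channels = demosaic_channels(raw, n=3)
```

Each returned channel image has 1/n of the sensor resolution in each direction, which matches the trade-off between spatial and spectral sampling described above (9 channels for a 3 × 3 mosaic, 81 for a 9 × 9 one).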
In this paper we present a novel approach for depth estimation and background subtraction in light field images. Our approach exploits the regularity and internal structure of the light field signal to extract an initial depth map of the captured scene, and uses this depth map as the input to a final segmentation algorithm that finely isolates the background in the image.
Background subtraction is a natural application of light field information, since it relies heavily on depth information and segmentation. However, many of the approaches proposed so far are not optimized specifically for background subtraction and are computationally expensive. Here we propose an approach based on a modified version of the well-known Radon transform that avoids massive matrix calculations; it is therefore computationally very efficient and suitable for real-time use.
Our approach exploits the structured nature of the light field signal and the information inherent in the plenoptic space to extract an initial depth map and background model of the captured scene. We apply a modified Radon transform and the gradient operator to horizontal slices of the light field signal to infer the initial depth map. The initial depth estimates are further refined into a precise background model through a series of depth-thresholding and segmentation steps in ambiguous areas.
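The idea behind the slice-based depth estimate can be illustrated with a sketch: in a horizontal slice of the light field (an epipolar plane image, or EPI), a scene point traces a line whose slope encodes its disparity, and hence its depth. The sketch below estimates that slope with a gradient-based structure tensor, a stand-in for the paper's modified Radon transform, whose exact form is not specified here; `epi_slope` and the synthetic slice are illustrative assumptions:

```python
import numpy as np

def epi_slope(epi):
    """Estimate the dominant line slope in an EPI slice.

    In an epipolar plane image, a scene point appears as a line
    whose slope (pixels of shift per view) is its disparity,
    which is inversely related to depth.
    """
    # gradients along the view axis (rows) and spatial axis (cols)
    gv, gu = np.gradient(epi.astype(float))
    # averaged structure-tensor entries over the slice
    juu = np.mean(gu * gu)
    jvv = np.mean(gv * gv)
    juv = np.mean(gu * gv)
    # dominant gradient orientation; the lines run perpendicular to it
    theta = 0.5 * np.arctan2(2.0 * juv, juu - jvv)
    return -np.tan(theta)

# synthetic EPI: 9 views of a pattern shifting 0.5 px per view
u = np.arange(64)[None, :]
v = np.arange(9)[:, None]
epi = np.cos(0.3 * (u - 0.5 * v))
slope = epi_slope(epi)        # expected to be close to 0.5
```

A real pipeline would evaluate this locally (per pixel or per window) rather than over the whole slice, and, as described above, the resulting depth map would then be thresholded and refined by segmentation in ambiguous areas.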
We test our method on various types of real and synthetic light field images. The experiments cover scenes with different levels of clutter and various foreground object depths. The results show a much lower computational cost while retaining performance comparable to similar, more complex methods.
We propose a scale-invariant feature descriptor for the representation of light field images. The proposed descriptor can significantly improve tasks such as object recognition and tracking on images taken with recently popularized light field cameras.
We test our proposed representation on various light field images of different types, both synthetic and real. Our experiments show very promising results in terms of retaining invariance under various scaling transformations.
We present LCAV-31, a multi-view object recognition dataset designed specifically for benchmarking light field image analysis tasks. The principal distinguishing factor of LCAV-31 compared to similar datasets is its design goals and the availability of novel visual information for more accurate recognition (i.e. light field information). The dataset is composed of 31 object categories captured from ordinary household objects. We captured the color and light field images using the recently popularized Lytro consumer camera. Different views of each object are provided, as well as various poses and illumination conditions. We explain all the details of the capture parameters and acquisition procedure so that one can easily study the effect of different factors on the performance of algorithms executed on LCAV-31. Moreover, we apply a set of basic object recognition algorithms on LCAV-31. The results of these experiments can be used as a baseline for the development of novel algorithms.