Depth mapping or depth sensing has become a popular field, applied not only to automotive
sensing for collision avoidance (radar) but also to gesture sensing for gaming and virtual
interfaces (optical). Popular gesture sensing devices such as Microsoft's Kinect for the Xbox
gaming console produce a full absolute depth map, which in most cases is not adapted to the task
at hand (relative gesture sensing). In this paper we propose a new gesture sensing technique
based on structured IR illumination that provides a relative depth map rather than an absolute
one, thereby reducing the computing power requirements and enabling this technology for
wearable computing such as see-through displays.
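To illustrate why a relative depth map is computationally cheaper than an absolute one, here is a minimal toy sketch (not the actual device pipeline; all numbers and thresholds are hypothetical assumptions): with structured illumination, the lateral shift of each projected feature grows monotonically as the surface approaches, so comparing shifts between frames yields relative depth without any triangulation or absolute calibration.

```python
import numpy as np

def relative_depth_map(disparity, reference_disparity):
    """Signed relative depth: positive where the scene moved toward
    the sensor compared to the reference frame (no calibration needed)."""
    return disparity - reference_disparity

# Toy data: a 4x4 patch of projected-feature shifts (pixels), with a hand
# raised toward the sensor in the center of the field.
ref = np.full((4, 4), 10.0)        # resting-pose disparities
cur = ref.copy()
cur[1:3, 1:3] += 3.0               # center features shifted: surface closer

rel = relative_depth_map(cur, ref)
gesture_detected = rel.max() > 2.0  # hypothetical threshold on relative change
print(gesture_detected)             # True for this toy input
```

The point of the sketch is that no baseline, focal length, or absolute calibration constants appear anywhere: only frame-to-frame comparisons, which is what keeps the computing requirements low.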
Recently, various HUD (Head-Up Display) projection engines have been introduced to industry, suited to the stringent
price/size/functionality constraints of the automotive market. Some of the most promising projectors are laser based and
project a far-field image directly superimposed onto the driver's field of view by means of an optical combiner.
One of the major drawbacks of such technology is the parasitic speckle produced by the coherent nature of the illumination
(laser light), which has been proven to be very annoying and distracting for the driver. We propose a method to overcome the
parasitic speckle phenomenon in the HUD application, not by reducing it directly but rather by uniformizing the speckle
within the integration time of the eye, through generation of an orthogonal set of speckle modes for a single projected image.
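The averaging principle can be illustrated numerically. The sketch below is a generic simulation, not the projection engine itself: it sums N uncorrelated, fully developed speckle patterns in intensity, as the eye does within its integration time, and shows the perceived contrast dropping as 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_pattern(n=256):
    """Fully developed speckle: intensity of a complex circular
    Gaussian field, whose contrast is ~1."""
    field = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.abs(field) ** 2

def contrast(img):
    """Speckle contrast: standard deviation over mean of intensity."""
    return img.std() / img.mean()

# The eye integrates intensity, so N orthogonal (uncorrelated) speckle
# modes summed within its integration time reduce contrast as 1/sqrt(N).
single = contrast(speckle_pattern())
averaged = contrast(sum(speckle_pattern() for _ in range(16)))

print(round(single, 2))    # ~1.0
print(round(averaged, 2))  # ~0.25, i.e. 1/sqrt(16)
```

This is why generating an orthogonal set of speckle modes for a single projected image suppresses the perceived speckle without reducing the speckle of any individual mode.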
Hybrid CSP / CPV (Concentrating Solar Power / Concentration Photovoltaic) systems provide a good
alternative to traditional CPV systems or CSP trough architectures. Such systems are often described as
solar cogeneration systems.
Trough systems mainly use the IR portion of the spectrum to heat a pipe in which water
circulates. CPV systems use only the visible portion of the spectrum for photovoltaic
conversion. Due to the achromatic nature of traditional thermal trough CSP systems, it is very unlikely that
a CPV system can be integrated with a CSP system, even a low-concentration CPV system (LCPV).
We propose a novel technique to implement a low concentration CSP/LCPV system which relies on
commercially available solar trough concentrators / trackers that use reflective stretched Mylar membranes.
However, here the Mylar is embossed with microstructures that act only on the visible portion of the
spectrum, leaving the infrared part of the solar spectrum unperturbed.
This architecture has many advantages, such as: the existing Mylar-based thermal trough architecture is left
unperturbed for optimal thermal conversion, with linear strips of PV cells located a few inches away from
the central water pipe; the infrared radiation is focused on the central pipe, away from the PV cells, which
remain relatively cool compared to conventional LCPV designs (only visible light (the PV convertible part
of the solar spectrum) is diffracted onto the PV cell strips); and the Mylar sheets can be embossed by
conventional roll-to-roll processes, with a one-dimensional symmetric micro-structured pattern.
We show how the positive master elements are designed and fabricated over a small area (using traditional
IC wafer fabrication techniques), and how the Mylar sheets are embossed by a recombined negative nickel
shim. We also show that such a system can efficiently filter the visible spectrum and divert it onto the
linear strips of PV cells, while leaving the infrared part of the spectrum unperturbed to heat the water in the central pipe.
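The geometry of the visible-light diversion can be sketched with the first-order grating equation. In the snippet below, the grating period and membrane-to-focus distance are purely illustrative assumptions, and the wavelength selectivity that keeps the infrared in the undiffracted zero order (a property of the engineered structure depth and diffraction efficiency) is not modeled; the sketch only shows where the visible first order lands relative to the central pipe.

```python
import numpy as np

# A 1-D grating of period L diffracts wavelength lam into its first order
# at sin(theta) = lam / L (normal incidence). Both numbers below are
# assumptions for illustration, not the paper's actual design values.
period_um = 3.0                          # assumed grating period (micrometers)
distance_m = 1.0                         # assumed membrane-to-focus distance

for lam_nm in (400, 550, 700):           # blue, green, red
    sin_t = (lam_nm * 1e-3) / period_um  # first-order grating equation
    theta = np.arcsin(sin_t)
    offset_cm = 100 * distance_m * np.tan(theta)
    print(f"{lam_nm} nm -> {np.degrees(theta):.1f} deg, {offset_cm:.1f} cm")
```

With these assumed values the visible band is spread over a band of lateral offsets roughly ten centimeters wide, consistent with linear PV strips placed a fixed distance away from the central water pipe.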
When a new technology is integrated into industry commodity products and consumer electronic
devices, and sold worldwide in retail stores, it is usually understood that this technology has then
entered the realm of mainstream technology and therefore mainstream industry.
Such a leap, however, does not come cheap, as it has a double-edged-sword effect: the technology
becomes democratized and thus massively developed by numerous companies for various applications,
but it also becomes a commodity, and thus comes under tremendous pressure to cut its production
and integration costs without sacrificing performance.
We will show, based on numerous examples extracted from recent industry history, that the field
of Diffractive Optics is about to undergo such a major transformation. Such a move has many
impacts on all facets of digital diffractive optics technology, from the optical design houses to the
micro-optics foundries (for both mastering and volume replication), to the final product
integrators or contract manufacturers.
The main causes of such a transformation are, as they have been for many other technologies in
industry, successive technological bubbles which have carried and lifted up diffractive optics
technology within the last decades. These various technological bubbles have been triggered
either by real industry needs or by virtual investment hype. Both of these causes will be discussed
in the paper.
The adjective "digital" in "digital diffractive optics" does not refer only, as in digital
electronics, to the digital functionality of the element (digital signal processing), but rather to the
digital way these elements are designed (by a digital computer) and fabricated (as wafer-level optics
using digital masking techniques). However, we can still trace a very strong similarity between the
emergence of micro-electronics from analog electronics half a century ago, and the emergence of
digital optics from conventional optics today.
We present a novel implementation of virtual optical interfaces for the transportation industry
(automotive and avionics). This new implementation combines two functionalities in a single
device: projection of a virtual interface and sensing of the position of the fingers on top of the
virtual interface. Both functionalities are produced by diffraction of laser light. The device we are
developing includes both functionalities in a compact package with no optical elements to
align, since all of them are pre-aligned on a single glass wafer through optical lithography. The
package contains a CMOS sensor whose diffractive objective lens is optimized both for the projected
interface color and for the IR finger-position sensor based on structured illumination. Two
versions are proposed: one that senses the 2D position of the hand and one that senses the hand
position in 3D.
The US Port Security Agency has strongly emphasized the need for tighter control at transportation hubs.
Distributed arrays of miniature CMOS cameras provide some solutions today. However, due to the
high bandwidth required and the low-value content of such cameras (simple video feeds), large computing
power, analysis algorithms, and control software are needed, which makes such an architecture
cumbersome, heavy, slow, and expensive.
We present a novel technique based on integrating cheap, mass-replicable, stealthy 3D-sensing micro-devices in
a distributed network. These micro-sensors are based on conventional structured illumination, projecting successive
fringe patterns on the object to be sensed. The communication bandwidth between sensors remains
very small, but the data carry very high-value content. The key technologies for integrating such a sensor are digital optics
and structured laser illumination.
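Fringe-based micro-sensors of this kind typically recover depth from a per-pixel phase. As an illustration, the classic four-step phase-shifting algorithm is sketched below (an assumption: the abstract does not specify its reconstruction method). Four fringe images shifted by 90 degrees give the phase in closed form at each pixel, which is why the per-sensor output can be compact yet information-rich.

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Per-pixel phase from four fringe images I_k = A + B*cos(phi + k*pi/2).
    I3 - I1 = 2B*sin(phi) and I0 - I2 = 2B*cos(phi), so arctan2 recovers phi."""
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic check: build the four shifted images from a known phase map.
phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, 64)          # ground-truth phase
frames = [5.0 + 2.0 * np.cos(phi + k * np.pi / 2) for k in range(4)]

recovered = wrapped_phase(*frames)
print(np.allclose(recovered, phi))   # True: phase recovered exactly
```

Note that the recovered phase is wrapped to (-pi, pi]; a real depth sensor would add a phase-unwrapping step, which is omitted here.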