In this work we report an improved platform for testing and comparing particles for use in optical trap displays. We constructed seven prototypes, and deployed them to five different locations where they were successfully used to perform comparative optical particle trap tests. This improved rig makes it possible to expand optical trap display research by a decentralized group of citizen scientists.
We explore a number of diffractive structures for simultaneous trapping of particles in photophoretic traps for free-space volumetric displays. Current optical trap displays move a single particle along a complicated path; to scale displays from 1 cm^3 to 100 cm^3 we aim to move multiple particles along a simple path. Preliminary results for two diffractive approaches are reported: i) a low-order binary grating, and ii) a Fresnel lens. Results are given for trap rate (N=50) for carbon particles at standard temperature and pressure.
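As a rough illustration of the two diffractive approaches named above (not code from the paper), the phase profiles below show how a single trapping beam could be split into multiple trap sites; every parameter (wavelength, sample pitch, grating period, focal length) is an assumption for the sketch:

```python
import numpy as np

# Illustrative sketch: diffractive phase profiles for multi-particle trapping.
wavelength = 532e-9              # assumed trap-laser wavelength (m)
n = 512                          # samples across the aperture
pitch = 10e-6                    # assumed sample spacing (m)
x = (np.arange(n) - n / 2) * pitch

# (i) Low-order binary grating: a 0/pi phase of period d diverts most of the
# energy into the +1/-1 diffraction orders, giving two laterally
# separated foci (two candidate trap sites).
d = 80e-6                        # assumed grating period (m)
binary_phase = np.where(np.mod(x, d) < d / 2, np.pi, 0.0)

# (ii) Fresnel lens: a wrapped quadratic phase adds a second focus at axial
# distance f, stacking trap sites along the beam axis.
f = 0.05                         # assumed focal length (m)
fresnel_phase = np.mod(-np.pi * x**2 / (wavelength * f), 2 * np.pi)
```

Either profile can be superposed on the trapping beam with a phase modulator or etched diffractive optic; the grating separates traps laterally while the lens separates them axially.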
Photophoresis can trap opaque microscopic particles in a focused laser beam surrounded by a gas such as air. The particle is heated by the laser, and in turn, interactions with the ambient gas provide a stabilizing force that holds the particle in a specific region of the beam. The particles can stay trapped while the beam is moved side to side at up to 2 m/s, enabling three-dimensional images to be traced out in a display application. Structure in the laser beam is associated with the trapping phenomenon, but the fundamental mechanism for stability of the trap remains mysterious. Particles prefer regions of the beam with diffraction features such as those that arise from spherical aberration. Nevertheless, the ability of near-unidirectional light, albeit light that undergoes focusing and exhibits structure, to provide a restoring force on trapped particles in the direction opposite to beam propagation remains to be explained. Through repeated trials of capturing particles in a well-characterized beam, we map out the preferred locations for particle capture and correlate them with diffraction features of the beam. The specific beam locations that host trapped particles, when compared with neighboring regions that do not, can offer insight into the stability mechanism. We analyze the Poynting vector in the vicinity of trapped particles. The flow of light energy can provide important clues into the trapping mechanism.
We have previously presented a novel spatial light modulator appropriate for use in transparent, flat-panel holographic display applications. Our architecture consists of an anisotropic leaky-mode coupler and integrated Bragg reflection grating as a monolithic device implemented in lithium niobate and is fabricated using direct femtosecond laser writing techniques. In this paper, we present a methodology for the experimental characterization of holographically-reconstructed point spread functions from sample devices.
Photophoresis can stably hold opaque microscopic particles in a laser focus surrounded by room air with strength sufficient to enable centimeter-scale patterns to be drawn by sweeping the laser beam. The resulting images rely on visual persistence as laser light scatters from the particle, which is rapidly swept through the 3-D pattern. Control can be maintained while moving the particle at air speeds up to 2 m/s. A desire to greatly increase the sweep speed motivates a re-examination of the fundamentals of photophoresis-based laser-particle traps. Most explanations offered are qualitative, with differing opinions as to whether, for example, asymmetric heating or asymmetric thermal accommodation is primarily at work. Which particles become trapped in the beam is typically a matter of self-selection, as a variety of particles with differing shapes and sizes are offered to the laser focus for capture. Characteristics that make some particles preferred over others are especially relevant. There is broad consensus that structure in the laser focus greatly aids in stable trapping. Nevertheless, it is still possible for even a relatively smooth TEM00 beam to capture and hold particles. Moreover, even in a structured focus (i.e., with aberrations and local intensity minima and maxima), questions remain as to exactly how a particle becomes stably trapped in certain beam locations. A zoomed-in look at trapped particles reveals oscillations or orbits with excursions over tens of microns and accelerations up to 10 g. We trapped particles in zero-gravity as well as 2-g environments with no noticeable difference in stability.
We have previously introduced a monolithic, integrated optical platform for transparent, flat-panel holographic displays suitable for near-to-eye displays in augmented reality systems. This platform employs a guided-wave acousto-optic spatial light modulator implemented in lithium niobate in conjunction with an integrated Bragg-regime reflection volume hologram. In this paper, we present an analysis of three key system attributes that inform and influence the display system performance: 1) single-axis diffraction-driven astigmatism, 2) strobed illumination to enforce acousto-optic grating stationarity, and 3) the acousto-optically driven spatial Nyquist rate.
Near-to-eye holographic displays project wavefronts directly into a viewer’s eye in order to recreate 3-D scenes for augmented or virtual reality applications. Recently, several solutions for near-to-eye electroholography have been proposed based on digital spatial light modulators in conjunction with supporting optics, such as holographic waveguides for light delivery; however, such schemes are limited by the inherently low space-bandwidth product available with current digital SLMs. In this paper, we present a fully monolithic, integrated optical platform for transparent near-to-eye holographic display requiring no supporting optics. Our solution employs a guided-wave acousto-optic spatial light modulator implemented in lithium niobate in conjunction with an integrated Bragg-regime reflection volume hologram.
Waveguide holography refers to the use of holographic techniques for the control of guided-wave light in integrated optical
devices (e.g., off-plane grating couplers and in-plane distributed Bragg gratings for guided-wave optical filtering).
Off-plane computer-generated waveguide holography (CGWH) has also been employed in the generation of simple field
distributions for image display. We have previously described the design and fabrication of a binary-phase CGWH operating
in the Raman-Nath regime for the purposes of near-to-eye 3-D display and as a precursor to a dynamic, transparent
flat-panel guided-wave holographic video display. In this paper, we describe design algorithms and fabrication techniques
for multilevel phase CGWHs for near-to-eye 3-D display.
We give a summary of the progress we have made in the fabrication of guided wave devices for use in holographic video displays. This progress includes identifying anisotropic leaky-mode modulators as a platform for holographic display, the development of a characterization apparatus to extract key parameters from leaky-mode devices, and the identification of optimized waveguide parameters for frequency-controlled color display.
The MIT Mark IV holographic display system employs a novel anisotropic leaky-mode spatial light modulator that allows for the simultaneous and superimposed modulation of red, green, and blue light via wavelength-division multiplexing. This WDM-based scheme for full-color display requires that incoming video signals containing holographic fringe information consist of non-overlapping spectral bands that fall within the available 200 MHz output bandwidth of commercial GPUs. These bands correspond to independent color channels in the display output and are appropriately band-limited and centered to match the multiplexed passbands and center frequencies in the frequency response of the mode-coupling device. The computational architecture presented in this paper computes holographic fringe patterns for each color channel and sums them to generate a single video signal for input to the display. In aggregate, 18 such input signals, each containing holographic fringe information for 26 horizontal-parallax-only holographic lines, are generated via three dual-head GPUs for a total of 468 holographic lines in the display output. We present a general scheme for full-color CGH computation for input to Mark IV and furthermore describe the adaptation of the diffraction-specific coherent panoramagram approach to fringe computation for the Mark IV architecture.
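A minimal sketch of the band-allocation step described above, under the assumption of an equal three-way split of the 200 MHz output bandwidth (the actual Mark IV passbands are set by the mode-coupling device's frequency response, not by this rule):

```python
# Hypothetical sketch: pack three non-overlapping color-channel bands into
# a 200 MHz output signal. The equal split is an illustrative assumption.
total_bw = 200e6                       # available GPU output bandwidth (Hz)
channels = ["red", "green", "blue"]
channel_bw = total_bw / len(channels)  # width allotted to each color band

# Center each band inside its slice of the spectrum.
centers = {c: (i + 0.5) * channel_bw for i, c in enumerate(channels)}

def band_edges(center, bw):
    """Lower and upper edge of a channel band, in Hz."""
    return (center - bw / 2, center + bw / 2)

# The fringe signal for each color is band-limited to channel_bw and shifted
# to its center before the three are summed into one video signal.
edges = [band_edges(centers[c], channel_bw) for c in channels]
```

The non-overlap requirement is what lets the mode-coupling device demultiplex the summed signal back into independent red, green, and blue passbands.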
This article [Opt. Eng. 52(5), 055801 (2013)] was originally published on 7 May 2013 with an error in Eq. (1). The second plus sign was corrected to a minus sign, so that the equation now reads:
W(x,s) = \int_{-\infty}^{\infty} U\left(x + \frac{x'}{2}\right) U^{*}\left(x - \frac{x'}{2}\right) e^{-j 2\pi x' s}\, dx'. \quad (1)
Minor grammatical corrections were also made.
The paper was corrected online on 8 May 2013. It appears correctly in print.
The Authors
An optical architecture for updatable photorefractive polymer-based holographic displays via the direct fringe writing of computer-generated holograms is presented. In contrast to interference-based stereogram techniques for hologram exposure in photorefractive polymer (PRP) materials, the direct fringe writing architecture simplifies system design, reduces system footprint and cost, and offers greater affordances over the types of holographic images that can be recorded. This paper reviews motivations and goals for employing a direct fringe writing architecture for photorefractive holographic imagers, describes our implementation of direct fringe transfer, presents a phase-space analysis of the coherent imaging of fringe patterns from spatial light modulator to PRP, and presents resulting experimental holographic images on the PRP resulting from direct fringe transfer.
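The phase-space analysis mentioned above is built on the Wigner distribution of the coherent field. As a rough numerical illustration of the idea (not the paper's implementation), a discrete Wigner distribution of a sampled 1-D field can be computed by Fourier-transforming the field's local autocorrelation at each position:

```python
import numpy as np

def wigner(u):
    """Discrete Wigner distribution of a sampled 1-D complex field u.

    Sketch only: row W[ix] is the Fourier transform, over the shift index k,
    of the local autocorrelation product u[ix+k] * conj(u[ix-k]).
    """
    n = len(u)
    W = np.zeros((n, n))
    for ix in range(n):
        corr = np.zeros(n, dtype=complex)
        for k in range(-(n // 2), n // 2):
            ip, im = ix + k, ix - k
            if 0 <= ip < n and 0 <= im < n:
                corr[k % n] = u[ip] * np.conj(u[im])
        W[ix] = np.fft.fft(corr).real
    return W

# Example: a Gaussian field. Summing each row of W over the frequency axis
# recovers (up to the FFT scale factor n) the local intensity |u(x)|^2.
x = np.linspace(-4, 4, 64)
u = np.exp(-x**2).astype(complex)
W = wigner(u)
```

Tracking how such a distribution shears and rotates under free-space propagation and lenses is what lets a phase-space analysis predict where the SLM fringe pattern is faithfully imaged onto the PRP.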
We have previously introduced an architecture for updatable photorefractive holographic display based on direct fringe writing of computer-generated holographic fringe patterns. In contrast to interference-based stereogram techniques for hologram exposure in photorefractive polymer (PRP) materials, the direct fringe writing architecture simplifies system design, reduces system footprint and cost, and offers greater affordances over the types of holographic images that can be recorded. In this paper, motivations and goals for employing a direct fringe writing architecture for photorefractive holographic imagers are reviewed, new methods for PRP exposure by micro-optical fields generated via spatial light modulation and telecentric optics are described, and resulting holographic images are presented and discussed. Experimental results are reviewed in the context of theoretical indicators for system performance.
A holographic television system, featuring real-time incoherent 3D capture and live holographic display, is used for experiments
in depth perception. Holographic television has the potential to provide more complete visual representations,
including latency-free motion parallax and more natural affordances for accommodation. Although this technology has potential
to improve realism in many display applications, we investigate benefits in uses where direct vision of a workspace
is not possible. Applications of this nature include work with hazardous materials, teleoperation over distance, and laparoscopic
surgery. In this study, subjects perform manual 3D object manipulation tasks where they can only see the
workspace through holographic closed-circuit television. The study compares performance at manual tasks
using holographic television with performance using displays that mimic 2D and stereoscopic television.
We have previously introduced the Diffraction Specific Coherent Panoramagram - a multi-view holographic stereogram
that provides correct visual accommodation as well as smooth motion parallax with far fewer views than a normal stereogram.
This method uses scene depth information to generate directionally-varying wavefront curvature, and can be computed
at interactive rates using off-the-shelf graphics processors. In earlier work we used z-buffer information associated
with parallax views rendered from synthetic graphics models; in this paper we demonstrate the computation of Diffraction
Specific Coherent Panoramagrams of real-world scenes captured by cameras.
Image-based holographic stereogram rendering methods for holographic video have the attractive properties of moderate
computational cost and correct handling of occlusions and translucent objects. These methods are also subject to the
criticism that (like other stereograms) they do not present accommodation cues consistent with vergence cues and thus
do not make use of one of the significant potential advantages of holographic displays. We present an algorithm for the
Diffraction Specific Coherent Panoramagram -- a multi-view holographic stereogram with correct accommodation cues,
smooth motion parallax, and visually defined centers of parallax. The algorithm is designed to take advantage of parallel
and vector processing in off-the-shelf graphics cards using OpenGL with Cg vertex and fragment shaders. We introduce
wavefront elements - "wafels" - as a progression of picture element "pixels", directional element "direls", and holographic
element "hogels". Wafel apertures emit controllable intensities of light in controllable directions with controllable
centers of curvature, providing accommodation cues in addition to disparity and parallax cues. Based on simultaneously
captured scene depth information, sets of directed variable wavefronts are created using nonlinear chirps, which
allow coherent diffraction of the beam across multiple wafels. We describe an implementation of this algorithm using a
commodity graphics card for interactive display on our Mark II holographic video display.
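As a rough sketch of the nonlinear-chirp idea behind a wafel (parameters are assumptions for illustration, not taken from the Mark II implementation), the fringe can be written as the sampled phase of a spherical wavefront converging on a scene point at depth z; varying z per wafel varies the chirp rate and hence the emitted wavefront curvature:

```python
import numpy as np

# Hypothetical wafel chirp: a fringe whose local spatial frequency steers
# light toward a point at depth z, so the emitted wavefront carries the
# curvature (accommodation) cue in addition to direction.
wavelength = 633e-9      # assumed red laser wavelength (m)
pitch = 0.5e-6           # assumed hologram sample pitch (m)
x = np.arange(1024) * pitch
x0, z = x[512], 0.1      # scene point: lateral position x0, depth z (m)

# Phase of a spherical wave about (x0, z). Its derivative, the local spatial
# frequency, grows nonlinearly with |x - x0|: a depth-dependent chirp.
phase = (2 * np.pi / wavelength) * (np.sqrt((x - x0) ** 2 + z ** 2) - z)
fringe = 0.5 * (1 + np.cos(phase))   # real, non-negative amplitude fringe
```

In shader form, each fragment evaluates this phase for the depth stored in the captured z-buffer, which is what makes the computation a good fit for GPU vertex/fragment pipelines.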
Horizontal-parallax-only holographic stereograms of nearly SDTV resolution (336 pixels by 440 lines by 96 views) of
textured and normal-mapped models (500 polygons) are rendered at interactive rates (10 frames/second) on a single
dual-head commodity graphics processor for use on MIT's third-generation electro-holographic display. The holographic
fringe pattern is computed by a diffraction specific holographic stereogram algorithm designed for efficient
parallelized vector implementation using OpenGL and Cg vertex/fragment shaders. The algorithm concentrates on light-field
reconstruction by holographic fringes rather than on simulating the interferometric process that would create
those fringes.
The novel frequency-multiplexed modulator architecture of the MIT Mark III holo-video display poses a significant
challenge in generation of appropriate video signals. Unlike in our previous work, here it is necessary to generate a
group of adjacent single-sideband RF signals; as this display is intended to be manufacturable at consumer-electronics
prices we face the added requirement of compact and inexpensive electronics that are compatible with standard PC
graphics processors. In this paper we review the goals and architecture of Mark III and then describe our experiments
and results in the use of a hardware/software implementation of Weaver's single-sideband modulation method to upconvert
six 200 MHz baseband analog video signals to a set of RF signals covering a nearly contiguous 1 GHz range. We
show that our method allows efficient generation of non-overlapping signals without aggressive filtering.
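A rough numerical sketch of Weaver's method (all frequencies below are illustrative assumptions; the actual Mark III channel plan differs): the baseband signal is quadrature-mixed against a first LO at half the channel bandwidth, low-pass filtered, then quadrature-mixed up to the RF center, and summing the two paths cancels one sideband with no sharp RF filter:

```python
import numpy as np

# Weaver single-sideband upconversion, sketched with assumed parameters.
fs = 4.096e9                 # sample rate chosen so test tones land on FFT bins
n = 4096
t = np.arange(n) / fs
bw = 200e6                   # baseband video bandwidth (Hz)

def lowpass(sig, cutoff):
    """Brick-wall FFT low-pass; adequate for this sketch only."""
    spec = np.fft.fft(sig)
    freqs = np.fft.fftfreq(len(sig), d=1 / fs)
    spec[np.abs(freqs) > cutoff] = 0.0
    return np.fft.ifft(spec).real

def weaver_ssb(baseband, f_rf):
    f1 = bw / 2                                        # first quadrature LO
    i = lowpass(baseband * np.cos(2 * np.pi * f1 * t), f1)
    q = lowpass(baseband * np.sin(2 * np.pi * f1 * t), f1)
    # Second quadrature mix translates the band so its center lands at f_rf;
    # the sum retains the upper sideband and cancels the lower.
    return i * np.cos(2 * np.pi * f_rf * t) + q * np.sin(2 * np.pi * f_rf * t)

# A 150 MHz baseband tone upconverted into a channel centered at 600 MHz
# lands at 600 + (150 - 100) = 650 MHz.
tone = np.cos(2 * np.pi * 150e6 * t)
rf = weaver_ssb(tone, 600e6)
```

Because the unwanted sideband is canceled by phasing rather than filtering, adjacent channels can be packed nearly contiguously across the 1 GHz range with only the gentle low-pass stage in each baseband path.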
We introduce a new holo-video display architecture ("Mark III") developed at the MIT Media Laboratory. The goal of
the Mark III project is to reduce the cost and size of a holo-video display, making it into an inexpensive peripheral to a
standard desktop PC or game machine which can be driven by standard graphics chips. Our new system is based on
lithium niobate guided-wave acousto-optic devices, which give twenty or more times the bandwidth of the tellurium
dioxide bulk-wave acousto-optic modulators of our previous displays. The novel display architecture is particularly designed
to eliminate the high-speed horizontal scanning mechanism that has traditionally limited the scalability of Scophony-
style video displays. We describe the system architecture and the guided-wave device, explain how it is driven
by a graphics chip, and present some early results.