This PDF file contains the front matter associated with SPIE Proceedings Volume 9495, including the Title Page, Copyright information, Table of Contents, Introduction, Authors, and Conference Committee listing.
In this contribution we propose the use of a liquid lens (LL) to perform three-dimensional (3D) imaging. Our method consists of inserting the LL at the aperture stop of a telecentric microscope. Sequential depth images of 3D samples are obtained by tuning the focal length of the LL. Our experimental results demonstrate that fast axial scanning of microscopic images is achieved without changing either the resolution or the magnification of the imaging system. Furthermore, this non-mechanical approach can be easily implemented in any commercial optical microscope.
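As a back-of-the-envelope illustration (not from the paper), a commonly used first-order relation for a tunable lens placed at the aperture stop, i.e. the back focal plane, of a telecentric objective is that the in-focus plane shifts by dz = f_obj² · P_LL while the magnification stays fixed; the sketch below, with purely illustrative values, shows the scan range this would imply.

```python
# First-order estimate (illustrative, not from the paper) of the axial scan
# produced by a tunable liquid lens (LL) at the aperture stop of a
# telecentric objective: the in-focus plane shifts by dz = f_obj**2 * P_LL
# while the lateral magnification stays constant.

f_obj = 9e-3                      # assumed objective focal length [m]
powers = [-5.0, 0.0, 5.0, 10.0]   # assumed LL optical power settings [diopters]

for P in powers:
    dz = f_obj**2 * P             # axial shift of the in-focus plane [m]
    print(f"P_LL = {P:+5.1f} D  ->  focal-plane shift = {dz * 1e6:+8.1f} um")
```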
We report our recent developments in DFD (depth-fused 3D) displays and arc 3D displays, both of which provide smooth motion parallax. First, the fatigue-free DFD display, composed of only two layered displays separated by a gap, produces continuous perceived depth by changing the luminance ratio between the two images. Two new methods, called the “edge-based DFD display” and the “deep DFD display”, have been proposed to address its two severe limitations of viewing angle and perceived depth. The edge-based DFD display, which layers an original 2D image and its edge component with a gap, expands the DFD viewing-angle limitation in both 2D and 3D perception. The deep DFD display enlarges the DFD image depth by modulating the spatial frequencies of the front and rear images. Second, the arc 3D display provides floating 3D images behind or in front of the display by illuminating many arc-shaped directional scattering sources, for example, arc-shaped scratches on a flat board. The curved arc 3D display, composed of many directional scattering sources on a curved surface, can provide a peculiar 3D image, for example, an image floating in a cylindrical bottle. A new active device has been proposed for switching arc 3D images that uses the tips of dual-frequency liquid-crystal prisms as directional scattering sources. The directional scattering can be switched on and off by changing the liquid-crystal refractive index, thereby switching the arc 3D image.
Autostereoscopic three-dimensional display technologies are described that use novel optical imaging systems based on retro-reflection with mirror arrays: a dihedral corner reflector array (DCRA) and a roof mirror array (RMA). The proposed methods can generate a low-distortion aerial 3-D image with high numerical aperture on the basis of retro-reflective imaging. As examples of 3-D displays based on retro-reflective imaging, a multi-view stereoscopic display using a DCRA and a volumetric display using an RMA are described. The multi-view stereoscopic display achieves aerial image formation not only of the display images but also of the pupils of the projectors around the viewing position using a DCRA. This feature is effective in keeping accommodation and convergence cues consistent in stereoscopic display. The volumetric display using an RMA generates a 3-D image with natural depth information by arranging light points in a 3-D volume using a relatively simple optical configuration. This method provides natural depth perception and physical accessibility to the image. Experimental demonstrations of the generation of floating autostereoscopic images are presented to verify the validity of the proposed methods.
As advanced display technology has developed, much attention has been given to flexible panels. Moreover, with the momentum of the 3D era, stereoscopic 3D techniques have been combined with curved displays. However, despite the increased need for 3D functionality in curved displays, comparisons between curved and flat panel displays presenting 3D views have rarely been tested. Most previous studies have investigated basic ergonomic aspects such as viewing posture and distance with only 2D views. It is generally known that curved displays are more effective in enhancing involvement in specific content stories, because the field of view and the distances from the viewer's eyes to both edges of the screen are more natural on curved displays than on flat panels. With flat panel displays, ocular torsion may occur when viewers move their eyes from the center to the edges of the screen to continuously track rapidly moving 3D objects. This is due in part to the difference between the viewing distance from the center of the screen to the viewer's eyes and that from the edges of the screen, as illustrated by the sketch below. Thus, this study compared the S3D viewing experiences induced by a curved display with those of a flat panel display by evaluating relevant subjective and objective measures.
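To make the geometric argument concrete, this small sketch compares the eye-to-edge distance on a flat panel with that on a curved screen whose radius of curvature equals the viewing distance; the screen width and viewing distance are illustrative assumptions, not values from the study.

```python
import numpy as np

# Eye-to-edge viewing distance: flat panel versus a curved screen whose
# radius of curvature equals the viewing distance (every point on the curved
# screen then lies at the same distance from the eye).

W = 1.2   # screen width [m] (assumed)
d = 1.0   # viewing distance to screen center [m] (assumed)

d_edge_flat = np.hypot(d, W / 2)   # flat panel: center-to-edge path lengthens
d_edge_curved = d                  # curvature radius = d: distance unchanged

print(f"flat panel, eye to edge:    {d_edge_flat:.3f} m "
      f"(+{(d_edge_flat - d) * 100:.1f} cm vs center)")
print(f"curved screen, eye to edge: {d_edge_curved:.3f} m (no change)")
```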
In this paper the accommodation responses to integral photography still images were measured. The experimental results showed that the accommodation responses varied linearly with the depicted depth position of the integral photography images, even when the images were located outside the depth of field. Furthermore, the discrimination of depth perception, which relates to the blur of integral photography images, was subjectively evaluated to examine its influence on the accommodation response. The range over which depth could be discriminated was narrow in comparison with the range of the linear accommodation response. However, these results were consistent with the trend of statistical significance for the discrimination of depth perception outside the range of subjectively effective discrimination.
Although various studies have examined the factors that cause visual discomfort when watching stereoscopic 3D video, the brightness factor has not been dealt with sufficiently. In this paper, we analyze visual discomfort under various illumination conditions by considering eye-blinking rate and saccadic eye movement. In addition, we measure the perceived depth before and after watching stereoscopic 3D video using our own 3D depth measurement instruments. Our test sequences consist of six background illumination conditions. The illumination changes from bright to dark or vice versa, while the illumination of the foreground object is constant. Our test procedure is as follows. First, the subjects rest until a baseline of no visual discomfort is established. Then, the subjects answer six questions to check their subjective pre-stimulus discomfort level. Next, we measure the perceived depth for each subject, and the subjects watch 30-minute stereoscopic 3D or 2D video clips in random order. We measure the eye blinking and saccadic movements of each subject using an eye-tracking device. Then, we measure the perceived depth again to detect any changes in depth perception. We also check each subject's post-stimulus discomfort level, and measure the perceived depth after a 40-minute post-experiment resting period to assess recovery. After 40 minutes, most subjects returned to normal levels of depth perception. From our experiments, we found that eye-blinking rates were higher with a dark-to-light video progression than vice versa, whereas saccadic eye movements were lower with a dark-to-light progression than vice versa.
A simulator that can test the super-multiview condition is introduced. It allows viewers to see two adjacent view images with each eye simultaneously, and it displays the patched images that appear in the viewing zone of a contact-type multiview 3-D display. An accommodation and vergence test with an accommodometer reveals that viewers can verge and accommodate even on images at 600 mm and 2.7 m from them when the display screen/panel is located 1.58 m away. This verging and accommodating distance range is much wider than the 1.3 m to 1.9 m range determined by the viewers' depth of field. Furthermore, the patched images also provide a good depth sense, which can be better than that from the individual view images.
There is growing interest in target detection and tracking in anti-access/area denial (A2AD) environments, where sensor platforms are at low altitudes and imagery is collected at oblique angles. Targets of interest in these scenarios are typically partially or mostly occluded by foliage or other objects. We present experiments illustrating the reconstruction of obscured targets using integral imaging, on both synthetically generated data and data collected with a multi-sensor system. We also explore the effects of integral imaging on aided target recognition (AiTR), as well as the performance improvement in target tracking.
We propose the fusion of two concepts that are very successful in the area of 3D imaging and sensing. Kinect technology permits the registration, in real time but with low resolution, of accurate depth maps of large, opaque, diffusing 3D scenes. Our proposal consists of transforming the sampled depth map provided by the Kinect technology into an array of microimages whose position, pitch, and resolution are in good accordance with the characteristics of an integral-imaging monitor. By projecting this information onto such a monitor we are able to produce 3D images with continuous perspective and full parallax.
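A minimal sketch of the depth-map-to-microimage conversion is given below, assuming a pinhole model for each microlens; the resolutions, pitch, gap, and metric scale are invented for illustration, and occlusion handling (nearest point wins) is omitted.

```python
import numpy as np

# Hypothetical sketch: map an RGB-D frame (color + depth) into an array of
# microimages through a pinhole model of each microlens. All parameters are
# assumptions; occlusion handling is omitted for brevity.

H, W = 120, 160                        # RGB-D resolution (assumed)
rgb = np.random.rand(H, W, 3)          # stand-in for the Kinect color frame
depth = 1.0 + np.random.rand(H, W)     # stand-in for the Kinect depth map [m]

n_lens, n_pix = 20, 8                  # 20x20 microlenses, 8x8 pixels each
pitch, gap = 1e-3, 3e-3                # lens pitch and lens-panel gap [m]
scale = 2e-3                           # RGB-D pixel -> scene metric scale [m]
pix = pitch / n_pix                    # display pixel size [m]

ys, xs = np.mgrid[0:H, 0:W]
X = (xs - W / 2) * scale               # scene coordinates of each sample
Y = (ys - H / 2) * scale
Z = depth

micro = np.zeros((n_lens * n_pix, n_lens * n_pix, 3))
for ly in range(n_lens):
    for lx in range(n_lens):
        cx = (lx - n_lens / 2 + 0.5) * pitch    # microlens center
        cy = (ly - n_lens / 2 + 0.5) * pitch
        # project every scene point through this lens center onto its cell
        u = np.round((X - cx) * gap / Z / pix + n_pix / 2).astype(int)
        v = np.round((Y - cy) * gap / Z / pix + n_pix / 2).astype(int)
        ok = (u >= 0) & (u < n_pix) & (v >= 0) & (v < n_pix)
        micro[ly * n_pix + v[ok], lx * n_pix + u[ok]] = rgb[ok]
```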
Electrostatically driven 2D MEMS scanners oscillate resonantly about both axes, producing Lissajous trajectories of a digitally modulated laser beam reflected from the micromirror. A solid angle of about 0.02 sr is scanned by a 658 nm laser beam modulated with digital pulses at a repetition rate of up to 350 MHz. The reflected light is detected by an APD with a bandwidth of 80 MHz. The phase difference between the scanned laser light and the light reflected from an obstacle is analyzed by sub-Nyquist sampling, as sketched below. The FPGA-based electronics and the software for evaluating the distance and velocity of objects within the scanning range are presented. Furthermore, the measures taken to optimize the Lidar accuracy of about 1 mm and the dynamic range of up to 2 m are examined. First measurements demonstrating the capability of the system and the evaluation algorithms are discussed.
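The underlying phase-difference (AMCW) ranging principle can be sketched in a few lines; the modulation frequency and measured phase below are assumed values (75 MHz is chosen only because its unambiguous range, c / 2f, is about the 2 m reported), and the paper's actual sub-Nyquist sampling scheme is not reproduced.

```python
import numpy as np

# Phase-difference (AMCW) ranging: the target distance follows from the
# phase lag between emitted and received modulation. Values are assumed.

c = 299_792_458.0                       # speed of light [m/s]
f_mod = 75e6                            # assumed modulation frequency [Hz]
delta_phi = np.deg2rad(108.0)           # assumed measured phase lag [rad]

distance = c * delta_phi / (4 * np.pi * f_mod)
ambiguity = c / (2 * f_mod)             # unambiguous range at this frequency
print(f"distance = {distance:.3f} m (unambiguous up to {ambiguity:.2f} m)")
```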
The information capacity of a lossless image-forming system is a conserved property determined by two imaging parameters: the resolution and the field of view (FOV). Adaptive optics improves the former by manipulating the phase, or wavefront, in the pupil plane. Here we describe a homologous approach, namely adaptive field microscopy, which aims to enhance the FOV by controlling the phase, or defocus, in the focal plane. In deep tissue imaging, the useful FOV can be severely limited if the region of interest is buried in a thick sample and not perpendicular to the optic axis. One must then acquire many z-scans and reconstruct the volume by post-processing, which exposes the tissue to excessive radiation and is also time consuming. We demonstrate that the effective FOV can be substantially enhanced by dynamic control of the image plane. Specifically, the tilt of the image plane is continuously adjusted in situ to match the oblique orientation of the sample plane within the tissue. The utility of adaptive field microscopy is tested by imaging tissue with non-planar morphology. Ocular tissue of small animals was imaged by two-photon excited fluorescence. Our results show that adaptive field microscopy can utilize the full FOV. The freedom to adjust the image plane to account for geometrical variations of the sample could be extremely useful for 3D biological imaging. Furthermore, it could facilitate rapid surveillance of cellular features within deep tissue while avoiding photodamage, making it suitable for in vivo imaging.
A compact integral three-dimensional (3D) imaging device for capturing high-resolution 3D images has been developed that positions the lens array and image sensor close together. Unlike the conventional scheme, in which a camera lens projects the elemental images generated by the lens array onto the image sensor, the developed device combines the lens array and image sensor into one unit and uses no camera lens. Capturing high-resolution 3D images requires a high-resolution image sensor and a lens array composed of many elemental lenses; in the experimental setup, a CMOS image sensor circuit patterned with multiple exposures and a multiple lens array were used. Two types of optics were implemented for controlling the depth of the 3D images: a convex lens, suitable for compressing a relatively large object space, and an afocal lens array, suitable for capturing a relatively small object space without depth distortion. The objects captured with the imaging device and depth-control optics were reconstructed as 3D images using display equipment consisting of a liquid crystal panel and a lens array. The reconstructed images were found to have appropriate motion parallax.
Recently we introduced the notion of the "perceivable light field" (PLF) as an efficient tool for the analysis and design of three-dimensional (3D) displays. The PLF is used within a 3D display analysis approach that puts the viewer at the center of the model; that is, the requirements of the human visual system are first defined through the PLF and then back-propagated to the display device to evaluate its specifications. Here we use such an analysis to evaluate the information requirements that autostereoscopic 3D display devices must satisfy for ideal viewing conditions.
In this keynote address paper, we present an overview of our previously published work on the application of pattern recognition techniques and integral imaging for human gesture recognition.
A usual problem in 3D integral-imaging monitors is flipping, which happens when the microimages are seen through neighboring microlenses. This effect appears when, at high viewing angles, the light rays emitted by an elemental image do not pass through the corresponding microlens. A common solution to this problem is to insert a set of physical barriers to avoid this crosstalk. In this contribution we present a purely optical alternative to physical barriers. Our arrangement is based on the Köhler illumination concept and prevents the rays emitted by one microimage from impinging on the neighboring microlens. The proposed system does not use additional lenses to project the elemental images, so no optical aberrations are introduced.
We report a high-speed parallel phase-shifting digital holography system using a special-purpose computer for image reconstruction. Parallel phase-shifting digital holography is a technique capable of single-shot phase-shifting interferometry. It records the information of the multiple phase-shifted holograms required for phase-shifting interferometry in a single shot by using space-division multiplexing. The technique requires an image-reconstruction process for a huge number of recorded holograms; in particular, calculating the light propagation based on fast Fourier transforms and obtaining a motion picture of a dynamic, fast-moving object take a long time. We therefore designed a special-purpose computer to accelerate the image-reconstruction process of parallel phase-shifting digital holography, built on a VC707 evaluation kit (Xilinx Inc.), a field-programmable gate array (FPGA) board. We also recorded holograms consisting of 128 × 128 pixels at a frame rate of 180,000 frames per second with the constructed parallel phase-shifting digital holography system. By applying the developed computer to the recorded holograms, we confirmed that it can accelerate the image-reconstruction calculation by a factor of about 50 compared with a CPU.
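The reconstruction path that the FPGA accelerates can be sketched as follows, assuming the four phase shifts are space-division multiplexed as 2x2 superpixels (the actual layout and parameters of the system may differ): de-interleave, apply the four-step phase-shifting formula, then propagate with an FFT angular-spectrum kernel.

```python
import numpy as np

# Sketch of parallel phase-shifting reconstruction, assuming 2x2 superpixels.

def reconstruct(hologram, wavelength, pixel, z, ref_amp=1.0):
    I0   = hologram[0::2, 0::2]     # phase shift 0
    I90  = hologram[0::2, 1::2]     # phase shift pi/2
    I180 = hologram[1::2, 1::2]     # phase shift pi
    I270 = hologram[1::2, 0::2]     # phase shift 3*pi/2

    # four-step phase-shifting formula for the complex object field
    field = ((I0 - I180) + 1j * (I90 - I270)) / (4 * ref_amp)

    # angular-spectrum propagation over distance z (superpixel pitch 2*pixel)
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=2 * pixel)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    kernel = np.where(arg > 0, np.exp(1j * kz * z), 0)  # drop evanescent part
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

holo = np.random.rand(128, 128)          # stand-in for one recorded frame
image = reconstruct(holo, wavelength=532e-9, pixel=4e-6, z=0.1)
```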
In this paper, we introduce a mobile embedded system implemented for capturing stereo images based on two CMOS camera modules. We use WinCE as the operating system and capture the stereo images by using a device driver for the CMOS camera interface and DirectDraw API functions. We also comment on the GPU hardware and CUDA programming used to implement a 3D exaggeration algorithm that adjusts and synthesizes the disparity value of a region of interest (ROI) in real time. We comment on the aperture pattern used for deblurring the CMOS camera module, based on the Kirchhoff diffraction formula, and clarify why a sharper and clearer image can be obtained by blocking some portion of the aperture, i.e., by geometric sampling. The synthesized stereo image is monitored in real time on a shutter-glasses-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to verify the validity of the ROI-emphasizing effect.
Crosstalk in contact-type multiview 3-D displays is not an effective parameter for defining the quality of 3-D images. This is because the viewing zone in contact-type multiview 3-D displays presents images composed of an image piece from each view image in a predefined set of consecutive view images, except along the viewing-zone cross section. Even there, individual view images cannot be guaranteed to be seen separately, because the viewing region of each view image touches its neighboring viewing regions at a single point on each side, owing to its diamond-like shape. Furthermore, the size of each viewing region can become smaller than the viewer's pupil as the pixel size decreases and/or the number of view images increases, as in super-multiview imaging. Under such conditions, crosstalk loses its meaning.
In this paper, we use information from the light field to obtain a distribution map of the wavefront phase. This distribution is associated with changes in refractive index, which are relevant to the propagation of light through a heterogeneous or turbulent medium. By measuring the wavefront phase from a single shot, it is possible to deconvolve blurred images affected by the turbulence. If this deconvolution is applied to light fields obtained by plenoptic acquisition, the original optical resolution associated with the objective lens is restored; in other words, we are using a kind of superresolution technique that works properly even in the presence of turbulence. The wavefront phase can also be estimated from the defocused images associated with the light field; we present preliminary results using this approach.
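As an illustration of the deconvolution step, the sketch below builds a PSF from a pupil-phase map and applies a Wiener filter. The aperture, the synthetic defocus-like aberration, and the noise-to-signal ratio are assumptions; in the paper, the phase would come from the single-shot plenoptic measurement.

```python
import numpy as np

# Build a PSF from a (synthetic) pupil-phase map, then Wiener-deconvolve.

n = 256
Y, X = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = (X**2 + Y**2) < (n // 4)**2                 # circular aperture
phase = 6.0 * (X**2 + Y**2) / (n // 4)**2 * pupil   # synthetic aberration [rad]

# incoherent PSF = |Fourier transform of the complex pupil function|^2
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil * np.exp(1j * phase))))**2
psf /= psf.sum()

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    H = np.fft.fft2(np.fft.ifftshift(psf))          # optical transfer function
    W = np.conj(H) / (np.abs(H)**2 + nsr)           # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

blurred = np.random.rand(n, n)                      # stand-in turbulent frame
restored = wiener_deconvolve(blurred, psf)
```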
One of the main limitations of horizontal-parallax autostereoscopic displays is the loss of horizontal resolution due to the need to repartition the pixels of the display panel among the multiple views. Recently we showed that this problem can be alleviated by applying a color sub-pixel rendering technique [1]. Interpolated views are generated by down-sampling the panel pixels at the sub-pixel level, thus increasing the number of views. The method takes advantage of the lower acuity of the human eye to chromatic resolution. Here we supply further support for the technique by analyzing the spectra of the subsampled images.
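A toy version of sub-pixel view interleaving is sketched below; the unslanted sub-pixel-to-view map and the view count are assumptions for illustration, not the mapping of the cited technique.

```python
import numpy as np

# Toy sub-pixel multiview interleaving: each R, G, B sub-pixel column
# samples its own view instead of the whole pixel sharing one view,
# tripling the horizontal view sampling.

views = np.random.rand(8, 100, 300, 3)   # 8 rendered views, H x W x RGB
n_views, H, W, _ = views.shape

panel = np.empty((H, W, 3))
for y in range(H):
    for x in range(W):
        for c in range(3):               # c = 0, 1, 2 for the R, G, B stripes
            # the view index advances in thirds of a pixel along the row;
            # a slanted lenticular would add a y-dependent offset here
            panel[y, x, c] = views[(3 * x + c) % n_views, y, x, c]
```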
2D MEMS scanners deflecting laser light in two directions are used to illuminate a measurement volume spanning 40° in both the horizontal and vertical directions. This solid angle of about 0.02 sr is scanned by a 658 nm laser beam with an intensity of about 50 mW, modulated with digital pulses at a repetition rate of up to 350 MHz. The reflected light is detected through an objective by an APD with a bandwidth of 80 MHz. The phase difference between the scanned laser light and the light reflected from an object is analyzed by sub-Nyquist sampling, allowing the calculation of the object's distance and velocity. Presently, the achieved accuracy of the system is between 5 mm and 10 mm, and the measurement range is about 2 m. The experimental setup of the Lidar system is presented in detail, and first measurements demonstrating the capability of the system are discussed.
FTV (Free-viewpoint Television) is the ultimate 3DTV, with an infinite number of views, and ranks at the top of visual media. It enables users to view 3D scenes while freely changing the viewpoint. MPEG has been developing FTV standards since 2001. MVC (Multiview Video Coding) was the first phase of FTV, enabling efficient coding of multiview video. 3DV (3D Video) is the second phase of FTV, enabling efficient coding of multiview video and depth data for multiview displays. In 3DV, views in between linearly arranged cameras are synthesized from the multiview video and depth data. Based on recent developments in 3D technology, MPEG has started the third phase of FTV, targeting super-multiview and free-navigation applications. This new FTV standardization will achieve more flexible camera arrangements, more efficient coding, and new functionality. Users can enjoy very realistic 3D viewing and walk-through/fly-through experiences of 3D scenes in the super-multiview and free-navigation applications of FTV.
Brain-computer interfaces (BCIs) are intuitive systems for users to communicate with external electronic devices. The steady-state visual evoked potential (SSVEP) is one of the common inputs for BCI systems because of its easy detection and high information transfer rate. An advanced interactive platform integrated with liquid crystal displays is leading a trend to provide an alternative option, not only for the handicapped but also for the general public, to make our lives more convenient. Many SSVEP-based BCI systems have been studied in 2D environments; however, there is little literature on SSVEP-based BCI systems using 3D stimuli. 3D displays have potential in SSVEP-based BCI systems because they can offer vivid images, good presentation quality, various stimuli, and more entertainment. The purpose of this study was to investigate the effect of two important 3D factors (disparity and crosstalk) on SSVEPs. Twelve participants took part in the experiment with a patterned-retarder 3D display. The results show a significant difference (p < 0.05) between large and small disparity angles, and the signal-to-noise ratios (SNRs) of small disparity angles are higher than those of large disparity angles. Based on the results of 3D perception and SSVEP responses (SNR), 3D stimuli with smaller disparity and lower crosstalk are more suitable for applications. Furthermore, we may infer the 3D perception of users from their SSVEP responses and automatically adjust the disparity of 3D images in the future.
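The abstract does not spell out its SNR formula; a common definition is the power at the stimulus frequency divided by the mean power of neighboring bins, sketched here on a synthetic 12 Hz trial with an assumed sampling rate.

```python
import numpy as np

# SSVEP SNR as signal-bin power over mean neighboring-bin power (a common
# definition, assumed here), demonstrated on a synthetic trial.

def ssvep_snr(eeg, fs, f_stim, n_neighbors=10):
    power = np.abs(np.fft.rfft(eeg))**2
    freqs = np.fft.rfftfreq(len(eeg), d=1 / fs)
    k = int(np.argmin(np.abs(freqs - f_stim)))        # stimulus-frequency bin
    noise = np.r_[power[k - n_neighbors:k], power[k + 1:k + 1 + n_neighbors]]
    return power[k] / noise.mean()

fs = 250.0                                            # sampling rate [Hz]
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.randn(t.size)
print(f"SNR at 12 Hz: {ssvep_snr(eeg, fs, 12.0):.1f}")
```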
Developing a 3D display that can show true 3D images is very important and necessary. Holography has great potential to achieve this objective because it can actually reconstruct the recorded object in space through wavefront reconstruction. Furthermore, computer-generated holograms (CGHs) are used to overcome the major drawback of conventional holography, namely that the recording process is quite complicated and requires real objects. The reconstructed image, however, will be blurred and accompanied by unwanted light if only one phase-only spatial light modulator (PSLM) is used. Although using two PSLMs with the dual-phase modulation method (DPMM) can modulate the phase and amplitude information simultaneously and enhance the quality of the reconstructed image, it is hard to use in practical applications because of the extremely accurate calibration required between the two PSLMs. Therefore, the double-phase hologram (DPH) was proposed, which uses only one PSLM to modulate the phase and amplitude information simultaneously, making the reconstructed image more focused and eliminating the unwanted light.
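A minimal sketch of the double-phase decomposition that lets a single PSLM carry both amplitude and phase follows; the checkerboard interleaving is one common encoding choice (an assumption here), and the target field is synthetic.

```python
import numpy as np

# Double-phase decomposition: a field A*exp(i*phi) with A in [0, 1] equals
# the average of two pure phase terms, so one phase-only SLM can encode
# both amplitude and phase.

field = np.random.rand(64, 64) * np.exp(2j * np.pi * np.random.rand(64, 64))
A = np.abs(field) / np.abs(field).max()      # normalize amplitude to [0, 1]
phi = np.angle(field)

theta1 = phi + np.arccos(A)                  # the two phase components
theta2 = phi - np.arccos(A)

# interleave the two phase maps on a checkerboard for display on the PSLM
checker = (np.indices(A.shape).sum(axis=0) % 2).astype(bool)
dph = np.where(checker, theta1, theta2)

# sanity check: (exp(i*theta1) + exp(i*theta2)) / 2 reproduces the target
recon = (np.exp(1j * theta1) + np.exp(1j * theta2)) / 2
assert np.allclose(recon, A * np.exp(1j * phi))
```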
In this paper, we present a technique to generate an elemental image array matched to the display device for three-dimensional integral imaging. Experimental results show that our technique can accurately match different display formats and improve the display results.
In this paper, we present an overview of a color image authentication scheme based on multispectral photon-counting imaging (MPCI) and double random phase encoding (DRPE). The MPCI makes the image data sparse, and the DRPE turns the image into stationary white noise, both of which make intruder attacks difficult. In this method, the original RGB image is down-sampled into a Bayer image and then encrypted with DRPE. The encrypted image is photon-counted and transmitted over an internet channel. For image authentication, the decrypted Bayer image is interpolated into an RGB image with a demosaicing algorithm. Experimental results show that the decrypted image cannot be visually recognized under low light levels but can be verified with a nonlinear correlation algorithm.
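The DRPE step alone can be sketched as follows (Bayer down-sampling and photon counting are omitted); the masks and image are synthetic stand-ins.

```python
import numpy as np

# Double random phase encoding: one random phase mask in the input plane and
# one in the Fourier plane turn the image into stationary-white-noise-like
# data; decryption reverses the steps with the conjugate masks.

rng = np.random.default_rng(0)
img = rng.random((256, 256))                         # stand-in input image

mask1 = np.exp(2j * np.pi * rng.random(img.shape))   # input-plane mask
mask2 = np.exp(2j * np.pi * rng.random(img.shape))   # Fourier-plane mask

encrypted = np.fft.ifft2(np.fft.fft2(img * mask1) * mask2)

decrypted = np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(mask2)) * np.conj(mask1)
assert np.allclose(np.abs(decrypted), img)
```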
Conventionally, a 3D object reconstructed with the computational integral imaging technique consists of focused and off-focused areas. The off-focused points in the reconstructed object degrade high-level image analysis such as object classification, recognition, and tracking. Therefore, it is necessary to develop a method to remove the off-focused points before further object analysis. For each point in 3D space, we assume that its intensity values in all the 2D elemental images captured with the integral imaging system are similar. Consequently, each focused point on a reconstructed depth-slice image shares sample points with similar intensity values across the elemental images, while the sample points of an off-focused point have widely varying intensity values. If the variance of these sample points across the elemental images is larger than a pre-defined threshold, the corresponding point on the reconstructed depth-slice image can be classified as off-focused and removed. However, processing every point on the reconstructed image sequentially imposes a heavy computational burden, especially when multiple depth-slice images are reconstructed. In this paper, we review a method to reconstruct multiple depth-slice images containing only the focused parts in parallel using a graphics processing unit (GPU). Experimental results show that this method can reconstruct the multiple depth-slice images without off-focused points much faster on a GPU than on a CPU.
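A CPU sketch of the variance test is shown below, with toy shapes, shifts, and threshold (all assumptions); on a GPU, the same element-wise operations would map naturally onto one thread per reconstructed pixel.

```python
import numpy as np

# Variance test for one depth plane: back-project the elemental images,
# average them into the depth slice, and suppress pixels whose sample
# points disagree (off-focused points).

def remove_off_focus(elemental, shifts, threshold):
    """elemental: (K, H, W) stack of elemental images; shifts: K (dy, dx)
    integer shifts for one depth plane; returns the masked depth slice."""
    samples = np.stack([np.roll(e, s, axis=(0, 1))
                        for e, s in zip(elemental, shifts)])  # back-projection
    depth_slice = samples.mean(axis=0)       # ordinary reconstruction
    variance = samples.var(axis=0)           # spread of the sample points
    return np.where(variance < threshold, depth_slice, 0.0)

K, H, W = 16, 64, 64
elemental = np.random.rand(K, H, W)
shifts = [(k % 4, k // 4) for k in range(K)]  # toy shifts for one depth plane
focused = remove_off_focus(elemental, shifts, threshold=0.05)
```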
Metasurfaces offer an innovative approach to manipulating light with anomalous capabilities. We discuss the possibility of inserting a specially designed gradient metasurface into the pixel architecture of liquid crystal on silicon (LCOS) to optimize the diffraction efficiency of LCOS-based holography. Pixels in LCOS with feature sizes approaching the order of visible wavelengths can provide large diffraction angles; unfortunately, scaling down the pixel size reduces the efficiency of the desired first diffraction order. The metal-insulator-metal (MIM) structure serving as the unit cell of the metasurface consists of three layers: a subwavelength metal nanobrick with varying geometrical parameters and a continuous metal film, separated by an insulator layer. The unit cells in each pixel period exhibit a linear phase gradient. When illuminated by polarized incident light, the MIM structure, in which a magnetic resonance is created at a particular frequency, offers anomalous reflection with high efficiency and acts as a flat blazed grating. The light is thus diverted into the desired first diffraction order. The properties of candidate metals such as Au, Ag, and Al serving as the plasmonic material, and of suitable insulators, have been studied to configure the MIM structure accurately. Numerical investigations are carried out to observe the effects on the liquid crystal director distribution with TechWiz software and to obtain the relative diffraction efficiency using FDTD software. Compared with a conventional LCOS device, our proposed structure achieves an optimization of the diffraction efficiency.
Phase is an inherent characteristic of any wave field. Statistics show that roughly 25% of the information is encoded in the amplitude term and 75% of the information is in the phase term. Phase retrieval means acquiring the phase computationally from magnitude measurements, and it provides data for holographic display, 3D field reconstruction, X-ray crystallography, diffraction imaging, astronomical imaging, and many other applications. Mathematically, phase retrieval is an inverse problem subject to physical and computational constraints. Some recent algorithms use the principles of compressive sensing, such as PhaseLift, PhaseCut, and compressive phase retrieval; they formulate phase retrieval as finding the rank-one solution to a system of linear matrix equations, making the overall algorithm a convex program over n × n matrices. However, by "lifting" a vector problem to a matrix one, these methods incur a much higher computational cost. Furthermore, they use only intensity measurements and few physical constraints. In this paper, a new algorithm is proposed that combines the above convex optimization methods with the well-known iterative Fourier transform algorithm (IFTA). The IFTA iterates between the object domain and the spectral domain to enforce the physical information, and it converges quickly, as proven in many applications such as computer-generated holograms (CGHs). Here the output phase of the IFTA is treated as the initial guess for the convex optimization methods, and the reconstructed phase is then computed numerically using a modified TFOCS. Simulation results show that the combined algorithm increases the likelihood of successful recovery as well as improving the precision of the solution.
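A minimal IFTA (Gerchberg-Saxton-type) loop of the kind that could seed the convex solver is sketched below on synthetic magnitude data; the paper's actual constraints and its modified TFOCS stage are not reproduced here.

```python
import numpy as np

# Minimal IFTA loop: alternate between object and spectral domains,
# enforcing the known magnitude in each domain at every pass.

rng = np.random.default_rng(1)
true = rng.random((128, 128)) * np.exp(2j * np.pi * rng.random((128, 128)))
obj_mag = np.abs(true)                  # object-domain magnitude constraint
spec_mag = np.abs(np.fft.fft2(true))    # spectral-domain magnitude constraint

field = obj_mag * np.exp(2j * np.pi * rng.random(obj_mag.shape))  # random init
for _ in range(200):
    spec = np.fft.fft2(field)
    spec = spec_mag * np.exp(1j * np.angle(spec))    # keep phase, fix magnitude
    field = np.fft.ifft2(spec)
    field = obj_mag * np.exp(1j * np.angle(field))   # object-domain constraint

initial_phase = np.angle(field)  # would serve as the convex solver's warm start
```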
The phase carries details of the depth information of an optical wave field and is very important in many applications, such as optical field reconstruction and 3D display. However, optical waves oscillate too fast for detectors to record the intensity and phase directly and simultaneously. Phase retrieval technology and algorithms have therefore been the focus of intense research recently. Among the valuable algorithms, the transport-of-intensity equation (TIE) and angular spectrum iteration (ASI) are widely used in fields such as electron microscopy and X-ray imaging. Unfortunately, the former was originally derived for coherent illumination and cannot be directly applied to phase retrieval of a partially coherent light field under non-uniform illumination, while the ASI, derived from wave propagation, has its own shortcomings of iterative uncertainty and slow convergence. In this paper, a novel hybrid phase retrieval algorithm extending the TIE to partially coherent illumination is investigated for both uniform and non-uniform illumination. The algorithm uses a multi-plane ASI to exploit the physical constraints between the object domain and the spectral domain, together with the relationship between intensity and phase during wave propagation. The phase at the central image plane is calculated from three intensity images. This result is then treated as the initial value of the multi-plane ASI. Finally, the phase information at the object plane is acquired according to the reversibility of the optical path. This hybrid algorithm expands the applicability of the traditional TIE while improving the convergence rate of the ASI method.
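The uniform-intensity TIE inversion that such hybrid schemes typically start from can be sketched with an FFT Poisson solver, as below; the inputs are synthetic, and the multi-plane ASI refinement stage of the paper is not reproduced.

```python
import numpy as np

# FFT Poisson inversion of the TIE in the simplest (coherent, uniform
# intensity) case: laplacian(phi) = -(k / I0) * dI/dz.

def tie_phase(dI_dz, I0, wavelength, pixel):
    k = 2 * np.pi / wavelength
    f = np.fft.fftfreq(dI_dz.shape[0], d=pixel)
    FX, FY = np.meshgrid(f, f)
    denom = -4 * np.pi**2 * (FX**2 + FY**2)       # Fourier symbol of laplacian
    denom[0, 0] = 1.0                             # avoid division by zero at DC
    phi_hat = np.fft.fft2(-(k / I0) * dI_dz) / denom
    phi_hat[0, 0] = 0.0                           # piston term is undetermined
    return np.real(np.fft.ifft2(phi_hat))

dI_dz = np.random.randn(256, 256) * 1e-3          # stand-in axial derivative
phi0 = tie_phase(dI_dz, I0=1.0, wavelength=633e-9, pixel=5e-6)
```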
In this work we extend the traditional TIE setup, which retrieves the phase of a phase object through axial translation of the CCD, by employing a tunable lens (TL-TIE). The setup is further extended to 360° tomographic 3D reconstruction through illumination from multiple angles, realized by rotating the phase object. Finally, synchronization between the CCD and the tunable lens is implemented on reconfigurable hardware to automate the 360° tomographic 3D reconstruction process.
This paper presents a method to calculate a computer-generated hologram (CGH) of a real scene under natural light using a commercial light field camera, and shows the results of color reconstruction of the synthesized CGHs. The CGH calculation converts the four-dimensional light field captured with the light field camera into a complex amplitude distribution, which is then propagated to generate an interference pattern. For color reconstruction, we calculated three CGHs with red, green, and blue wavelengths and superposed the reconstructed red, green, and blue images to obtain reconstructed color images. We verified by numerical and optical reconstruction of the synthesized CGHs that color three-dimensional images were reconstructed.
In classical compressive holography (CH), which is based on the Gabor holography setup, two nonlinear terms are inherent in the intensity recorded by a 2D detector array: the DC term and the squared field term. The DC term (the term at the origin) can be eliminated by filtering the Fourier transform of the interference irradiance measurements with an appropriate high-pass filter near zero frequency. The nonlinearity caused by the squared field term is commonly neglected and modeled as an error term in the measurement. However, these assumptions are significantly limiting and degrade the reconstruction quality. In this paper, a novel scheme using the phase-shifting method is presented. To accurately recover the complex optical field produced by propagation from the object, free of the DC term and the squared field term, a very effective method for removing these two terms is introduced. The complex optical field of the 3D object and the complex optical field at the detector plane can be precisely related by a linear mapping model. The complex optical field at the detector plane is obtained by phase-shifting interferometry with multiple shots: it is extracted from the captured holograms using conventional four-step phase-shifting interferometry. From this complex optical field at the detector plane, including its amplitude and phase information, the complex optical field of the 3D object is reconstructed via an optimization procedure. Numerical results demonstrate the effectiveness of the proposed method.
A super multi-view display provides three-dimensional images by emitting many rays of different colors, depending on the direction, from each point on the display. It provides smooth motion parallax without special glasses, and the observer is expected to be free from the visual fatigue caused by the accommodation-vergence conflict. However, a huge number of pixels is required on the display device, because high-density rays are needed for good image quality and each ray requires a corresponding pixel. We propose a new method to reduce the required number of pixels by limiting the emitted rays to the region around the observer's pupils. The display is based on the lenticular method; since the rays should reach only the vicinity of the observer's pupils, a lenticular lens with a narrowed viewing-zone angle is used. However, due to the characteristics of the lenticular lens, the same image is seen repeatedly from different positions outside the viewing zone; these repetitions are called side lobes. Because of the side lobes, rays intended for one eye can enter the other eye. To suppress the side lobes, we propose illuminating the lenticular lens with directional light whose direction is changed to follow the observer's eyes. We implemented optical designs based on this technique and produced a prototype display. We carried out experiments considering changes of viewing angle and viewing distance, and the results confirmed the usefulness of the proposed display.
This paper describes the process and research that went into creating a set of 3D models to characterize a golf swing. The purpose of this work is to illustrate how a 3D scanner could be used for assessing athlete performance in sporting applications. In this case, introductory work has been performed to show how the scanner could be used to identify the errors a golfer makes in a swing. Multiple factors must be taken into account when assessing a golfer's swing, including the position and movement of the golfer's hands, arms, and feet, as well as the position of the club head and shaft of the golf club.
The objectives of this study were 1) to evaluate the accuracy of 3D scanners for measuring the body volume index of a subject, and 2) to apply the subject's body volume index for wellness assessment purposes.
Much equipment for enjoying 3D images, such as movie theaters and televisions, has been developed, so 3D video is now widely known as a familiar imaging technology. Displays presenting 3D images include eyewear-based, naked-eye, and HMD-type displays, which are used for different applications and locations. Transparent 3D displays, however, have not been widely studied. If a large transparent 3D display were realized, it would be useful for displaying 3D images overlaid on the real scene in applications such as road signs, shop windows, and screens in conference rooms. A previous study proposed producing a transparent 3D display using a special transparent screen and a number of projectors; however, many projectors are required for smooth motion parallax. In this paper, we propose a display that achieves transparency and a large display area by time-multiplexed projection from one or a small number of projectors onto an active screen. The active screen is composed of a number of vertically elongated small rotating mirrors. Stereoscopic viewing is realized by changing the projected image in synchronism with the scanning of the beam as the light is swept across the viewing zone. The display is also transparent, because the viewer can see through it when the mirrors are perpendicular to the viewer. We confirmed the validity of the proposed method by simulation.
One of the issues in holographic display is the presence of the zeroth order and the twin image, which degrade the quality of reconstructed objects. A common solution is to use an off-axis configuration. However, the spatial separation of the three contributions imposes constraints on the resolution and the size of the holograms that can be displayed. In addition, the available spatial light modulators (SLMs) have limited resolution and fill factor. Recently, different methods have been proposed to display complex information and thereby get rid of the twin image. One approach is to use a grating to combine the real and imaginary parts of the holographic data. It requires only one SLM, but the resolution is low because the SLM is divided in two to display the two components of the data. The grating period that should be used also depends strongly on the wavelength and the hologram size; as a result, the tolerance of the system is very low. Another method is to combine two SLMs. In this study, we used a polarizing beam splitter and a wave plate to exploit the polarization properties of light and combine the wavefronts coming from two SLMs. One SLM displayed the hologram while the second compensated the background noise arising from the diffusion of the input light by the pixels and the intrinsic periodic structure of the SLM. A key point is to align the two SLMs precisely so as to optimize the noise reduction without losing the object information.
As one of the most popular immersive virtual reality (VR) systems, the stereoscopic cave automatic virtual environment (CAVE) system typically consists of 4 to 6 sides of a 3 m by 3 m room made of rear-projected screens. While many endeavors have been made to reduce the size of the projection-based CAVE system, the issue of asthenopia caused by lengthy exposure to stereoscopic images in such a CAVE with a close viewing distance has seldom been tackled. In this paper, we propose a lightweight approach that utilizes a convex eyepiece to reduce the visual discomfort induced by stereoscopic vision. An empirical experiment was conducted to examine the feasibility of the convex eyepiece over a large depth of field (DOF) at close viewing distance, both objectively and subjectively. The results show the positive effects of the convex eyepiece on the relief of eyestrain.
This paper considers the impact of lighting and attire on the performance of a previously created low-cost 3D scanning system. It considers the effect of the lighting configuration and of the subject's clothing on the quality of the scans and on the number and types of objects that can be scanned. The experiments tested different types (colors and textures) of clothing, along with multiple lighting configurations, to assess which produced the best scans. This paper presents the results of this experimentation and, from it, makes generalizations about optimizing visible light scanner performance before concluding with a discussion of scanner efficacy.
Visible light 3D scanning offers the potential to non-invasively and nearly imperceptibly incorporate 3D imaging into the everyday world. This paper considers the various possible uses of visible light 3D scanning technology. It discusses multiple possible usage scenarios, including hospitals, security perimeter settings, and retail environments. The paper presents a framework for assessing the efficacy of visible light 3D scanning for a given application, and compares this to other scanning approaches such as those using blue light or lasers. It also discusses ethical and legal considerations relevant to real-world use and concludes by presenting a decision-making framework.