The wavefront distortions in a digitally recorded HOE that has the properties of a spherical mirror are measured with a Shack-Hartmann wavefront sensor to estimate its functional performance. The performance is compared with that of the images reconstructed from Fresnel zone patterns displayed on DMDs with different pixel shapes, and with an analog-type HOE and a spherical mirror. The distortion distributions from these samples do not differ greatly from each other. All reveal spherical-mirror behavior, though the mirror quality is not the same. This indicates that the HOE really works as a spherical mirror. However, the focused beam size of the HOE is much bigger than those of the other samples.
A three-dimensional model was generated from an object captured by an RGB-D camera, and the generated three-dimensional model was arranged in a computer. The model was then imaged by a two-dimensional camera array with a set fixation viewpoint, producing a multi-view stereoscopic image. When setting the pick-up parameters of the multi-view stereoscopic image, the region that minimizes the spatial distortion in the two-dimensional camera array was calculated beforehand. A pixel-position conversion was applied to the multi-view stereoscopic image taken with the calculated parameters, the elemental images were generated on the LCD, and the three-dimensional model of the external object could be displayed by integral photography.
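The pixel-position conversion from a multi-view image set to elemental images can be sketched as follows. This is an illustrative sketch only, not the authors' code: the function name, the index order, and the assumption that the pixel behind lens (m, n) at intra-lens offset (u, v) is taken from view (u, v) at image coordinate (m, n) are all simplifications for illustration.

```python
import numpy as np

def views_to_elemental(views):
    """Hypothetical multi-view to elemental-image conversion.

    views: array of shape (U, V, M, N), one MxN image per (u, v) viewpoint.
    Returns an elemental-image array of shape (M*U, N*V), where each lens
    covers a UxV block of pixels, one pixel taken from each view.
    """
    U, V, M, N = views.shape
    out = np.empty((M * U, N * V), dtype=views.dtype)
    for u in range(U):
        for v in range(V):
            # Pixel (u, v) of every lens block comes from view (u, v).
            out[u::U, v::V] = views[u, v]
    return out
```

The striding assignment places one pixel from each view behind every lens, which is the essence of generating an elemental image from a multi-view capture.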
A design concept for a goggle-type HMD (Head-Mounted Display) capable of automatically adjusting to the user's interocular distance is introduced. A linear motor is employed for each of the left and right pupillary-distance controls, based on measurement of the interocular distance with a micro-camera located on top of the micro-projector for each eye. A half mirror for each eye couples the projector/camera pair to the corresponding eye. Each camera measures its corresponding eye's pupil with high accuracy under infrared illumination located near the camera. The controllable distance range is 55 mm to 75 mm. The maximum travel distance of each linear motor carrying the four optical components is 10 mm.
The reconstructed image from digital holography is laden with many distortions. The main cause of these distortions is known to be the finite size of the pixels in the display panel/chip. Due to this finite size, the starting position of the reconstructed rays in each pixel can be anywhere in the pixel. Hence the starting position can differ from the recording-beam position, which is usually taken as the center of each pixel. This difference means that the reconstructed rays are no longer the phase-conjugate rays of their corresponding recording rays; their wavefronts are somewhat distorted. To estimate these wavefront distortions, a Shack-Hartmann wavefront sensor is placed in the path of the reconstructed beam. The phase distribution obtained with the sensor reveals that the distortion increases with pixel size and with the number of reconstructed image points, as expected. This result indicates that the sensor is a reasonable means of estimating the distortions in the reconstructed image. The same sensor is also used to estimate the functional performance of holographic optical elements for image projection.
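The principle behind the Shack-Hartmann measurement can be sketched as follows: each lenslet's focal-spot displacement, divided by the lenslet focal length, gives the local wavefront slope, and integrating the slopes over the lenslet pitch recovers the relative wavefront height. This is a minimal one-dimensional sketch of the standard zonal method, not the sensor's actual processing; all names and the example numbers are illustrative assumptions.

```python
import numpy as np

def local_slopes(spot_xy, ref_xy, focal_length_m):
    """Local wavefront slopes (rad) from measured vs. reference spot centroids (m)."""
    return (np.asarray(spot_xy) - np.asarray(ref_xy)) / focal_length_m

def reconstruct_1d(slopes_x, pitch_m):
    """Simple zonal reconstruction along one row of lenslets:
    cumulatively integrate the slopes over the lenslet pitch."""
    return np.concatenate(([0.0], np.cumsum(slopes_x) * pitch_m))

# Example: a uniform tilt of 1 mrad across 5 lenslets of 150 um pitch
# yields a linear wavefront ramp.
w = reconstruct_1d(np.full(5, 1e-3), 150e-6)
```

A uniform slope produces a linear ramp, while a defocused (spherical) wavefront would produce linearly varying slopes, which is how the sensor distinguishes tilt from focus error.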
Multi-view stereoscopic images were produced via the pick-up method for an object set in the computer, and integral photography was generated from these multi-view stereoscopic images. When the multi-view stereoscopic images were taken, the optical axis of each camera in the array was aligned with one point in front of the camera array. A calculation method was derived for the depth position and width of the object displayed by the integral photography generated with this method. Based on the derived calculation method, the distortion in the displayed and reproduced depth position and width of the object in the prototyped integral photography was considered.
The accommodation and convergence responses in a light field display that can provide up to 8 images to each eye of viewers are investigated. The increase in DOF (Depth of Field) with increasing number of projected images is verified for both monocular and binocular viewing. Seven subjects with visual acuity greater than 1.0 show that their responses can match their real-object responses as the number of images increases to 7 and more, though there are distinct differences between objects. The matching performance of binocular viewing is more stable than that of monocular viewing when the number of images is less than 6, but the response stability of the accommodation increases as the number exceeds 7.
Color moirés induced in contact-type multiview three-dimensional and light-field imaging are reviewed, slanted color moirés are introduced, and the reason why they become invisible as the slanting angle increases is explained. The color moirés in this imaging are induced by the structural uniqueness of the imaging, i.e., viewing-zone-forming optics (VZFO) on the display panel. The moirés behave differently from those produced by the beating effect: (1) they are basically chirped; (2) their fringe numbers and phases vary with the viewer's position and viewing angle at a given viewing distance; (3) the pattern period of the VZFO is at least several times that of the pixel pattern; and (4) they are colored. The color moirés can hardly be eliminated because they are induced structurally, but they can be minimized either by reducing the regularity of the pixel pattern with a diffuser between the panel and the VZFO or by aligning the VZFO's pattern at a certain slanting angle to the pixel pattern in the panel.
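The slanting-angle argument can be illustrated with the classic two-grating moiré period formula. This is a hedged sketch under the simplifying assumption of two ideal line gratings (the paper's chirped moirés are more complex): for periods p1 and p2 at relative slant angle theta, the moiré period shrinks as theta grows, so the fringes eventually become too fine to resolve.

```python
import numpy as np

def moire_period(p1, p2, theta_rad):
    """Moire fringe period of two line gratings with periods p1, p2
    and relative slant angle theta (classic two-grating formula).
    As theta increases, the period shrinks and the fringes fade
    below the eye's resolution limit."""
    return p1 * p2 / np.sqrt(p1**2 + p2**2 - 2 * p1 * p2 * np.cos(theta_rad))
```

For equal periods the formula reduces to p / (2 sin(theta/2)), which diverges at zero slant (coarse, highly visible fringes) and falls toward the grating period itself at large slant, consistent with the disappearance of the slanted moirés.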
A simulator that can test the visual perception response of light field displays is introduced. The simulator can provide up to 8 view images to each eye simultaneously to test the differences between different numbers of view images under the supermultiview condition. The images pass through a 4 mm wide window located at the pupil plane of each eye. Since each view image has its own slot in the window, each image enters the eye separately, without overlapping other images. The simulator shows that the vergence response of viewers' eyes to an image at a certain distance is closer to that for a real object at the same distance with 4 views than with 2 views. This indicates that the focusable depth range will increase further as the number of different view images increases.
An aperture-sharing camera for acquiring multiview images is introduced. The camera is built from a mirrorless camera and a high-speed LC shutter array located at the entrance pupil of the camera's objective, dividing the pupil into a number of sections of equal size. The LC shutters in the array are opened one at a time in synchronization with the camera shutter. The images from neighboring shutters reveal a constant disparity between them. The disparity between the images matches closely with that calculated from theory and is proportional to the distance of each LC shutter from the camera's optical axis.
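The reported proportionality can be illustrated with the standard thin-lens sub-aperture model, which is an assumption here (the abstract does not state its exact theory): two sub-apertures separated by baseline b see a point at depth z shifted on the sensor by d = b * v0 * (1/z - 1/z0), where z0 is the focused distance and v0 the corresponding image distance. The disparity is linear in b, i.e., in each shutter's distance from the optical axis.

```python
def sub_aperture_disparity(b_m, f_m, z0_m, z_m):
    """Sensor-plane disparity (m) between two sub-aperture images under a
    thin-lens model: baseline b, focal length f, focused distance z0,
    object distance z. Points on the focused plane have zero disparity."""
    v0 = 1.0 / (1.0 / f_m - 1.0 / z0_m)  # image distance of the focused plane
    return b_m * v0 * (1.0 / z_m - 1.0 / z0_m)
```

In this model the disparity vanishes on the focused plane and scales linearly with the sub-aperture separation, which is consistent with the constant disparity observed between neighboring shutters.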
The resolution of the reconstructed image from a hologram displayed on a DMD is measured with light field images taken along the propagation direction of the reconstructed image. The light field images reveal that point and line images suffer strong astigmatism, and that the focusing distance differs for lines of different directions, which is also a form of astigmatism. The focusing distance of the reconstructed image is shorter than that of the object. Two lines in the transverse direction are resolved when the gap between them is around 16 pixels of the DMD in use. However, resolution in the depth direction is difficult to estimate due to the depth of focus of each line. Due to the astigmatism, the reconstructed image of a square appears as a rectangle or a rhombus.
A 3D display is generally designed to show a stereoscopic 3D image to a viewer at the center position of the display. However, some interactive 3D applications, such as imaging demonstrations, need to serve multiple viewers, each with their own stereoscopic image. In such cases, a display panel laid on a table is more convenient for multiple viewers. In this paper, we introduce a table-top stereoscopic display that has the potential to support this kind of interactive 3D technology. The display system enables two viewers to see different images simultaneously on the table-top display, and each viewer to see stereoscopic images on it. The display has a first optical sheet that lets multiple viewers see their own images and a second optical sheet that lets them see stereoscopic images. We use a commercial LCD display, design the first optical sheet so that the two viewers each see their own image, and design the second optical sheet so that each viewer sees their own stereoscopic image. The viewing zone of our display system is designed so that viewers from children to adults can easily see the three-dimensional stereoscopic images. We expect that our table-top 3D stereoscopic display system can be applied to interactive 3D display applications in the near future.
The central and side viewing zones of the pixel-cell based and elemental-image based contact-type multiview 3-D imaging methods can be combined with the viewing zones between them to form a bigger viewing zone, i.e., a combined viewing zone. The combined viewing zone of the elemental-image based method has the same features as the viewing region in front of the viewing-zone cross-section in that of the pixel-cell based method. The combined viewing zone of the pixel-cell based method has almost twice as many regions for viewing differently composed images, each formed with one pixel from each of the pixel cells/elemental images in the display panel. The front and rear viewing regions in the combined viewing zone of the pixel-cell based method are symmetrically related. The measured light intensity distributions support these facts.
In general, glasses-free three-dimensional stereoscopic display systems should take human factors into account. Human factors include crosstalk, motion parallax, display type, lighting, age, unknown aspects of human-factors issues, and user experience. Among these, crosstalk is particularly important because it reduces the 3D effect and induces eye fatigue or dizziness. For these reasons, we considered a method of reducing crosstalk in three-dimensional stereoscopic display systems. In this paper, we propose a crosstalk-reduction method using a lenticular lens. The optical rays from the projection optical system are converted into the viewing-zone shape by the convolution of two apertures. Under this condition, we can minimize and control the beam width through the optical properties of the lenticular lens (refractive index, pitch, thickness, radius of curvature) and of the projector (projection distance, optical features). In this process, a Gaussian-type distribution is converted into a rectangular-type distribution. The reduction in beam width reduces the crosstalk, and this was verified using the lenticular lens.
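The aperture-convolution model can be sketched numerically. This is an illustrative sketch, not the paper's actual optical simulation: two ideal rectangular apertures (stand-ins for the projector exit aperture and the lenticular-lens aperture, with assumed widths) are convolved, and narrowing either aperture narrows the resulting viewing-zone profile, which is the basis of the claimed crosstalk reduction.

```python
import numpy as np

def rect(x, width):
    """Ideal rectangular aperture of the given width, centered at zero."""
    return (np.abs(x) <= width / 2).astype(float)

# Convolve two apertures of (assumed) widths 2 and 1 on a fine grid;
# the dx factor approximates the continuous convolution integral.
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
profile = np.convolve(rect(x, 2.0), rect(x, 1.0), mode="same") * dx
# The result is a trapezoid of full width 3 (sum of the aperture widths);
# shrinking either aperture shrinks the beam profile and hence the overlap
# between neighboring viewing zones.
```

The same construction explains the Gaussian-to-rectangular shaping mentioned above: when one aperture is much narrower than the other, the convolution approaches the wider aperture's rectangular shape.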
This paper addresses the registration and the fusion techniques between passive millimeter wave (MMW) and visual
images for concealed object detection. The passive MMW imaging system detects concealed objects such as metal and
man-made objects as well as small liquid and gel containers. The registration and fusion processes are required to
combine information from both the visual and MMW images. The registration process is composed of feature extraction
and matching stages. The body areas in the two images are adjusted to each other in scale, rotation, and location. The image
fusion method is based on the discrete wavelet transform and a fusion rule that emphasizes the person's identity and the
hidden object together. The experimental and simulation results show that the proposed technique can detect a concealed
object and fuse the two different types of images in a fully automated way.
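A common form of DWT-based fusion can be sketched as follows. This is a hedged sketch, not the paper's exact transform or fusion rule: it uses a one-level 2-D Haar decomposition, averages the approximation bands, and takes the larger-magnitude coefficient in each detail band, so that MMW hot-spots and visual edges both survive in the fused image.

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar decomposition (approximation + 3 detail bands)."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2(a, h, v, d):
    """Inverse of haar2: reassemble the image from its four bands."""
    out = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a - h + v - d
    out[1::2, 0::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def fuse(img1, img2):
    """Average the approximation bands; keep the larger-magnitude detail
    coefficient (a simple max-abs fusion rule)."""
    a1, h1, v1, d1 = haar2(img1)
    a2, h2, v2, d2 = haar2(img2)
    pick = lambda c1, c2: np.where(np.abs(c1) >= np.abs(c2), c1, c2)
    return ihaar2((a1 + a2) / 2, pick(h1, h2), pick(v1, v2), pick(d1, d2))
```

The max-abs rule on the detail bands is what lets sharp structure from either modality dominate locally, which matches the stated goal of preserving the person's identity and the hidden object together.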
A focal plane detector array in a millimeter wave imaging system can be used to acquire multiview
images in the millimeter wave band. Two focal plane detectors separated by 8 mm are used to obtain a
stereoscopic image pair of a scene. The pair reveals a good depth sense, though its resolution is very low,
and enables the distances of objects in the scene to be estimated with reasonable accuracy.
Keywords: millimeter wave imaging system, parabolic antenna, stereoscopic image pair, focal plane
detector array, depth sense, object distance.
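The distance estimation can be sketched with the standard stereo depth relation, which is assumed here since the abstract does not state its exact formula: with baseline b, effective focal length f, and measured disparity d (all in consistent units), the object distance is z = f * b / d.

```python
def depth_from_disparity(f_m, baseline_m, disparity_m):
    """Standard stereo depth relation z = f * b / d (thin-lens assumption).
    For the 8 mm detector baseline reported above, a smaller measured
    disparity corresponds to a more distant object."""
    return f_m * baseline_m / disparity_m

# Illustrative numbers only: an assumed 0.5 m effective focal length,
# the 8 mm baseline, and a 1 mm measured disparity give a 4 m distance.
z = depth_from_disparity(0.5, 0.008, 0.001)
```

The very low resolution of the MMW pair limits how finely d can be measured, which bounds the achievable depth accuracy.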