Disparity estimation is a highly complex and time-consuming process in multi-view video encoders. Since multiple views taken from a two-dimensional camera array need to be coded at every time instance, the complexity of the encoder plays an important role besides its rate-distortion performance. In previous papers we introduced a new frame type, the D (derived) frame, which exploits the strong geometrical correspondence between views, thereby reducing the complexity of the encoder. By employing D frames instead of some of the P frames in the prediction structure, a significant complexity gain can be achieved if the threshold value, which is the keystone element for adjusting complexity at the cost of quality and/or bit-rate, is selected wisely. In this work, a new adaptive method is presented that calculates the threshold value automatically from information already available during encoding. In this method, threshold values are generated for each block of each D frame to increase accuracy. The algorithm is applied to several image sets, and a 20.6% complexity gain is achieved using the automatically generated threshold values without compromising quality or bit-rate.
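The per-block adaptive threshold idea can be illustrated with a minimal sketch. All function names and the threshold rule (a scaled mean of neighboring block costs) are assumptions for illustration only, not the paper's actual algorithm:

```python
# Sketch of per-block adaptive thresholding for D frames.
# The cost model and threshold rule are hypothetical illustrations.

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def adaptive_threshold(neighbor_costs, scale=1.2):
    """Derive a block-local threshold from the costs of already-coded
    neighboring blocks (hypothetical rule: scaled mean cost)."""
    if not neighbor_costs:
        return float("inf")  # no context yet: always accept derived vector
    return scale * sum(neighbor_costs) / len(neighbor_costs)

def choose_vector(cur_block, derived_pred, neighbor_costs, full_search):
    """Accept the derived disparity vector when its prediction error is
    below the adaptive threshold; otherwise fall back to a full search."""
    cost = sad(cur_block, derived_pred)
    if cost <= adaptive_threshold(neighbor_costs):
        return "derived", cost       # cheap path: no disparity search
    return full_search(cur_block)    # expensive path: regular estimation
```

The complexity/quality trade-off lives entirely in the threshold: a larger value accepts more derived vectors (faster, possibly worse prediction), a smaller one falls back to full estimation more often.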
Micromirrors are a typical example of Micro-Electromechanical Systems (MEMS) with many applications including
optical scanners, optical switching, projection displays, etc. We have succeeded in producing MEMS micromirrors in a
SiGe structural layer, which can be used to realize CMOS-integrated MEMS structures. Several pixel designs were
simulated using COMSOL Multiphysics and subsequently verified in hardware. They differ in mirror size, hinge length
and number of attracting electrodes (two or four). One particular mirror design enables variable Pulse Width Modulation
(PWM) addressing. In this design, the mirror switches between two extreme states with a variable duty cycle determined
by two generic high voltage signals and two CMOS-compatible pixel-specific DC voltages applied to the four attracting
electrodes. The processed arrays were subjected to Laser Doppler Vibrometer (LDV) measurements in order to verify the
simulation results. The simulated and measured pull-in voltages are compared for 8, 10 and 15 μm mirrors. The
agreement between simulation and measurement lies within expectations, which is an encouraging result for future work.
Proc. SPIE. 7690, Three-Dimensional Imaging, Visualization, and Display 2010 and Display Technologies and Applications for Defense, Security, and Avionics IV
KEYWORDS: Light emitting diodes, Modulation, Visualization, Liquid crystal on silicon, Projection systems, Digital micromirror devices, 3D displays, LED displays, 3D visualizations, 3D image processing
LED-based projection systems have several interesting features: extended color-gamut, long lifetime, robustness
and a fast turn-on time. However, the possibility to develop compact projectors remains the most important
driving force to investigate LED projection. This is related to the limited light output of LED projectors,
which is a consequence of the relatively low luminance of LEDs compared to high-intensity discharge lamps. We
have investigated several LED projection architectures for the development of new 3D visualization displays.
Polarization-based stereoscopic projection displays are often implemented using two identical projectors with
passive polarizers at the output of their projection lens. We have designed and built a prototype of a stereoscopic
projection system that incorporates the functionality of both projectors. The system uses high-resolution
liquid-crystal-on-silicon light valves and an illumination system with LEDs. The possibility to add an extra LED
illumination channel was also investigated for this optical configuration. Multiview projection displays allow the
visualization of 3D images for multiple viewers without the need to wear special eyeglasses. Systems with a large
number of viewing zones have already been demonstrated. Such systems often use multiple projection engines.
We have investigated a projection architecture that uses only one digital micromirror device and an LED-based
illumination system to create multiple viewing zones. The system is based on the time-sequential modulation
of the different images for each viewing zone and a special projection screen with micro-optical features. We
analyzed the limitations of an LED-based illumination for the investigated stereoscopic and multiview projection
systems and discussed the potential of a laser-based illumination.
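The étendue argument behind the limited light output of LED projectors can be made concrete with a short sketch. All dimensions, the f-number, and the luminance figure are hypothetical illustrations, not measured values from the prototypes:

```python
import math

def etendue(area_m2, half_angle_rad, n=1.0):
    """Etendue of a uniform source radiating into a cone of the given half
    angle: E = n^2 * pi * A * sin^2(theta)."""
    return n**2 * math.pi * area_m2 * math.sin(half_angle_rad)**2

def max_flux_lm(luminance_cd_m2, system_etendue):
    """Upper bound on the lumens a projector can collect from a Lambertian
    source: flux <= L * E, since no passive optic can decrease etendue."""
    return luminance_cd_m2 * system_etendue

# Hypothetical numbers: a 2 mm x 2 mm Lambertian LED die (theta = 90 deg)
# versus the acceptance etendue of a 10 mm x 6 mm light valve at f/2.4.
led_e = etendue(2e-3 * 2e-3, math.pi / 2)
panel_e = etendue(10e-3 * 6e-3, math.asin(1 / (2 * 2.4)))
```

With these numbers the LED étendue exceeds the acceptance étendue of the light valve, so part of the emitted light is geometrically lost; the collected flux is then bounded by the source luminance times the system étendue, which is why high-luminance sources (PhlatLight LEDs, lasers) matter more than raw lumen output.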
We present compact illumination engines for DMD projection systems making use of light-emitting diodes (LEDs) as light sources. The impact of uniformization optics and color-combining dichroic filters is investigated with respect to the color uniformity on the screen. PhlatLight LEDs are considered as light sources because of their superior luminance levels. In addition, PhotonVacuum optics are used to collimate and transform the emitted LED light distribution. The optical engines are simulated with advanced non-sequential ray tracing software. They are evaluated on the basis of étendue efficiency, compactness and color uniformity of the projected images. Color plots are used as tools to investigate the simulated color gradients in the image. To validate our simulation models, we have built a compact prototype LED projector. Its color-related specifications are compared with the simulated values.
We present two multiview rear projection concepts that use only one projector with a digital micromirror device
light modulator. The first concept is based on time sequentially illuminating the light modulator from different
directions. Each illumination direction reflects on the light modulator toward a different viewing zone. We
designed an illumination system that generates all distinct illumination beams and a lens system integrated
into the projection screen to enlarge the viewing angles. The latter is crucial since the extent of the
viewing zones decreases in inverse proportion to the size of the projected image. A second concept is based on
a specific projection screen architecture that steers images into different horizontal directions. In this way, the
entire acceptance étendue of the projection system can be used for every image. This is achieved by moving a
double-sided lenticular sheet horizontally with respect to a sheet of microlenses with a square footprint. Both
concepts are investigated with advanced optical simulations.
Disparity estimation can be used for eliminating redundancies between different views of an object or a scene recorded
by an array of cameras which are arranged both horizontally and vertically. However, estimation of the disparity vectors
is a highly time-consuming process that takes most of the operation time of multi-view video coding. Therefore,
either the amount of data to be processed or the complexity of the coding method needs to be decreased in order to
encode the multi-view video in a reasonable time. It can be proven that the disparities of a point in the scene
photographed by equidistantly spaced cameras are equal. Since there is a strong geometrical correlation between the
disparity vectors, the disparity vector of a view can, for most blocks, be derived from that of another view or views. A new
algorithm is presented that reduces the amount of processing time needed for calculating the disparity vectors of each
neighboring view except the principal ones. Different schemes are proposed for 3×3 views and they are applied to
several image sequences taken from a camera-array. The experimental results show that the proposed schemes yield
better results than the reference scheme while preserving the image quality and the amount of encoded data.
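The geometric relation the derivation relies on can be sketched in a few lines. For parallel cameras the disparity of a point at depth Z is d = f·B/Z, so equal baselines give equal per-step disparities, and a vector toward a more distant view is just a scaled copy. The names and numbers below are illustrative; the actual 3×3 schemes are more involved:

```python
def disparity(focal_px, baseline_m, depth_m):
    """Horizontal disparity (in pixels) of a scene point at depth Z between
    two parallel cameras with baseline B: d = f * B / Z."""
    return focal_px * baseline_m / depth_m

def derive_vector(principal_vec, cam_steps):
    """For an equidistant parallel camera array, the per-step disparity of a
    point is constant, so the vector toward a view `cam_steps` camera
    spacings away is a scaled copy of the principal disparity vector."""
    dx, dy = principal_vec
    return (dx * cam_steps, dy * cam_steps)

# Illustrative numbers: cameras 5 cm apart, focal length 1000 px, point at 2 m.
d1 = disparity(1000, 0.05, 2.0)              # disparity over one spacing
d2 = disparity(1000, 0.10, 2.0)              # two spacings -> doubled
```

This is why only the principal views need a full disparity search: once their vectors are known, the remaining views inherit scaled copies, with a refinement step only where the derived vector predicts poorly.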