An eye-box expansion method based on merging a waveguide with a holographic optical element (HOE) is presented. Using a waveguide with a refractive index of 1.7, a wide field of view (FoV) of up to 60° is achieved, and full color with this wide FoV is obtained using two waveguides. A projection optical system based on the Scheimpflug principle is proposed and designed to compensate for the aberrations of the large-scale off-axis HOE. To enhance image quality, the projection system is simulated in detail, and the grating pitch and alignment are calculated to enlarge the eye-box and improve uniformity.
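The abstract does not give the pitch calculation itself; as a rough illustration, the first-order grating equation can be rearranged for the pitch. The waveguide index (1.7) is taken from the abstract, while the wavelength and angles below are hypothetical.

```python
import math

def grating_pitch(wavelength_nm, theta_in_deg, theta_d_deg,
                  n_in=1.0, n_wg=1.7, order=1):
    """Grating equation solved for the pitch:
    n_wg*sin(theta_d) - n_in*sin(theta_in) = order * lambda / pitch."""
    lhs = n_wg * math.sin(math.radians(theta_d_deg)) \
        - n_in * math.sin(math.radians(theta_in_deg))
    return order * wavelength_nm / lhs  # pitch in nm

# hypothetical numbers: green light coupled from normal incidence
# into a 50-degree bounce angle inside the n = 1.7 waveguide
print(grating_pitch(532, 0.0, 50.0))   # roughly 409 nm pitch
```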
A coherent backlight unit (C-BLU) using diffractive optical elements (DOEs) for a full-color flat-panel holographic display is proposed. The unit is composed of two DOEs imprinted on the same glass substrate. The illumination area of the backlight is 250 mm x 130 mm and the thickness is 2.2 mm, which is slim compared with conventional coherent backlight units for holographic display systems. In experiments, the total efficiency is measured as 0.8% at red (638 nm), 3.9% at green (520 nm), and 3.4% at blue (473 nm). As a result, a 10-inch full-color holographic display with 4K resolution is obtained.
A novel design for compact augmented reality (AR) glasses that use a holographic optical element (HOE) as a combiner is presented. A wide field of view (FoV) larger than 90°, full color, and a high contrast ratio (CR) are achieved with a single-layer HOE that is 25 μm thick. To make the AR glasses with the HOE combiner compact, a combination of optical lenses is proposed. The design simultaneously accounts for the chromatic aberration and astigmatism caused by the highly off-axis projection of the image onto the HOE and for the precise wavefront reproduction that maximizes the efficiency of the HOE. Geometrical image distortion is corrected by an image pre-distortion algorithm, and interpupillary distance (IPD) adjustment is applied to compensate for the small eye box. Based on the design, a wearable prototype is introduced. In experiments on both a benchtop setup and the prototype, a large image with a 150-inch diagonal is displayed at a distance of 2 m.
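The pre-distortion algorithm itself is not detailed in the abstract. As a minimal sketch, assuming the geometric distortion of the off-axis projection can be approximated by a homography H measured on the benchtop, the source image can be pre-warped with the inverse mapping so that the distortion cancels after projection.

```python
import cv2
import numpy as np

def predistort(image, H):
    """Pre-warp the source image with the inverse of the measured
    distortion homography H (an assumption, not the paper's model),
    so that projecting through optics approximated by H yields an
    undistorted image."""
    h, w = image.shape[:2]
    return cv2.warpPerspective(image, np.linalg.inv(H), (w, h))

# H would be estimated from benchtop measurements, for example with
# cv2.findHomography(distorted_points, ideal_points)
```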
We propose a coherent backlight unit (BLU) using holographic optical elements (HOEs) for a full-color flat-panel holographic display. The HOE BLU consists of two reflective HOEs that change the optical beam path and shape by diffraction. The diverging incident beam is transformed by HOE 1 (H1) into a collimated beam with a very small diffraction angle (7.5°) in order to illuminate the whole display. This collimated beam is converged by HOE 2 (H2) to a point at a distance from the glass substrate. As a result, the diverging incident beam is converted by H1 and H2 into a point light. When a high-resolution spatial light modulator (SLM) displaying a computer-generated hologram (CGH) is illuminated by the HOE BLU, the hologram image is displayed at a viewpoint near the focal point. Using the proposed design, we fabricated a full-color HOE BLU for a 5.5-inch flat-panel holographic display. An HOE at least 5.5 inches in size is required to illuminate the whole panel, so we recorded a 150 mm x 90 mm HOE on a 10 mm thick glass substrate. This HOE BLU exhibits a total efficiency of 8.0% at red (660 nm), 7.7% at green (532 nm), and 3.2% at blue (460 nm), using recording conditions optimized for each wavelength. Finally, a bright full-color hologram image was achieved.
As an ultrasound wave propagates through the human body, its velocity and attenuation change from region to region, so the shape of the point spread function (PSF) varies spatially. PSF estimation is an ill-posed problem and is rarely error free; the resulting estimation errors leave the image over-blurred by sidelobe artifacts. For commercialization of ultrasound deconvolution, robust image deconvolution without artifacts is essential. Many minimum variance (MV) beamformer algorithms exist; they are robust to noise and efficiently provide high resolution. We treat the channel data as image pixels and present a new spatially varying MV blending scheme for the deconvolved images in the image-processing domain. By stochastically blending the deconvolved images, we obtain high-resolution results that sufficiently suppress blur artifacts even when the input deconvolved images contain restoration errors. We verify the algorithm on real data; in all cases, the artifacts are suppressed and the proposed method shows the highest resolution among the deconvolution methods compared.
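The abstract does not spell out the blending rule. The sketch below illustrates one simple spatially varying blend in which, at each pixel, candidate deconvolved images are weighted inversely to their local variance, loosely mirroring the minimum-variance idea; the window size and the variance-based weighting are assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mv_blend(deconvolved_stack, win=9, eps=1e-6):
    """Blend several deconvolved images (list or array of shape [K, H, W])
    with per-pixel weights inversely proportional to the local variance,
    so pixels where a restoration fluctuates strongly contribute less."""
    images = np.stack(deconvolved_stack).astype(np.float64)
    weights = []
    for img in images:
        mean = uniform_filter(img, win)
        var = uniform_filter(img * img, win) - mean * mean
        weights.append(1.0 / (var + eps))
    weights = np.stack(weights)
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * images).sum(axis=0)
```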
This paper proposes realistic fetus skin color processing using a 2D color map and a tone mapping function (TMF) for ultrasound volume rendering. The contributions of this paper are a 2D color map generated from a gamut model of skin color and a TMF that depends on the lighting position. First, the gamut model of fetus skin color is calculated from the color distribution of baby images, and the 2D color map is created from this gamut model for tone mapping during ray casting. For the translucent effect, a 2D color map with inverted lightness is also generated. Second, to enhance the contrast of rendered images, the luminance, color, and tone-curve TMF parameters are varied using a 2D Gaussian function that depends on the lighting position. The experimental results demonstrate that the proposed method achieves more realistic skin color reproduction than the conventional method.
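The exact TMF parameterization is not given in the abstract. The sketch below only illustrates the idea of modulating a tone curve with a 2D Gaussian weight centered on the lighting position; the screen-space light position, Gaussian width, and gamma-style tone curve are all assumptions.

```python
import numpy as np

def gaussian_weight(x, y, light_x, light_y, sigma=0.3):
    """2D Gaussian falloff around the (normalized) lighting position."""
    return np.exp(-((x - light_x) ** 2 + (y - light_y) ** 2) / (2 * sigma ** 2))

def tone_map(luminance, x, y, light_x, light_y):
    """Boost contrast near the light: the Gaussian weight shifts the
    tone-curve exponent between a flat and a steeper curve (assumed form)."""
    w = gaussian_weight(x, y, light_x, light_y)
    gamma = 1.0 - 0.4 * w        # assumed range of the tone-curve parameter
    return np.clip(luminance, 0.0, 1.0) ** gamma
```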
In this paper, an inversion-free subpixel rendering method that uses eye tracking in a multiview display is proposed. A multiview display causes an inversion problem when one eye of the user is focused on the main region and the other eye is focused on the side region. In the proposed method, the subpixel values are rendered adaptively depending on the eye position of the user to solve the inversion problem. In addition, to enhance the 3D resolution without color artifacts, a subpixel rendering algorithm using subpixel area weighting is proposed instead of rendering with pixel values. In experiments, 36-view images were observed using active subpixel rendering with the eye tracking system on a four-view display.
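As a rough sketch of the idea (not the paper's exact formulation), each R, G, or B subpixel is assigned the view selected for the tracked eye position, and neighboring view images are mixed with weights proportional to the area of the subpixel covered by each viewing zone. The zone geometry and array layout below are hypothetical.

```python
import numpy as np

def render_subpixels(view_images, view_of_subpixel, area_weights):
    """Blend multiview images per subpixel.
    view_images:      array [V, H, W, 3] of the V view images
    view_of_subpixel: array [H, W, 3] of the dominant view index per subpixel,
                      chosen from the tracked eye position
    area_weights:     array [H, W, 3] in [0, 1], fraction of the subpixel
                      area covered by the dominant view's zone
    The remaining area is filled from the next view, avoiding hard view
    boundaries (and hence color artifacts) inside a pixel."""
    V = view_images.shape[0]
    h, w = view_of_subpixel.shape[:2]
    yy, xx, cc = np.meshgrid(np.arange(h), np.arange(w), np.arange(3),
                             indexing="ij")
    main = view_images[view_of_subpixel, yy, xx, cc]
    nxt = view_images[(view_of_subpixel + 1) % V, yy, xx, cc]
    return area_weights * main + (1 - area_weights) * nxt
```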
KEYWORDS: Printing, CMYK color model, Color reproduction, Nonimpact printing, Visualization, Image analysis, Color management, Laser based displays, Image contrast enhancement, Color difference
The same image does not look the same on a display and on a color printer. First, this is due to the difference in bit depth used to represent the color of a pixel: the display uses eight or more bits of color data, whereas the color printer uses just one bit per pixel, so the display can reproduce a smoother image than the printer. Second, the display gamut is larger than the printer gamut, so display colors are brighter and more saturated than printer colors. To minimize the problems caused by these differences, many halftoning and gamut mapping techniques have been developed. For gamut mapping, the color management standards organization ICC recommends two methods, HPMINDE and SGCK. However, the ICC-recommended methods have some weak points: contouring (HPMINDE), pale reproduction of pure colors (SGCK), and overly reddish reproduction of hair color (HPMINDE, SGCK). This paper introduces a gamut mapping method that can reproduce smooth gradation, pure colors with high saturation, and natural hair color. The proposed method is developed for optimal reproduction of graphic images, and it also gives good results for pictorial images.
KEYWORDS: Distortion, Printing, Calibration, Visualization, CMYK color model, RGB color model, Spectrophotometry, Color difference, Data modeling, Nonimpact printing
When time, temperature, or the external environment changes, a laser electrophotographic printer produces color tones quite different from the original ones. To achieve consistent color reproduction, many researchers have tried to characterize printer tone curves and have developed methods to correct color tones. Channel-independent methods are the most widely used, and they follow two approaches: (1) instrument-based correction and (2) visual correction. The two approaches trade off cost against accuracy. In this paper we propose a methodology that combines the strengths of both. We describe how we design a calibration page and how we characterize the lightness variation of a reference patch. We then present our global tone correction procedure, based on the visual appearance match made by end users as well as on the predetermined reference lightness model. We simulate the tone distortion state by varying hardware parameters and perform visual appearance matching experiments with subjects. Our experimental results show that the method can significantly reduce the color difference between the original print and the print in the distorted state. This suggests that we can reliably estimate the distortion parameter and correct the tones close to their original state.
This paper proposes an adaptive error diffusion algorithm for text enhancement, together with an efficient text segmentation that uses the maximum gradient difference (MGD). The gradients are calculated along scan lines, and the MGD values are then filled within a local window to merge text segments. If the value is above a threshold, the pixel is considered potential text. Isolated segments are then eliminated in a non-text region filtering process. After text segmentation, a conventional error diffusion method is applied to the background, while edge-enhancement error diffusion is used for the text. Since visually objectionable artifacts are inevitably generated when two different halftoning algorithms are used, gradual dilation of the segmented text blocks is proposed before halftoning to minimize the boundary artifacts. Sharpening based on the gradually dilated text region (GDTR) then prevents successive dots from being printed around the text region boundaries. The method is extended to halftone color images to sharpen the text regions. The proposed adaptive error diffusion algorithm involves color halftoning that controls the amount of edge enhancement using a general error filter; however, edge enhancement also produces color distortion, since edge sharpening and color difference are in a trade-off. The multiplicative edge enhancement parameters are therefore selected based on the amount of edge sharpening and the color difference. In addition, an extra error factor is introduced to reduce the dot-elimination artifact generated by the edge-enhancement error diffusion. In experiments, the text of a scanned image was sharper with the proposed algorithm than with conventional error diffusion, without changing the background.
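The segmentation step lends itself to a short sketch. Assuming a horizontal-gradient definition and a simple sliding window (the window size and threshold below are illustrative, not the paper's values), the maximum gradient difference can be computed per pixel as follows.

```python
import numpy as np
from scipy.ndimage import maximum_filter1d, minimum_filter1d

def mgd_text_mask(gray, win=21, thresh=60):
    """Maximum gradient difference (MGD) along scan lines: per pixel,
    the difference between the largest and smallest horizontal gradient
    inside a local window. Large values indicate the dense positive and
    negative gradient pairs typical of text strokes."""
    grad = np.gradient(gray.astype(np.float32), axis=1)
    mgd = maximum_filter1d(grad, win, axis=1) - minimum_filter1d(grad, win, axis=1)
    return mgd > thresh   # potential-text mask, before non-text filtering
```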
This paper proposes a color decomposition method for a multi-primary display (MPD) using a 3-dimensional look-up table (3D-LUT) in linearized LAB space. The proposed method decomposes the conventional three primary colors into multi-primary control values for a display device under the constraint of tristimulus matching. To reproduce images on an MPD, the color signals are estimated from a device-independent color space, such as CIEXYZ or CIELAB. In this paper, linearized LAB space is used because of its linearity and additivity in color conversion. First, the proposed method constructs a 3D-LUT containing gamut boundary information to calculate the color signals for the MPD in linearized LAB space. For image reproduction, standard RGB or CIEXYZ is transformed to linearized LAB, and the hue and chroma are then computed with reference to the 3D-LUT. In linearized LAB space, the color signals for a gamut boundary point are calculated to have the same lightness and hue as the input point, and the color signals for a point on the gray axis are calculated to have the same lightness as the input point. Based on these two points, the color signals for the input point are obtained using the chroma ratio, i.e., the input chroma divided by the chroma of the gamut boundary point. In particular, when the hue changes, the neighboring boundary points are also employed. As a result, the proposed method guarantees color signal continuity and computational efficiency while requiring less memory.
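As a simplified sketch of the chroma-ratio step (the 3D-LUT construction and interpolation details are omitted, and the lookup object below is assumed, not defined in the abstract), the multi-primary control values for an input color are interpolated between the gray-axis point and the gamut-boundary point with the same lightness and hue.

```python
def decompose(L, C, h, lut):
    """Chroma-ratio interpolation in linearized LAB.
    lut.boundary(L, h) -> (C_b, signals_b): boundary chroma and its
                          multi-primary control values (same L and hue)
    lut.gray(L)        -> control values for the gray-axis point at L
    Both lookups are assumed to come from the precomputed 3D-LUT."""
    C_b, signals_b = lut.boundary(L, h)
    signals_gray = lut.gray(L)
    t = min(C / C_b, 1.0) if C_b > 0 else 0.0   # chroma ratio
    return [(1 - t) * g + t * b for g, b in zip(signals_gray, signals_b)]
```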
KEYWORDS: Color difference, Printing, RGB color model, Visualization, CMYK color model, Eye, Color imaging, Diffusion, Spectrophotometry, Image quality
This paper proposes an improved six-color separation method that reduces graininess in middle-tone regions based on the standard deviation of lightness and chrominance in SCIELAB space. Graininess is regarded as the visually perceived fluctuation in lightness between light cyan and cyan or between light magenta and magenta. In conventional methods, granularity is highly heuristic and inaccurate because it relies on a visual examination score. Accordingly, this paper proposes an objective method for calculating granularity for six-color separation. First, the lightness, redness-greenness, and yellowness-blueness in SCIELAB space are calculated, reflecting the spatial-color sensitivity of the human eye, and the sum of the three standard deviations is normalized. Finally, after assigning the proposed granularity to a lookup table, the objective granularity is applied to the six-color separation, thereby reducing graininess in the middle-tone regions.
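The granularity measure can be sketched compactly. SCIELAB's spatial filtering is approximated here by a Gaussian blur of the opponent channels (the blur width and the normalization constants are assumptions), after which the three standard deviations are normalized and summed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def granularity(lab_patch, sigma=2.0, norm=(1.0, 1.0, 1.0)):
    """Approximate SCIELAB-style granularity of a printed patch:
    spatially filter L* (lightness), a* (red-green), and b* (yellow-blue),
    take the standard deviation of each filtered channel, normalize,
    and sum the three values."""
    g = 0.0
    for ch, k in zip(range(3), norm):
        filtered = gaussian_filter(lab_patch[..., ch], sigma)
        g += np.std(filtered) / k
    return g
```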
This paper proposes an illuminant estimation algorithm that estimates the spectral power distribution of the incident light source using its chromaticity, determined from the perceived illumination and a highlight method. The proposed algorithm consists of three steps. First, the illuminant chromaticity of the global incident light is estimated using a hybrid method that combines the perceived illumination and the highlight region. Second, the surface spectral reflectance is recovered from the image after decoupling the global incident illuminant for each channel. The surface spectral reflectance calculation is limited to the maximum achromatic region (MAR), the most achromatic and brightest region in the image, and is estimated using principal component analysis (PCA) with a set of given Munsell samples. Third, the closest colors are selected from a spectral database composed of reflected lights generated by the given Munsell samples and a set of illuminants. Finally, the illuminant of the image is calculated from the average spectral distribution of the reflected lights selected for the MAR and its average surface reflectance. Simulations were performed with artificial color-biased images, and the results confirmed the accuracy of the estimates produced by the proposed method for various illuminants.
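The final step admits a compact sketch, assuming the spectral database is given as arrays db_colors (colors of the reflected lights) and db_spectra (their spectral power distributions), and mar_reflectance is the average surface reflectance recovered by PCA for the MAR; all of these names and the nearest-neighbor selection rule are hypothetical.

```python
import numpy as np

def estimate_illuminant(mar_colors, db_colors, db_spectra, mar_reflectance):
    """For each MAR pixel color, pick the closest entry in the spectral
    database (reflected lights of Munsell samples under candidate
    illuminants), average the selected spectra, and divide by the average
    recovered surface reflectance to obtain the illuminant SPD."""
    idx = np.argmin(
        np.linalg.norm(db_colors[None, :, :] - mar_colors[:, None, :], axis=2),
        axis=1)
    avg_reflected = db_spectra[idx].mean(axis=0)          # [n_wavelengths]
    return avg_reflected / np.clip(mar_reflectance, 1e-6, None)
```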
This paper proposes a vector error diffusion method for reducing smear artifacts in boundary regions. These artifacts mainly result from a large accumulation of quantization errors; in particular, color bands a few pixels wide appear as smears along edges. To reduce this artifact, the proposed halftoning process excludes large accumulated quantization errors by comparing the vector norms and vector angles between the error-corrected vector and the eight primary color patches. When the norm of the error-corrected vector is larger than those of all eight primary color patches, the quantization error vector is excluded from the error distribution process. The quantization error is also excluded when the angle between the error-corrected vector and the primary color patches is large. As a result, the proposed method generates a visually pleasing halftone pattern by taking all three color separations into account in a device-independent color space, and it reduces smear artifacts in boundary regions.
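A minimal sketch of the exclusion rule (the angle threshold and the array of primaries are placeholders, not values from the paper): the quantization error is distributed only when the error-corrected vector stays close, in both norm and angle, to the eight primary color patches.

```python
import numpy as np

def should_distribute(error_corrected, primaries, angle_thresh_deg=30.0):
    """Return False (exclude the quantization error) when the
    error-corrected vector is longer than every primary color vector,
    or when the angle to the nearest primary exceeds the threshold."""
    norms = np.linalg.norm(primaries, axis=1)
    v_norm = np.linalg.norm(error_corrected)
    if v_norm > norms.max():
        return False
    cos = primaries @ error_corrected / (norms * v_norm + 1e-12)
    min_angle = np.degrees(np.arccos(np.clip(cos.max(), -1.0, 1.0)))
    return min_angle <= angle_thresh_deg
```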
This paper proposes a gamut mapping algorithm based on color space division for color reproduction across media. Since each color device has a limited range of producible colors, the colors reproduced on a destination device differ from those of the original device. To reduce the color difference between devices, the proposed method divides the whole gamut into parabolic regions based on lightness intervals defined by the just noticeable difference (JND) and on the boundary of the original gamut. Dividing the gamut into parabolic regions and mapping each region piecewise not only accounts for the gamut characteristics but also provides mapping uniformity. The human visual system is more sensitive to lightness variations, and using the lightness JND keeps the lightness mapping variations below the perceptible level. As a result, the proposed algorithm can reproduce high-quality color images using low-cost color devices.
In this paper, we propose an adaptive stereo matching algorithm that handles stereo matching in regions with projective distortion. Since the disparities in a projectively distorted region cannot be estimated by a fixed-size block matching algorithm, an adaptive window warping method with a hierarchical matching process is used to compensate for the perspective distortion. In addition, a probability model based on the statistical distribution of matching errors and constraint functions is adopted to handle the uncertainty of matching points. Since the window warping is combined with reliability estimation of the matching points, no relaxation process is needed. As a result, the overall processing time is reduced compared with conventional stereo matching algorithms that include a relaxation step, and improved matching results are obtained. Experimental results on both disparity maps and 3D model views show that the proposed matching algorithm is effective for various images, even when the image contains projectively distorted regions and repeated patterns.
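As context for the adaptive step, the fixed-size block matching baseline that fails in projectively distorted regions can be sketched as a simple SAD search; the block size and disparity range below are illustrative.

```python
import numpy as np

def sad_disparity(left, right, block=7, max_disp=64):
    """Baseline fixed-size block matching: for each pixel, pick the
    disparity that minimizes the sum of absolute differences (SAD)
    between a block in the left image and the shifted block in the
    right image. The adaptive method warps this window instead of
    keeping it rigid."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch_l = left[y-half:y+half+1, x-half:x+half+1].astype(np.float32)
            best, best_d = np.inf, 0
            for d in range(max_disp):
                patch_r = right[y-half:y+half+1,
                                x-d-half:x-d+half+1].astype(np.float32)
                cost = np.abs(patch_l - patch_r).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```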