Open Access Paper
Applications of a liquid crystal television used as an arbitrary quasi-phase modulator
2 June 1999
Vincent Laude, Carine Dirson, Dominique Delautre, Sebastien Breugnot, Jean-Pierre Huignard
Proceedings Volume 10296, 1999 Euro-American Workshop Optoelectronic Information Processing: A Critical Review; 1029606 (1999) https://doi.org/10.1117/12.365916
Event: Euro-American Workshop on Optoelectronic Information Processing, 1999, Colmar, France
Abstract
Liquid-crystal televisions are inexpensive display devices that can be used as arbitrary quasi-phase modulators to achieve arbitrary wavefront shapes, limited only by the available modulation depth and resolution. We discuss the properties of these devices and then demonstrate four applications of a particular liquid-crystal television: an active lens system, programmable optical image processing experiments, the resolution enhancement of an image sensor, and the measurement of the sensitivity of heterodyne detection to wavefront aberrations.

1 INTRODUCTION

There are many situations in optics that can be adequately described with a complex transmittance acting on a wavefront.1,2 For instance, the imaging characteristics of an optical system can be simply described by the optical transfer function (OTF) in a pupil plane, diffraction phenomena are usually expressed using transforms of a complex 2D function, and so on. For some applications it would be interesting to be able to modify at will the characteristics of the complex transmittance by making it programmable, at least in the vicinity of an operating point. With a single spatial light modulator (SLM) it is generally not possible to control simultaneously the amplitude and the phase, but rather a combination of both. In any case, control of the phase is certainly the most desirable, and is the only case considered from now on.

Takaki et al.3–5 have proposed and demonstrated experimentally an active lens system. They used a phase-only liquid-crystal SLM attached to a thin lens, and demonstrated several different programmable functions, including image shifting and focus control. This concept is also closely related to programmable diffractive functions written onto SLMs.6–10 These are a few examples among the many possibilities that an ideal phase SLM would offer.

There exist several possibilities for the phase SLM, among which the liquid-crystal technology is the most widespread at the moment. Pure-phase modulation can be obtained in the optical birefringence mode using nematic liquid-crystals, which assumes that the molecular directors are parallel throughout the liquid-crystal cell depth.11 However, inexpensive liquid-crystal televisions (LCTVs) that are designed for display applications usually employ twisted-nematic liquid-crystals, and hence always provide a coupled amplitude and phase characteristic.12 Fortunately, an approximate pure-phase modulation can be obtained under certain experimental arrangements.9,10,13

In this paper, we gather together some applications of one particular LCTV used as a quasi-phase modulator that were previously published independently.14–17 Our purpose is mainly to emphasize that many programmable phase functions can be implemented with profit even with an imperfect phase modulator. In section 2, we describe the LCTV used, and more generally we present topics specific to pixelated SLMs, like diffraction effects, and sampling and quantization issues. In section 3, we discuss an active lens system, before using the same set-up for optical image processing experiments in section 4 and the implementation of a resolution-enhancement algorithm in section 5. In section 6, the LCTV is used to measure the sensitivity of coherent heterodyne detection to perturbations.

2 LIQUID-CRYSTAL TELEVISION CHARACTERISTICS

2.1 Amplitude and phase modulation

The SLM we have used is of the twisted-nematic type and is normally designed for display applications. It has 640 × 480 pixels, with a pitch of 40 μm and 56% fill factor. It is driven with VGA signals directly from a computer board. The principle of amplitude and phase modulation with such a twisted-nematic LCTV has been discussed in many papers recently.9–13,18–20 The amplitude and phase modulation with nematic LC-SLMs depends mostly on the voltages applied to the pixels, or equivalently on the grey level of the image written onto the SLM. Additional parameters are the angular positions of the two polarizers in the case of twisted-nematic LC-SLMs. Practically, these angular positions can be set either ad hoc in order to obtain the desired amplitude and/or phase modulation, or they can be determined from the Jones matrix of the SLM.

Even though the Jones matrix for a twisted-nematic LC cell cannot be obtained analytically in general, it can be rather easily obtained experimentally,21 using combinations of amplitude and phase modulation measurements for several angular configurations of the polarizers. The Jones matrix assumes the following form15

M = A exp(−iβ) ( f − ig    h + ij )
               ( −h + ij   f + ig )        (1)

where A is the amplitude attenuation of the cell, β is a phase retardation, and the real parameters f, g, h and j are arbitrary but related by the normalization relation

f² + g² + h² + j² = 1        (2)

The measurement of the amplitude modulation is rather easy and precise, and can be obtained for instance by writing a uniform grey level image onto the SLM and measuring the transmitted intensity.19,12 The phase determination is in essence interferometric, and many different methods have been proposed in the literature. One simple and reliable but very slow method involves a Mach-Zehnder interferometer,18 used to record the displacement of equal inclination fringes between two grey levels. More recent methods employ a Ronchi grating,20 a wedge shear plate9 or Young pinholes.10 In one variant,22 a Ronchi grating is directly written onto the SLM which allows for a very simple and fast implementation. This is the solution that was chosen here.15

Table 1 lists the estimated parameters of the Jones matrix as a function of the grey level.15 From the estimated Jones matrix, the best polarizer combination was selected to achieve quasi-phase modulation. The result is shown in Fig. 1. As can be seen, the phase modulation depth does not reach 2π, and spurious amplitude variations are still observable.

Figure 1:

Measured amplitude and phase modulation for the twisted-nematic LCTV for a 632.8 nm wavelength.


Table 1:

Estimated parameters of the Jones matrix for the twisted-nematic LCTV for a 632.8 nm wavelength.

Grey level   f          g            j          h          β (radians)
0            0.999316   0.0334021    0.0140439  0.0074247  0.0
10           0.999042   0.0365154    0.0176841  0.0163716  0.107
20           0.998202   0.0444636    0.0216496  0.0338713  0.126
30           0.996895   0.0486751    0.0334017  0.0521041  0.175
40           0.994027   0.0585929    0.0482647  0.0784133  0.244
50           0.989248   0.0558784    0.0832184  0.10649    0.345
60           0.982077   0.0671428    0.113445   0.134712   0.404
70           0.977225   0.0289787    0.138574   0.158079   0.491
80           0.969781   0.0688428    0.149968   0.17971    0.494
90           0.964283   0.0527278    0.174133   0.192501   0.569
100          0.956701   0.0465775    0.198198   0.208018   0.649
110          0.941969   0.0612157    0.220935   0.245225   0.685
120          0.932305   0.0150577    0.2516     0.259381   0.811
130          0.912493   0.0550225    0.283191   0.290056   0.838
140          0.892337   0.0335636    0.323797   0.312671   0.931
150          0.860981   −0.0740316   0.370703   0.340311   1.12
160          0.833938   0.0181265    0.410825   0.368025   1.107
170          0.803481   0.0292463    0.454253   0.383689   1.178
180          0.759909   0.0203298    0.506934   0.406377   1.215
190          0.714687   0.0191918    0.553981   0.426567   1.282
200          0.660177   0.0360918    0.620535   0.421663   1.351
210          0.543977   0.0276546    0.722755   0.425381   1.479
220          0.390489   0.0228378    0.831094   0.395322   1.676
230          0.271922   0.051307     0.8994     0.338387   1.792
240          0.211078   0.0562195    0.916633   0.334768   1.829
250          0.20349    0.0109235    0.928267   0.311116   1.888

2.2 Diffraction properties

A common feature of most commercial liquid-crystal SLMs is that they are pixelated devices that usually follow the addressing specifications of television or computer displays. According to the VGA format for instance, images of 640 × 480 pixels can be displayed at a 60 Hz frame rate from a computer memory, and the technology is evolving rapidly towards larger pixel counts, say more than 1000 × 1000 pixels. The pixelation has two important consequences. The first is that the available resolution is ultimately given by the pixel count.4 The second is that diffraction effects occur because of the physical pixels. Pixels can be represented by an aperture function p(s) identical for all pixels but for a translation. The pixels are centered at points mb, where b is the pixel pitch, and pixel number m has complex amplitude transmission t(m). If there are N pixels, the width of the SLM is Nb. The SLM pupil function can be written

f(s) = Σm t(m) p(s − mb)        (3)

where the summation is over all SLM pixels. Eq. (3) is fundamental for the problem considered, since it accounts for the transition from the discrete representation t(m) of the SLM image, as stored for instance in a computer memory, to its continuous representation f(s), taking into account explicitly sampling effects. Upon defining the Fourier transform by

f̂(u) = ∫ f(s) exp(−2iπus) ds        (4)

the Fourier transform of Eq. (3) or point spread function (PSF) is

f̂(u) = [Σm t(m) exp(−2iπumb)] p̂(u)        (5)

This last expression makes it possible to separate the influence of the geometrical shape of the pixels from that of the SLM transmission t(m). Obviously, since the pixels are generally very small, the extent of the function p̂(u) is very broad. The first term in Eq. (5) is a continuous version of the discrete Fourier transform of the discrete image t(m). Indeed, for the sampling points n defined by bu = n/N, and for these points only, the discrete Fourier transform of t(m) is obtained

f̂(n/Nb) = p̂(n/Nb) Σm t(m) exp(−2iπnm/N)        (6)

The sum in Eq. (5) is periodic in u, with a period 1/b. Since only on-axis diffractive control of the PSF is possible, this means that energy is lost in the higher diffraction orders. The 0 order extends over −1/2 to +1/2 in the variable bu, and has exactly N resolution cells, i.e. there are as many resolution cells in the PSF as there are SLM pixels in the pupil plane. The function p̂(u) merely acts as an envelope limiting the energy lost in the higher diffraction orders, as depicted in Fig. 2. In the particular case of a uniform image written onto the SLM (t(m) = 1),

f̂(u) = p̂(u) sin(πNbu) / sin(πbu)        (7)

Figure 2:

Fourier transform of the SLM pupil function.


The width at half maximum of the function of Eq. (7) is then approximately Δu = 1/Nb, consistent with the fact that there are N resolution cells in the PSF.
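The structure of Eq. (5), a periodic DFT factor under a broad pixel envelope, can be checked numerically. The sketch below assumes a rectangular pixel aperture of width 0.56b (from the quoted fill factor) and a small, purely illustrative pixel count:

```python
import numpy as np

N, b = 64, 40e-6          # illustrative pixel count; 40 um pitch as in the paper
a = 0.56 * b              # pixel aperture width from the 56% fill factor (assumed 1D rect)

def slm_ft(t, u):
    # f_hat(u) = [sum_m t(m) exp(-2 i pi u m b)] * p_hat(u), rectangular pixel
    m = np.arange(len(t))
    p_hat = a * np.sinc(a * u)                      # FT of a rect of width a
    return p_hat * np.exp(-2j * np.pi * np.outer(u, m * b)).dot(t)

u = np.linspace(-1.5 / b, 1.5 / b, 6001)            # covers orders -1, 0, +1
psf = np.abs(slm_ft(np.ones(N), u)) ** 2            # uniform image on the SLM

# Replicas at u = +/- 1/b are attenuated by the pixel envelope |p_hat|^2
ratio = psf[5000] / psf[3000]                       # order +1 peak over order 0 peak
```

For a 56% fill factor the first replica carries sinc²(0.56) ≈ 31% of the central-order peak, illustrating how the envelope limits the energy lost off-axis.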

2.3 Sampling considerations

The problem of aberration compensation is considered first, i.e. some aberration function g(s) to be compensated for is given. The wavefront after the SLM is

f(s) = g(s) Σm t(m) p(s − mb)        (8)

If the aberration function is assumed to be slowly varying at the scale of a pixel of the SLM, then

f(s) ≈ Σm teff(m) p(s − mb)        (9)

where the effective SLM transmission is defined by teff(m) = t(m)g(mb). The simplification of Eq. (8) to Eq. (9) is only possible if the aberration function complies with the Shannon condition, i.e. if it does not contain spatial frequencies higher than 1/2b. It can be seen that exact aberration compensation is obtained if teff(m) = 1 for all pixels, i.e. if t(m) = g−1(mb), for in this case the finest possible PSF is obtained. If the aberration function does not satisfy the Shannon condition, perfect compensation is not possible, and the simple sampling operation is not generally the optimal approximation of the continuous pupil function; a local averaging of the aberration function inside an SLM pixel might be used instead

teff(m) = t(m) ∫ p(s − mb) g(s) ds / ∫ p(s) ds        (10)

It can be noted that the raw sampling of Eq. (9) amounts to replacing in Eq. (10) the pixel function p with a Dirac function δ, i.e. assuming that the pixels are very small.

The case of function synthesis is considered now, i.e. a function h(s) is given to be mimicked. One is led back to the problem of aberration compensation if an aberration function is considered that is given by the inverse function h−1(s), i.e. a fictitious aberration function is introduced that has to be compensated for, and this function is exactly the inverse of the function that needs to be synthesized. Then the discussion of the sampling rate applies as well, and aberration compensation and function synthesis are seen to be formally equivalent.
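A minimal numerical check of this compensation principle: sample a smooth, Shannon-compliant pure-phase aberration g at the pixel centers and write its inverse onto the SLM; the effective transmission teff(m) = t(m)g(mb) becomes unity and the flat-pupil PSF peak is restored. The pixel count and aberration below are illustrative assumptions:

```python
import numpy as np

N, b = 64, 40e-6
s = np.arange(N) * b

# A slowly varying pure-phase aberration (one wave of quadratic phase across
# the pupil), far below the 1/2b Shannon limit
g = np.exp(2j * np.pi * (s / (N * b)) ** 2)

t = np.conj(g)          # compensation image t(m) = g^{-1}(mb): pure phase, so conjugate
t_eff = t * g           # effective transmission, identically 1

# PSF peaks (squared moduli of the DFT): compensation restores the N^2 peak
peak_aberrated = np.abs(np.fft.fft(g)).max() ** 2
peak_corrected = np.abs(np.fft.fft(t_eff)).max() ** 2
```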

2.4 Quantization and chromatism

Continuous pure-phase images are necessary in order to implement exactly a desired phase function. However, only a discrete number of grey levels are usually available with a given display device, and these points have to be used to encode the desired continuous phase image. Dallas23 has given an elegant answer to this quantization problem. Closely related works have also reported on the effects of phase quantization of Fresnel lenses encoded in pixelated SLMs24 and on the effects of nonlinearities in the SLM characteristics on the performance of kinoforms.25

Let φ(m) be the sampled phase image that has to be written onto the SLM for every pixel m. The ideal transmission of the device or pupil function should then be

f(s) = Σm exp(iφ(m)) p(s − mb)        (11)

Only p quantization levels are available, and every phase value has to be replaced by one of these p quantized complex values. The usual way to do this is to use the Euclidean projection P[φ], i.e. to choose the quantization point that is closest in distance. This Euclidean projection operator is a 2π-periodic function of φ, for which a Fourier series expansion can be considered

P[φ] = Σk Gk exp(ikφ)        (12)

where the coefficients Gk are defined as

Gk = (1/2π) ∫0^2π P[φ] exp(−ikφ) dφ        (13)

Once they have been computed, the coefficients Gk allow the quantization effects to be evaluated. Indeed

Σm P[φ(m)] p(s − mb) = Σk Gk Σm exp(ikφ(m)) p(s − mb)        (14)

The quantized phase image can be seen as a superposition of the desired continuous phase image, with weight G1, and of ghosts with weights Gk for k ≠ 1. Energy is spread between the ghosts since Σk ∣Gk∣² ≤ 1, with equality in the case of no amplitude modulation. Tab. 2 gives the intensities of the ghosts for the particular twisted-nematic LC-SLM used. It should be noted that the ∣G1∣² value determines the maximum efficiency available with a given set of quantization points for implementing a pure-phase SLM image. In this case, the diffraction efficiency is limited to 62%, but there is almost no energy lost in the undesired ghosts, i.e. the residual amplitude modulation is mostly responsible for the decrease in efficiency.

Table 2:

∣Gk∣2 coefficients for the twisted-nematic LC-SLM used.

k        −3       −2       −1       0        1        2        3        4
∣Gk∣²    0.0085   0.0105   0.0091   0.0004   0.6228   0.0067   0.0007   0.0015
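The values in Table 2 come from the measured modulation of the device. As an illustration of the Fourier-series analysis of the projection operator, the sketch below computes the Gk coefficients for an idealized pure-phase modulator quantized on p = 4 equally spaced levels (an assumption, not the device of Table 2); the classical sinc²(1/p) efficiency is recovered:

```python
import numpy as np

p = 4                                    # number of phase quantization levels (illustrative)
levels = 2 * np.pi * np.arange(p) / p

phi = np.linspace(0, 2 * np.pi, 4096, endpoint=False)

# Euclidean projection: replace exp(i*phi) by the closest available quantized point
pts = np.exp(1j * levels)
proj = pts[np.argmin(np.abs(np.exp(1j * phi)[:, None] - pts[None, :]), axis=1)]

def G(k):
    # Fourier-series coefficient of the (2pi-periodic) projection operator
    return np.mean(proj * np.exp(-1j * k * phi))

efficiency = np.abs(G(1)) ** 2           # fraction of energy in the desired phase image
ghost_sum = sum(np.abs(G(k)) ** 2 for k in range(-50, 51))
```

For pure-phase quantization the ghost energies sum to 1, and the nonzero ghosts occur only for k ≡ 1 modulo p, which the computation reproduces.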

Up to now, only monochromatic illumination has been considered. The practically interesting case of temporally incoherent illumination, e.g. as obtained from a white-light source, is considered now. This situation can be described mathematically as a continuous wavelength spectrum centered around λ0, such that all the PSFs created by individual wavelengths add incoherently, i.e. in intensity. Three different chromatism effects can be identified: refractive index chromatism, diffraction chromatism and quantization chromatism. The first is characterized by the wavelength dependence of the refractive index of the liquid-crystal, and is neglected here. The second is associated with diffraction, which depends explicitly on the wavelength; however, diffraction chromatism does not affect the zero diffraction order. The third is created by quantization effects, and more specifically results from the modulo-2π definition of the phase image written onto the SLM; it is the main cause of error with SLMs. Indeed, if the image is reset to 0 when 2π is reached for wavelength λ0, the phase reached before reset will be higher (resp. lower) for a wavelength smaller (resp. larger) than λ0. This results in a Euclidean projection given by15

Pλ[φ] = exp(i (λ0/λ) (φ mod 2π))        (15)

From this expression, the Gk coefficients are easily calculated to be

Gk = exp(iπ(λ0/λ − k)) sinc(λ0/λ − k)        (16)

where sinc(x) = sin(πx)/(πx).

Fig. 3 shows how the intensity ∣Gk∣² of the k-th ghost evolves as a function of wavelength. When λ = λ0 only the first ghost is present, i.e. the desired phase function is perfectly encoded. In the vicinity of λ0, the intensities of the other ghosts increase gradually, implying a corresponding decrease in the first ghost intensity, but only the k = 0 ghost is truly significant if the relative bandwidth does not exceed a reasonable value, say 20%.

Figure 3:

Quantization chromatism, that is due to the modulo-2π definition of phase: intensity ∣Gk∣² of the k-th ghost as a function of wavelength.

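A numerical sketch of this behaviour, assuming the wrapped-phase model described above (the phase actually produced at wavelength λ is (λ0/λ) times the phase written modulo 2π at λ0):

```python
import numpy as np

lam0 = 632.8e-9       # design wavelength, at which the phase wraps exactly at 2 pi

def ghost(lam, k, n=8192):
    # G_k at wavelength lam: Fourier coefficient of the wrapped-phase projection
    phi = np.linspace(0, 2 * np.pi, n, endpoint=False)
    proj = np.exp(1j * (lam0 / lam) * phi)
    return np.mean(proj * np.exp(-1j * k * phi))

perfect = np.abs(ghost(lam0, 1)) ** 2        # 1: encoding is exact at lam0
g0 = np.abs(ghost(1.1 * lam0, 0)) ** 2       # main spurious ghost at +10% detuning
g1 = np.abs(ghost(1.1 * lam0, 1)) ** 2       # desired term, still close to 1
g2 = np.abs(ghost(1.1 * lam0, 2)) ** 2
```

Even 10% away from the design wavelength the desired term keeps more than 95% of the energy, and the k = 0 ghost dominates the spurious terms, consistent with the discussion above.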

3 ACTIVE LENS

3.1 Principle

A liquid-crystal active imaging system is sketched in Fig. 4. Two doublets corrected for spherical aberration are used to form the image of an object onto a CCD camera.15 The LCTV is placed in between the doublets and defines the pupil of the system. If a constant phase pattern is written onto the SLM, the image characteristics are not altered. However, if the phase pattern represents a prism, the image is shifted transversally, which is equivalent to beam steering,26,27 whereas if it represents a thin lens the image is translated longitudinally, or defocused.28–32 Other image forming modifications controlled by certain phase patterns can be imagined as well. It can be noted that such an active imaging system is also closely connected to applications such as aberration compensation and adaptive optics using liquid-crystals.33–36

Figure 4:

Principle of the liquid-crystal active lens.


In the experiments, the object is illuminated with either a He-Ne laser (λ = 632.8 nm) or a halogen white-light source. The object is a resolution target illuminated indirectly by a rotating diffuser, a circular piece of ground glass driven by a motor. The rotating diffuser is used to wash out the speckle when the laser illumination is used. The pixel pitch of the CCD sensor is 11 × 11 μm2. The images shown in Figs. 5–8 were acquired using a frame grabber with 640 × 480 pixels and 8 bits of resolution (VGA format).

Figure 5:

Horizontal image shifting. The left column shows the phase images written onto the SLM, and the right column shows the experimental images obtained. From top to bottom, 100 μm shift and 400 μm shift.


Figure 6:

Vertical image shifting. The left column shows the phase images written onto the SLM, and the right column shows the experimental images obtained. From top to bottom, 100 μm shift and 400 μm shift.


Figure 7:

Positive focus control. The left column shows the phase images written onto the SLM, and the right column shows the experimental images obtained. From top to bottom, 2% and 5% defocus.


Figure 8:

Negative focus control. The left column shows the phase images written onto the SLM, and the right column shows the experimental images obtained. From top to bottom, −2% and −5% defocus.


3.2 Image steering

The problem of image shifting, or beam steering, is to shift the point spread function from the optical axis to an arbitrary location x0 in the image plane. This can be done without energy loss if the following pure-phase image can be written onto the SLM

t(m) = exp(2iπvmb)        (17)

where v = x0/λz′, for then

f̂(u) = p̂(u) sin(πNb(u − v)) / sin(πb(u − v))        (18)

i.e. the unperturbed point spread function of Eq. (7) is obtained, but shifted to the desired location x0. The maximal resolution available with the SLM, i.e. λz′/Nb, is conserved in the shift. It is worth noting that the transform is periodic in u, i.e. the image shifting is identical in every diffraction order and the maximum possible shift is limited by the extent of the central diffraction order. There is no limitation on the shift inside the central diffraction order, and moreover the shift is continuous and not restricted to discrete values. The distance between adjacent diffraction orders created by the pixelated structure of the SLM is λz′/b = 4.75 mm in both directions. These dimensions define the useful portion of the image formed on the 2D CCD array. The resolution inside the central diffraction order is exactly 640 × 480 points, or 7.4 μm and 9.9 μm in the x and y directions respectively.

Figs. 5 and 6 show examples of horizontal and vertical image shifting respectively. The higher diffraction orders that are due to the pixelated nature of the SLM are clearly visible and appear with less energy than the useful central diffraction order, as expected. The quality of the shifted images degrades as the amount of shift requested increases, or equivalently as the spatial frequency of the “prisms” written onto the SLM increases. Significantly, the quality of image shifting is much higher vertically than horizontally, even though the number of pixels is higher in the horizontal direction (640 pixels) than in the vertical direction (480 pixels). This is a direct consequence of the addressing scheme, i.e. VGA signals, used for displaying the phase image onto the SLM. This limitation is purely technological, and would be relieved if a real pixel by pixel addressing scheme were used. In any case, it is encouraging to see that a phase-mostly modulation limited to approximately 3π/2 is enough to obtain good quality results. Interestingly, it can be seen that small shifts can be achieved with very few resolution losses and with almost all the energy diffracted correctly at the shifted location, i.e. the energy of the ghosts is small as predicted.
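The steering relation can be checked with a small simulation. The sketch below assumes a prism phase t(m) = exp(2iπvmb) with v = x0/λz′, a sign convention chosen to match numpy's forward FFT, and illustrative values of N and z′; requesting a shift of exactly five resolution cells lands the PSF peak five bins away:

```python
import numpy as np

N, b = 64, 40e-6
lam, zp = 632.8e-9, 0.3          # wavelength; z' = 0.3 m is an illustrative distance
m = np.arange(N)

def prism(x0):
    # Pure-phase prism steering the PSF to x0 in the image plane; v = x0/(lambda z').
    v = x0 / (lam * zp)
    return np.exp(2j * np.pi * v * m * b)

# Request a shift of exactly 5 resolution cells, x0 = 5 * lam * zp / (N b)
x0 = 5 * lam * zp / (N * b)
psf = np.abs(np.fft.fft(prism(x0))) ** 2
peak = np.argmax(psf)
```

Because the prism is pure phase, the shifted peak keeps the full N² on-axis intensity: no energy is lost in the steering itself.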

3.3 Focus control

In order to change the longitudinal position of the image plane by a given amount, it suffices to sample a lens function at the SLM pixel rate to obtain a SLM Fresnel lens32

t(m) = exp(iπϵ(mb)²/λz′)        (19)

where ϵ measures the amount of defocus, i.e. the image will be in focus at (1 − ϵ)z′ instead of z′. It should be noted however that the above lens function never complies with the Shannon criterion for any sampling rate, so that aliasing is always present. If ∣ϵ∣ becomes very large, then instead of the desired central lens many “secondary lenses” appear at predictable locations and with predictable focal lengths and energies.31 A rough estimation of the range of defocus that is achievable with limited aliasing can be obtained simply.4 Combining image transverse and longitudinal shift can be achieved easily using the following SLM image:

t(m) = exp(2iπvmb + iπϵ(mb)²/λz′)        (20)

Figs. 7 and 8 show examples of positive and negative focus control respectively. The experimental images were acquired after translation of the CCD camera to obtain the best focus plane. As in the case of image shifting, higher diffraction orders are clearly visible, but the effect of increasing spatial frequencies of the “lenses” written onto the SLM, or equivalently increasing defocus, is quite different. The resolution of the defocused image seems almost unaltered for large defocuses, but its energy decreases. This can be understood by an energy spreading due to the multiple lenses that appear in a sampled Fresnel lens when a small focal length is requested.31 It can also be seen that the image size increases with negative defocus and decreases with positive defocus, as expected from geometrical optics. The amount of defocus that it is possible to generate with negligible aliasing is limited to the −10% to 8% range with the set-up used.
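The quoted defocus range can be cross-checked with a rough Shannon-criterion estimate. The sketch below assumes a quadratic lens phase πϵs²/λz′ and infers λz′ from the order spacing λz′/b = 4.75 mm quoted in section 3.2; both the phase model and the resulting bound are order-of-magnitude assumptions, not the paper's exact derivation:

```python
N, b = 640, 40e-6                 # SLM pixels (horizontal) and pitch
lam_zp = 4.75e-3 * b              # lambda * z' inferred from lambda z' / b = 4.75 mm

def local_frequency(eps, s):
    # Local spatial frequency of the assumed lens phase pi*eps*s^2/(lambda z'):
    # (1/2pi) |d(phase)/ds| = |eps| * |s| / (lambda z')
    return abs(eps) * abs(s) / lam_zp

# Shannon criterion at the pupil edge s = N b / 2: local frequency below 1/(2b)
eps_max = lam_zp / (N * b ** 2)   # ~0.19: same order as the measured -10%..8% range
```

The bound saturates exactly at the Nyquist rate at the pupil edge, and its magnitude is consistent with the experimentally observed defocus range.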

4 ANALOG IMAGE PROCESSING

4.1 Principle

We propose to make use of the liquid crystal active lens to implement optically a certain number of image processing operations. The main benefit compared to numerical implementations of the same algorithms is the possibility to implement continuous transformations without sampling side-effects. Directional edge extraction can be accomplished by subtracting shifted frames, with the width of the edges varied continuously. Focal length shifting (blurring) can be controlled continuously to yield low-pass filtering. Arbitrary combinations of low-passed and shifted versions of the same image can thus be obtained, and a number of different linear or nonlinear filtering operations can then be achieved. Suppose we form the following operation:

g(x, y) = α f(x, y) + β ℌf(x, y)        (21)

In this equation, f(x,y) denotes the image detected by the CCD camera with no phase pattern written onto the SLM, and ℌf(x, y) the same image but subjected to a combination of shift and defocus. Images f(x, y) and ℌf(x, y) are easily optically formed by our active lens system, and α and β are scalar coefficients for combining these images. The image g(x,y) can be computed very efficiently by a computer, since this is achieved in a single pass with one addition and two multiplications per pixel.
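A digital stand-in for this operation; here an array plays the role of the detected image, and a one-pixel roll stands in for the optically shifted image produced by a prism phase pattern:

```python
import numpy as np

def combine(f, Hf, alpha, beta):
    # g = alpha*f + beta*Hf: one addition and two multiplications per pixel
    return alpha * f + beta * Hf

f = np.zeros((8, 8))
f[2:6, 2:6] = 1.0                      # a bright square on a dark background
Hf = np.roll(f, 1, axis=1)             # stand-in for the optically shifted image
g = combine(f, Hf, alpha=-1.0, beta=1.0)
# g is nonzero only on the vertical edges of the square (directional edge detection)
```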

4.2 Experiments

The object image used for all following experiments is shown in Fig. 9. Defocus is well known to be equivalent to low-pass filtering. Alternatively, it can be used to generate a high-pass filter by subtracting from the object a defocused version of itself. This is illustrated by Fig. 9 for two values of defocus.

Figure 9:

First line, 640 × 480 image used for the optical image processing experiments. Second line, experimental result of high-pass filtering. From left to right, the amount of defocus is 1% and 2%.


We first consider directional edge detection. By writing a prism type image onto the SLM, we are able to shift the image on the CCD camera to any position. If the operator ℌ in Eq. (21) represents this shift operation, and taking for example α = −1 and β = 1, we obtain a directional edge detection, some examples of which are shown in Fig. 10. Now, instead of writing a single phase prism onto the SLM, it is possible to split the pupil plane into two parts, and to use for example two oppositely directed prisms. We then get two replicas of the object, shifted symmetrically. Setting α = −2 and β = 2 (half the energy goes into each replica), we obtain a symmetrical directional edge detection, some examples of which are shown in Fig. 11. As a last example, if the pupil plane is split into four parts, with oppositely directed prisms in each part, we get still another kind of directional edge detection, as illustrated in Fig. 12 (α = −4 and β = 4).

Figure 10:

Experimental result of directional edge detection. Shift directions are indicated under the images, and correspond to an 80 μm displacement on the camera.


Figure 11:

Experimental result of directional edge detection obtained by subtracting a duplicated version of the original image from itself. Shift directions are indicated under the images, and correspond to an 80 μm displacement on the camera.


Figure 12:

Experimental result of directional edge detection. The four shift directions are along the diagonals, and correspond to an 80 μm displacement on the camera.


5 RESOLUTION ENHANCEMENT

5.1 Principle

It is well-known1,2 that the resolution of an optical imaging system is ultimately limited by diffraction, and that the smallest observable feature is roughly the size of the Airy diffraction pattern. Super-resolution is usually understood as the possibility of accessing details of the observed object that lie beyond the diffraction limit. However, in practical imaging systems, the pixel size sets another limit to the resolution that in many practical cases is more stringent than the diffraction limit. We restrict our attention to the case when the size of the diffraction pattern is much smaller than the size of the detection pixels, and we try to recover an image with a resolution higher than that of the 2D sensor. It is possible to obtain a higher resolution image from a sequence of frames from the same scene only if some different distortion of the scene occurs before detection and sampling of each frame. Such an operation can be achieved by micro-scanning the detector plane,37,38 i.e. a high resolution image is obtained from low resolution frames of the same scene where each frame is offset by a sub-pixel displacement. In other words, this is a deconvolution problem where the convolution kernel is synthesized by combining the same pixel shape at different locations. The main limitations arise in practice from acquisition noise and the limited accuracy with which the sub-pixel displacements are known.39 Even though in theory either the detector or the projected image can be moved, it is generally faster to steer the projected image. Usual solutions involve steering mirrors or prisms. However, a programmable solution with no moving mechanical parts is certainly desirable.

5.2 Deconvolution filter

If the sensor has N pixels, and a resolution enhancement factor of p is sought, this means that the high resolution image J will have p × N pixels. If the pixel dimension is a in the sensor, the pixel dimension of the high resolution image will be a/p. Let us denote Iq the low resolution image obtained after a sub-pixel displacement of qa/p; Iq contains N pixels. If it is assumed that there is no gap between the sensor pixels, then

Iq(n) = Σr=0,…,p−1 J(pn + q + r) + Nq(n)        (22)

where n varies between 0 and N − 1. Image Nq accounts for the acquisition noise, which is assumed to be uncorrelated. Next, the images Iq are mixed together to form image I defined by

I(pn + q) = Iq(n)        (23)

Figure 13:

Schematic representation of the principle of resolution enhancement using micro-scanning. p = 3 in this example. Three images I0, I1 and I2 of the same scene are acquired with respectively 0, 1/3 and 2/3 sub-pixel displacement. From these data, the high resolution image J must be reconstructed.


In this equation, m varies between 0 and pN − 1, so that I has pN pixels like J. Eq. (22) can then be rewritten as

I(m) = Σr=0,…,p−1 J(m + r) + N(m)        (24)

where the high resolution noise image N was formed from the low resolution noise images Nq using Eq. (23). Note that noise N is uncorrelated since the Nq are assumed uncorrelated. It is clear from Eq. (24) that the mixed image I is the result of the correlation of J with a kernel D such that D(m) = 1 for m = 0, …,p − 1 and D(m) = 0 elsewhere:

I(m) = Σr D(r) J(m + r) + N(m)        (25)

Fourier transforming Eq. (25) yields

Î(k) = D̂(k)* Ĵ(k) + N̂(k)        (26)

Written in this form, it is seen that the mixed image I has the same spectral content as the high resolution image J, but filtered by D̂(k). The action of D is clearly to enhance low frequencies and to lower high frequencies, but its most disturbing feature is the disappearance of all the spectral content of J near the zeros of D̂(k), in the vicinity of which only acquisition noise remains. As a consequence, the amount of information lost will depend on the strength of the acquisition noise.
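The forward model of Eqs. (22)–(25) can be sketched numerically; the noise-free case and circular boundary handling below are simplifying assumptions made for the illustration:

```python
import numpy as np

p, N = 3, 16                     # enhancement factor and low-resolution pixel count
rng = np.random.default_rng(0)
J = rng.random(p * N)            # the (unknown) high resolution image, pN pixels

def low_res(q):
    # Eq. (22), noise-free: each sensor pixel integrates p adjacent high-resolution
    # pixels, after a sub-pixel displacement q*a/p (circular wrap at the border)
    return np.array([sum(J[(p * n + q + r) % (p * N)] for r in range(p))
                     for n in range(N)])

# Eq. (23): interleave the p low resolution frames into the pN-pixel image I
I = np.empty(p * N)
for q in range(p):
    I[q::p] = low_res(q)

# Eq. (25): I equals the correlation of J with the box kernel D(m)=1, m=0..p-1
D_corr = sum(np.roll(J, -r) for r in range(p))
```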

The computation of an estimate Je of the high resolution image J from the data I obtained via micro-scanning has been tackled from different points of view in the literature (see e.g. Refs.39,40 and references therein). For the sake of simplicity, we will use a linear frequency domain approach, in which a deconvolution filter is applied to the spectrum of I to yield the estimate:

Ĵe(k) = F(k) Î(k)        (27)

The Wiener filter is well-known to optimize the signal-to-noise ratio, but requires that the spectral density of J be known at least roughly. We use the following simple filter that depends on a single parameter s that is tuned depending on the strength of the noise:

F(k) = 1/D̂*(k) if ∣D̂(k)∣² > s², and F(k) = 0 otherwise    (28)

The underlying idea is to discriminate between spatial frequencies for which useful information is expected, i.e. those for which the power spectral density ∣D̂(k)∣² exceeds a certain threshold s², and frequencies for which almost only noise is expected.
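
The thresholded deconvolution described above can be sketched in a few lines. The following illustration (toy signal, naive DFT to stay dependency-free, and a hypothetical threshold value s²) builds the mixed image by circular correlation with the box kernel D, then inverts its spectrum only where the kernel retains energy:

```python
import cmath

# Frequency-domain reconstruction sketch: invert the spectrum of the mixed
# image where |D_hat(k)|^2 exceeds the threshold s^2, zero it elsewhere.
def dft(x, sign=-1):
    n = len(x)
    return [sum(x[j] * cmath.exp(sign * 2j * cmath.pi * k * j / n)
                for j in range(n)) for k in range(n)]

p, pN, s2 = 2, 8, 0.5
J = [1.0, 2.0, 0.0, 1.0, 3.0, 1.0, 0.0, 2.0]      # high-resolution image
D = [1.0 if m < p else 0.0 for m in range(pN)]    # box kernel
# Mixed image: circular correlation of J with D (noise-free)
I = [sum(J[(m + mp) % pN] * D[mp] for mp in range(pN)) for m in range(pN)]

Ih, Dh = dft(I), dft(D)
# Correlation theorem: Ih(k) = Jh(k) * conj(Dh(k)); invert where energy remains
Jeh = [ih / dh.conjugate() if abs(dh) ** 2 > s2 else 0.0
       for ih, dh in zip(Ih, Dh)]
Je = [v.real / pN for v in dft(Jeh, sign=+1)]     # inverse DFT
print([round(v, 6) for v in Je])
```

Only the frequency where the kernel spectrum vanishes (k = pN/2 for p = 2) is discarded, so the estimate differs from J by exactly that missing component.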

5.3

Experiments

The goal of the experiments was not actually to demonstrate an enhancement of the resolution of the CCD sensor, which was already almost at the diffraction limit, but to show that a sensor with less resolution could be enhanced by a liquid-crystal active lens system.

Fig. 14 shows the scene as seen by the CCD camera and acquired with a frame-grabber at a resolution of 320×240 pixels. This resolution was chosen to be a factor of two above the diffraction limit. The scene represents glasses on a tray. Low resolution images were generated by averaging p × p pixels for all required sub-pixel displacements. Experimental results are shown for p = 2 and p = 4. All images are shown without any post-processing. A resolution enhancement is clearly obtained in both cases, although a mesh-like structure with a period of exactly p pixels appears after deconvolution. We attribute this effect to an error in the exact displacements achieved. Such errors arise from an imprecise knowledge of the pixel pitches of the SLM and the CCD sensor, and, probably more significantly, of the focal length of the second lens in our experiment.
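
For reference, the generation of a low-resolution frame by p × p averaging, as described above, can be sketched as follows (the toy image, the displacements and the wrap-around indexing are illustrative assumptions):

```python
# Emulate a low-resolution frame: average p x p blocks of the full-resolution
# image, offset by the sub-pixel displacement (qx, qy) of that acquisition.
def low_res_frame(img, p, qx, qy):
    h, w = len(img), len(img[0])
    return [[sum(img[(p * n + qy + a) % h][(p * m + qx + b) % w]
                 for a in range(p) for b in range(p)) / (p * p)
             for m in range(w // p)] for n in range(h // p)]

img = [[float((3 * i + j) % 5) for j in range(8)] for i in range(8)]
frame = low_res_frame(img, 2, 0, 0)
print(len(frame), len(frame[0]))   # 4 4
```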

Figure 14:

Experimental implementation of resolution enhancement. First line: 320 × 240 image used in the experiments. Left column, from top to bottom: examples of low resolution frames Iq for p = 2 and 4 respectively. Right column, from top to bottom: reconstructed images Je with s2 = 15.

00338_psisdg10296_1029606_page_19_1.jpg

6

SENSITIVITY OF HETERODYNE DETECTION TO DISTORTIONS

6.1

Principle

Heterodyne detection is a powerful technique to recover a small signal buried in noise, and is widely used in optronic sensing devices. It is usually based on the following optical configuration.41 Two frequency-shifted coherent beams are generated; the first one, the local oscillator, is kept in the system as a reference; the second one is used to illuminate a remote target, which partially reflects it. A fraction of this backscattered signal is then collected. In order to improve the signal-to-noise ratio, the backscattered signal is demodulated by the local oscillator: the local oscillator and backscattered signal wavefronts are superimposed by a beam splitter and then focused on a fast detector. In the ideal case, the two wavefronts are plane and parallel and limited transversely by the same pupil. Both focal spots are then identical Airy distributions, and the interesting part of the electrical signal at the output of the detector is a beat, the so-called heterodyne signal, whose frequency is the difference between those of the backscattered signal and the local oscillator. The heterodyne efficiency is expressed in the pupil as41,42

η = ∣∫ E0*(r) Es(r) d²r∣² / [∫ ∣E0(r)∣² d²r · ∫ ∣Es(r)∣² d²r]    (29)

In the above formula E0(r) and Es(r) are respectively the local oscillator and the signal electric fields in the pupil, and the integrals are taken over the whole surface of the pupil. Ideally, η should be equal to one. This is the case if both Airy spots have the same size, though not necessarily the same intensity.
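
The overlap-integral definition of the heterodyne efficiency can be evaluated numerically. The sketch below (midpoint-rule integration over a unit-radius pupil; the field profiles are illustrative assumptions) confirms that matched plane waves give η = 1, while a mismatched amplitude profile reduces η below one:

```python
import math

# eta = |integral of E0* Es|^2 / (integral |E0|^2 * integral |Es|^2),
# all integrals taken over a circular pupil of radius R.
def efficiency(E0, Es, R=1.0, n=200):
    num = 0j
    p0 = ps = 0.0
    h = 2 * R / n
    for i in range(n):
        for j in range(n):
            x = -R + (i + 0.5) * h
            y = -R + (j + 0.5) * h
            if x * x + y * y > R * R:
                continue                 # stay inside the pupil
            a, b = E0(x, y), Es(x, y)
            num += a.conjugate() * b
            p0 += abs(a) ** 2
            ps += abs(b) ** 2
    return abs(num) ** 2 / (p0 * ps)

plane = lambda x, y: 1.0 + 0j
gauss = lambda x, y: complex(math.exp(-(x * x + y * y)))
print(round(efficiency(plane, plane), 3))        # 1.0: matched plane waves
print(efficiency(plane, gauss) < 1.0)            # True: profile mismatch
```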

In practice, the backscattered signal wavefront is affected by the reflection on the target and by atmospheric perturbations41 along the propagation path. The backscattered wavefront obtained on the pupil of the system then presents a speckle pattern, and its focal spot is broadened, causing the heterodyne signal to drop. Moreover, even without being distorted, the backscattered wavefront can have a variable direction of propagation, or tilt, which also affects the heterodyne signal. Heterodyne detection is therefore highly sensitive to any phenomenon that affects the shape and position of the backscattered signal spot on the detector, or equivalently to the aberrations of the backscattered wavefront. This sensitivity is theoretically well known, but although it has been observed qualitatively, precise quantitative measurements are difficult to obtain. In the following, we apply the LCTV as a programmable wavefront aberrator.

6.2

Tilt and focus

The experimental setup is depicted in Fig. 15. The output of a He-Ne laser is injected into an acousto-optic modulator (AOM) driven by a 45-MHz acoustic signal. The 1-order is deflected and frequency-shifted with respect to the 0-order. These two beams then enter a Mach-Zehnder interferometer. In the first arm, the 0-order is expanded and passed through a circular aperture, and serves as the local oscillator. In the second arm, the 1-order is also expanded, passed through a circular aperture, and controlled by the SLM wavefront aberrator. The local oscillator and signal beams are superimposed by a beam-splitter and then focused on a 1-mm photodiode. The photodiode has a sufficiently large bandwidth to measure both the 45-MHz beat and the continuous signal.

Figure 15:

Experimental set-up for measuring the sensitivity of heterodyne detection to perturbations programmed on the SLM.

00338_psisdg10296_1029606_page_21_1.jpg

Fig. 16 shows the experimental field-of-view of our heterodyne detection system, i.e. the heterodyne efficiency as a function of the tilt angle. This is a direct measurement of the sensitivity to misalignments. The tilt angle is obtained by varying the slope of a Fresnel prism written onto the LCTV. The theoretical form of this curve is known to be43

Figure 16:

Heterodyne efficiency as a function of tilt.

00338_psisdg10296_1029606_page_22_1.jpg
η(θ) = [2 J1(kθR) / (kθR)]²    (30)

where k = 2π/λ, θ is the tilt angle and R is the radius of the pupil. It can be seen in Fig. 16 that experiment and theory are in good agreement.
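
The tilt dependence can be evaluated numerically from the standard circular-pupil result η(θ) = [2J1(kθR)/(kθR)]², using the integral representation of the Bessel function J1 to stay dependency-free. The pupil radius and wavelength below are assumed values, not those of the experiment:

```python
import math

# Bessel J1 via its integral representation:
# J1(x) = (1/pi) * integral over [0, pi] of cos(t - x*sin(t)) dt
def j1(x, n=2000):
    h = math.pi / n
    return sum(math.cos(t - x * math.sin(t)) * h
               for t in (h * (i + 0.5) for i in range(n))) / math.pi

def eta_tilt(theta, R=5e-3, lam=633e-9):   # assumed 5 mm pupil, He-Ne line
    x = 2 * math.pi / lam * R * theta
    return (2 * j1(x) / x) ** 2 if x else 1.0

print(round(eta_tilt(0.0), 3))             # 1.0 at perfect alignment
print(eta_tilt(20e-6) < eta_tilt(10e-6))   # True: efficiency falls with tilt
```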

In order to determine the defocus sensitivity of the heterodyne efficiency, a Fresnel lens is written onto the SLM, and its focal length is varied progressively. In this way, the curvature of the signal beam is modified, whereas the local oscillator remains perfectly focused on the detector. The experimental results are shown in Fig. 17, where the heterodyne efficiency is plotted as a function of the defocus, expressed as a percentage of the focal length variation generated by the SLM normalized to the focal length of the lens (f = 250 mm). The theoretical curve of Fig. 17 is obtained by numerical integration in the pupil plane, assuming plane waves with uniform illumination incident on the SLM and for the local oscillator, and taking into account the propagation between the SLM and the lens. Note that no analytical expression exists as in the case of tilt. Again, experiment and theory agree reasonably well, at least for small values of defocusing. For large defocus values, the experimental heterodyne efficiency tends to a constant, whereas theoretically it should tend to zero. This can be explained by the fact that the SLM cannot perfectly display the high spatial frequencies that appear at large defocus, so that there is always a background of undiffracted light coherent with the local oscillator.
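
The defocus is programmed by writing a Fresnel lens onto the SLM. A hypothetical sketch of such a quantized phase pattern is given below; the pixel count, pitch, wavelength and quantization depth are assumed values, not taken from the experiment:

```python
import math

# Quantized Fresnel-lens phase pattern of focal length f for a pixelated SLM.
# The lens phase is -k*r^2/(2f), wrapped modulo 2*pi and quantized on
# 'levels' gray levels (all parameter values are illustrative assumptions).
def fresnel_lens(npix=8, pitch=40e-6, f=0.25, lam=633e-9, levels=256):
    k = 2 * math.pi / lam
    c = npix / 2 - 0.5                       # optical axis at the array center
    screen = []
    for i in range(npix):
        row = []
        for j in range(npix):
            r2 = ((i - c) ** 2 + (j - c) ** 2) * pitch ** 2
            phase = (-k * r2 / (2 * f)) % (2 * math.pi)
            row.append(int(phase / (2 * math.pi) * levels) % levels)
        screen.append(row)
    return screen

s = fresnel_lens()
print(len(s), len(s[0]))   # 8 8
```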

Figure 17:

Heterodyne efficiency as a function of defocus.

00338_psisdg10296_1029606_page_23_1.jpg

6.3

Atmospheric turbulence

The tilt and defocus aberrations considered above are the simplest aberrations that can be conceived, and they serve well to qualify the robustness of heterodyne detection. However, for practical implementations, atmospheric propagation generates turbulent complex wavefronts. It is generally accepted that a good approximation to turbulence is provided by random correlated Kolmogorov phase screens.44 In our method, prior to being written onto the SLM, these phase screens are generated using the method of Gamble and Weeks.45 The strength of turbulence is measured by the reduced parameter 2rLO/r0, where rLO is the radius of the local oscillator and r0 is the Fried radius. Fig. 18 shows an example of a phase screen generated for 2rLO/r0 = 0.2. Fig. 19 shows experimental results obtained by displaying 16 different phase patterns for each value of 2rLO/r0, so that a mean value and a variance could be approximately computed. The theoretical curve was generated by numerical integration in the pupil plane using 100 different phase patterns for each value of 2rLO/r0. It can be seen that the heterodyne efficiency decreases when the diameter of the speckle cells is smaller than the diameter of the local oscillator aperture. For small values of r0, the strength of the turbulence is such that the Kolmogorov phase screen varies very rapidly, and eventually tends to white noise. As in the case of defocus, for large values of 2rLO/r0 there is always a background of undiffracted light which explains the non-zero experimental mean of the heterodyne efficiency.
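
Random correlated phase screens can be sketched with a simple spectral-synthesis approach. The following is not the Gamble-Weeks method used in the experiments, but a generic illustration in which random cosine modes are weighted according to the Kolmogorov spectral slope (all parameters are assumptions):

```python
import math
import random

# Generic correlated random phase screen (illustrative only): a sum of cosine
# modes whose amplitudes follow a Kolmogorov-like slope ~ f**(-11/6), so that
# low spatial frequencies dominate, as in atmospheric turbulence.
def kolmogorov_screen(n=16, modes=40, r0=0.1, seed=7):
    rng = random.Random(seed)
    waves = []
    for _ in range(modes):
        fx, fy = rng.uniform(-3, 3), rng.uniform(-3, 3)
        f = math.hypot(fx, fy)
        if f < 0.3:
            continue                              # skip near-piston modes
        amp = r0 ** (-5 / 6) * f ** (-11 / 6)     # Kolmogorov-like weighting
        waves.append((fx, fy, amp, rng.uniform(0, 2 * math.pi)))
    return [[sum(a * math.cos(2 * math.pi * (fx * x + fy * y) / n + ph)
                 for fx, fy, a, ph in waves) for y in range(n)]
            for x in range(n)]

scr = kolmogorov_screen()
print(len(scr), len(scr[0]))   # 16 16
```

With a fixed seed the screen is reproducible, which is convenient when the same pattern must be averaged over or re-displayed.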

Figure 18:

Example of a phase screen generated for 2rLO/r0 = 0.2.

00338_psisdg10296_1029606_page_24_1.jpg

Figure 19:

Heterodyne efficiency as a function of the strength of turbulence.

00338_psisdg10296_1029606_page_24_2.jpg

7

CONCLUSION

Our purpose has been to illustrate, with four different applications, that the availability of a display device allowing 2D control over the phase of a wavefront opens new possibilities for optical systems in terms of programmability and added functionality. The phase display device we have characterized and used is an inexpensive liquid-crystal television diverted from its original purpose of video image display. With such devices, quasi-phase or phase-mostly modulation can be achieved independently on a large number of pixels, providing surprisingly good experimental results that closely follow theoretical predictions.

ACKNOWLEDGMENTS

We acknowledge fruitful discussions during this research with Philippe Réfrégier (Ecole nationale Supérieure de Physique de Marseille, Marseille, France), Daniel Dolfi, Cécile Joubert and Brigitte Loiseaux (Thomson-CSF, Laboratoire Central de Recherches).

8

REFERENCES

[1] 

M. Born and E. Wolf, Principles of Optics, Pergamon Press, New York (1980). Google Scholar

[2] 

J. W. Goodman, Introduction to Fourier Optics, McGraw-Hill, San Francisco, Calif. (1968). Google Scholar

[3] 

Y. Takaki, “Electro-optical implementation of learning architecture to control point spread function of liquid crystal active lens,” Optical implementation of information processing, Proc. Soc. Photo-opt. Instrum. Eng., 2565 205 –214 (1995). Google Scholar

[4] 

Y. Takaki and H. Ohzu, “Liquid-crystal active lens: a reconfigurable lens employing a phase modulator,” Opt. Commun., 126 123 –134 (1996). Google Scholar

[5] 

Y. Takaki and H. Ohzu, “Reconfigurable lens with an electro-optical learning system,” Appl. Opt., 35 6896 –6908 (1996). Google Scholar

[6] 

J. Amako, H. Miura, and T. Sonehara, “Wave-front control using liquid-crystal devices,” Appl. Opt., 32 4323 –4329 (1993). Google Scholar

[7] 

J. Chen, G. Lai, K. Ishizuka, and A. Tonomura, “Method of compensating for aberrations in electron holography by using a liquid-crystal spatial-light modulator,” Appl. Opt., 33 1187 –1193 (1994). Google Scholar

[8] 

R. Piestun and J. Shamir, “Control of wave-front propagation with diffractive elements,” Opt. Lett., 19 771 –773 (1994). Google Scholar

[9] 

L. Gongalves Neto, D. Roberge, and Y. Sheng, “Programmable optical phase-mostly holograms with coupled-mode modulation liquid-crystal television,” Appl. Opt., 34 1944 –1950 (1995). Google Scholar

[10] 

A. Bergeron, J. Gauvin, F. Gagnon, D. Gingras, H. H. Arsenault, and M. Doucet, “Phase calibration and applications of a liquid crystal spatial light modulator,” Appl. Opt., 34 5133 –5139 (1995). Google Scholar

[11] 

W. Klaus, N. Hashimoto, K. Kodate, and T. Kamiya, “Fast and accurate phase retardation measurement of 0° twisted nematic liquid crystal panels based on the principle of homodyne receiving,” Optical Review, 1 7 –11 (1994). Google Scholar

[12] 

V. Laude, S. Mazé, P. Chavel, and Ph. Réfrégier, “Amplitude and phase coding measurements of a liquid crystal television,” Opt. Commun., 103 33 –38 (1993). Google Scholar

[13] 

K. Lu and B. E. A. Saleh, “Theory and design of the liquid crystal TV as an optical spatial phase modulator,” Opt. Eng., 29 240 –246 (1990). Google Scholar

[14] 

V. Laude, J.-P. Huignard, M. Defour, and Ph. Réfrégier, “Optical image processing with the liquid crystal active lens,” Proc. SPIE, 3101 139 –145 (1997). Google Scholar

[15] 

V. Laude, “Twisted-nematic liquid-crystal pixelated active lens,” Opt. Commun., 153 134 –152 (1998). Google Scholar

[16] 

V. Laude and C. Dirson, “Liquid-crystal active lens: application to image resolution enhancement,” Opt. Commun., 163 72 –78 (1999). Google Scholar

[17] 

D. Delautre, S. Breugnot, and V. Laude, “Measurement of the sensitivity of heterodyne detection to aberrations using a programmable liquid-crystal modulator,” Opt. Commun., 160 61 –65 (1999). Google Scholar

[18] 

S. Mazé, P. Joffre, and Ph. Réfrégier, “Influence of input information coding for correlation operations,” Optics for Computers: Architecture and Technology, Proc. Soc. Photo-Opt. Instrum. Eng., 1505 20 –31 (1992). Google Scholar

[19] 

J. C. Kirsch, D. A. Gregory, M. A. Thie, and B. K. Jones, “Modulation characteristics of the Epson liquid crystal television,” Opt. Eng., 31 963 –970 (1992). Google Scholar

[20] 

C. Soutar, S. E. Monroe, and J. Knopp, “Complex characterization of the Epson liquid crystal television,” in Optical Pattern Recognition IV, 269 –277 (1993). Google Scholar

[21] 

M. Yamauchi and T. Eiju, “Optimization of twisted nematic liquid crystal panels for spatial light phase modulation,” Opt. Commun., 115 19 –25 (1995). Google Scholar

[22] 

Z. Zhang, G. Lu, and F. T. S. Yu, “Simple method for measuring phase modulation in liquid crystal televisions,” Opt. Eng., 33 3018 –3022 (1994). Google Scholar

[23] 

W. J. Dallas, “Phase quantization – a compact derivation,” Appl. Opt., 10 673 (1971). Google Scholar

[24] 

E. Carcolé, J. Campos, and I. Juvells, “Phase quantization effects on fresnel lenses encoded in low resolution devices,” Opt. Commun., 132 35 –40 (1996). Google Scholar

[25] 

I. Moreno, J. Campos, C. Gorecki, and M. J. Yzuel, “Effects of amplitude and phase mismatching errors in the generation of a kinoform for pattern recognition,” Jpn. J. Appl. Phys., 34 6423 –6432 (1995). Google Scholar

[26] 

P. F. McManamon, E. A. Watson, T. A. Dorschner, and L. J. Barnes, “Applications look at the use of liquid crystal writable gratings for steering passive radiation,” Opt. Eng., 32 2657 –2664 (1993). Google Scholar

[27] 

R. J. Broessel, V. Dominic, and R. C. Hardie, “Image restoration of dispersion-degraded image from a liquid-crystal beam steerer,” Opt. Eng., 34 3138 –3145 (1995). Google Scholar

[28] 

E. C. Tam, S. Zhou, and M. R. Feldman, “Spatial-light-modulator-based electro-optical imaging system,” Appl. Opt., 31 578 –580 (1992). Google Scholar

[29] 

E. C. Tam, “Smart electro-optical zoom lens,” Opt. Lett., 17 369 –371 (1992). Google Scholar

[30] 

J. A. Davis and H. M. Schley-Seebold, “Anamorphic optical systems using programmable spatial light modulators,” Appl. Opt., 31 6185 –6186 (1992). Google Scholar

[31] 

E. Carcolé, J. Campos, and S. Bosch, “Diffraction theory of fresnel lenses encoded in low-resolution devices,” Appl. Opt., 33 162 –174 (1994). Google Scholar

[32] 

R. Silvennoinen, J. Uozumi, and T. Asakura, “Simulation of synthetic Fresnel hologram with pixel phase error function,” J. Optics (Paris), 27 71 –79 (1996). Google Scholar

[33] 

R. K. Tyson, Principles of adaptive optics, Academic Press, San Diego (1991). Google Scholar

[34] 

G. D. Love, J. V. Major, and A. Purvis, “Liquid-crystal prisms for tip-tilt adaptive optics,” Opt. Lett., 19 1170 –1172 (1994). Google Scholar

[35] 

R. Dou and M. K. Giles, “Closed-loop adaptive-optics system with a liquid-crystal television as a phase retarder,” Opt. Lett., 20 1583 –1585 (1995). Google Scholar

[36] 

G. D. Love, “Wave-front correction and production of Zernike modes with a liquid-crystal spatial light modulator,” Appl. Opt., 36 1517 –1524 (1997). Google Scholar

[37] 

S. Peleg, D. Keren, and L. Schweitzer, “Improving image resolution using subpixel motion,” Pattern Recognition Letters, 5 223 –226 (1987). Google Scholar

[38] 

G. Jacquemod, C. Odet, and R. Goutte, “Image resolution enhancement using subpixel camera displacement,” Signal Processing, 26 139 –146 (1992). Google Scholar

[39] 

T. Nummonda, M. Andrews, and R. Kakarala, “High resolution image reconstruction by simulated annealing,” Opt. Commun., 108 24 –30 (1994). Google Scholar

[40] 

R. R. Schultz and R. L. Stevenson, “Extraction of high-resolution frames from video sequences,” IEEE Transactions on Image Processing, 5 996 –1011 (1996). Google Scholar

[41] 

D. L. Fried, “Optical heterodyne detection of an atmospherically distorted signal wave-front,” in Proc. IEEE, 57 –67 (1967). Google Scholar

[42] 

R. Kingston, Detection of optical and infrared radiation, Springer, Berlin (1978). Google Scholar

[43] 

S. Cohen, “Heterodyne detection: phase front alignment, beam spot size, and detector uniformity,” Appl. Opt., 14 1953 –1959 (1975). Google Scholar

[44] 

V. Tatarski, Wave propagation in turbulent media, Dover Publications, New-York (1961). Google Scholar

[45] 

K. Gamble and A. Weeks, “A computer simulation of a multiple aperture coherent laser radar,” Laser radar technology and applications I, Proc. Soc. Photo-Opt. Instrum. Eng., 2748 220 –231 (1996). Google Scholar
© (1999) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Vincent Laude, Carine Dirson, Dominique Delautre, Sebastien Breugnot, and Jean-Pierre Huignard "Applications of a liquid crystal television used as an arbitrary quasi-phase modulator", Proc. SPIE 10296, 1999 Euro-American Workshop Optoelectronic Information Processing: A Critical Review, 1029606 (2 June 1999); https://doi.org/10.1117/12.365916
KEYWORDS: Liquid crystals, Televisions, Modulators, Active optics, Image enhancement, Image processing, Image resolution
