## 1.

## Introduction

The measurement of ocular aberrations has been a topic of sustained interest over the last few decades. Customized aberration correction by refractive surgery or static optical elements, high-resolution imaging of the eye fundus, and dynamic and population statistical studies have benefited from the use of objective aberrometers based on Hartmann-Shack or laser ray tracing wavefront sensors.^{1, 2, 3, 4, 5}

During aberrometric measurements, the subject is generally instructed to fixate on a suitable target to keep the eye pupil at rest with respect to the aberrometer reference frame. However, involuntary fixational eye movements cause the eye to follow an erratic translational and torsional trajectory. The translational component, related to horizontal and vertical displacements, presents three main contributions:^{6, 7} drifts, a slow component with amplitude of 0.02 to $0.15\,\mathrm{deg}$; fast microsaccades, with $25\,\mathrm{ms}$ duration, amplitude of 0.22 to $1.11\,\mathrm{deg}$, and frequency of 0.1 to $0.5\,\mathrm{Hz}$; and tremors, with very low amplitude (0.001 to $0.008\,\mathrm{deg}$) but very high temporal frequency (50 to $100\,\mathrm{Hz}$). The torsional component describes the rotation of the eye around the line of gaze. In angular terms, the magnitude of these torsional movements is larger than that of the translational ones: the work of van Rijn, van der Steen, and Collewijn established a range of $\pm 0.5\,\mathrm{deg}$,^{8} while other studies obtained smaller values of around $\pm 0.18\,\mathrm{deg}$^{9} and $\pm 0.27\,\mathrm{deg}$.^{10}

The precise measurement and characterization of eye movements are a relevant task in areas of research such as visual optics, ophthalmology, and aberrometry.^{11, 12, 13, 14} There are several techniques developed to achieve this goal, such as those based on Purkinje images, corneal reflections, pupil images, retinal structures, and more recently, wavefront aberration data.^{12, 13, 14, 15}

In the field of refractive surgery, most eye trackers are based on the analysis of video images of the pupil, of the corneal reflex, or of both. These trackers typically present accuracies of the order of 0.25 to $0.5\,\mathrm{deg}$, or equivalently 50 to $100\,\mu\mathrm{m}$ of lateral displacement of the pupil.^{16, 17, 18}

Currently it is widely accepted that refractive surgery systems benefit from including an efficient eye tracking system to measure and compensate for involuntary ocular movements. The use of eye trackers in eye aberrometry, however, is not as widespread, even though fixational ocular movements, and displacements of the eye due to repositioning of the patient, are also present there, reducing the repeatability of the measurements and increasing their uncertainty.^{19, 20, 21, 22, 23}

Although fixational eye movements cause both lateral displacements and cyclotorsions of the eye with respect to the wavefront sensor reference frame (WSRF), the changes they induce in the measured aberration are nearly entirely displacement-induced.^{24, 25} Therefore, we focus on the effect of translational movements. If the eye displacements are known, the aberration coefficients measured in the aberrometer reference frame can be transformed to the eye pupil reference frame (EPRF) by using a transformation matrix straightforwardly computed by different procedures.^{20, 21, 22} A better estimation of the ocular aberration can thus be achieved, with higher accuracy and reduced uncertainty, leading to a potential improvement in the correction of ocular aberrations through refractive surgery or customized phase plates, and also to a better knowledge of their actual statistical properties.

Most aberrometers found on the market or in research laboratories are based on Hartmann-Shack (HS) wavefront sensors. These devices estimate the eye aberration from the displacements of the centroids of the focal spots of the microlens array. These centroids are determined by processing the raw images of the focal plane of the array taken by a charge-coupled device (CCD) detector. The use of these raw aberrometric images (AI) to get a coarse estimation of the pupil position and movement has been previously reported in the literature. In one example of application, the position is estimated with respect to a mask of concentric circles placed over the computer monitor.^{23} Other works proposed the use of the “centroid of the centroids” of the HS focal spots.^{13} The precision of the first approach is limited by the fact that the pupil rim, imaged onto the microlens array, gives rise to a blurred defocused border at the common focal plane of the microlenses, making it difficult to locate accurately. With the second approach, erroneous readings can be obtained either if the pupil image is not fully contained within the microlens array borders, or if it moves less than the separation between two sampling elements, because (due to the unit weight assigned to each focal spot centroid) the overall centroid would remain constant despite the ocular movement.

Estimating the pupil position and displacements from aberrometric images provides some practical advantages: it avoids the need for a synchronization unit and for a conventional eye tracker with its additional optical path, thus simplifying the whole setup. In this work we demonstrate theoretically and experimentally the possibility of using the aberrometric images to track the eye pupil while measuring the ocular wavefront aberration, without some of the limitations of previous methods. Our pupil tracking proposal is based on estimating the eye movements by measuring the displacements of the overall centroid of the whole aberrometric image (see Fig. 1, left). The approach described here does not require locating the defocused pupil border at the common focal plane of the microlens array, nor is it affected by the eye pupil falling partly outside the microlens array region.
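The core of this tracking scheme reduces to an intensity-weighted centroid computed over the thresholded aberrometric image. A minimal sketch in Python follows; the function names and the simple fixed-threshold segmentation are illustrative assumptions, not the exact implementation used in this work:

```python
import numpy as np

def overall_centroid(image, threshold):
    """Intensity-weighted centroid (x, y) of the whole aberrometric image.

    Pixels at or below `threshold` are treated as background and zeroed,
    so only the Hartmann-Shack spot pattern contributes.
    """
    img = np.where(image > threshold, image - threshold, 0.0).astype(float)
    total = img.sum()
    ys, xs = np.indices(img.shape)          # row (y) and column (x) indices
    return np.array([(xs * img).sum() / total, (ys * img).sum() / total])

def pupil_displacement(frame, reference_centroid, threshold, pixel_pitch):
    """Estimated lateral pupil displacement, in the units of pixel_pitch."""
    return (overall_centroid(frame, threshold) - reference_centroid) * pixel_pitch
```

Here the reference centroid would be taken from the first frame of a sequence; each subsequent frame then yields a displacement directly in physical units through the camera pixel pitch.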

In Sec. 2 we introduce the theoretical background. In Sec. 3 we show the experimental setup and apparatus calibration procedure. Section 4 is devoted to the results obtained with human eyes. In the last section, we present the discussion and conclusions.

## 2.

## Theoretical Background

Let

## Eq. 1

$${u}_{e}\left(\mathbf{r}\right)={a}_{e}\left(\mathbf{r}\right)\mathrm{exp}\left\{ik{W}_{e}\left(\mathbf{r}\right)\right\},$$

be the optical field at the exit pupil of the eye, where ${a}_{e}\left(\mathbf{r}\right)$ is its amplitude and ${W}_{e}\left(\mathbf{r}\right)$ its wave aberration. In a Hartmann-Shack setup, this field is imaged onto a microlens array composed of $N$ convergent microlenses with focal distance $f$, whose complex transmittance can be written as:

## Eq. 2

$$t\left(\mathbf{r}\right)=\mathrm{exp}\left\{ik{W}_{a}\left(\mathbf{r}\right)\right\}=\mathrm{exp}\{-i\frac{k}{2f}\sum _{s=1}^{N}{P}_{s}\left(\mathbf{r}\right){(\mathbf{r}-{\mathbf{r}}_{s})}^{2}\},$$

where ${P}_{s}\left(\mathbf{r}\right)$ is the pupil function of the $s$'th microlens, centered at ${\mathbf{r}}_{s}$, and ${W}_{a}\left(\mathbf{r}\right)$ is the phase introduced by the array. The field at the exit plane of the array, assuming for the sake of simplicity that the eye pupil is imaged onto it with unit magnification, is

## Eq. 3

$$u\left(\mathbf{r}\right)={u}_{e}\left(\mathbf{r}\right)t\left(\mathbf{r}\right),$$

and the irradiance is given by

## Eq. 4

$$I\left(\mathbf{r}\right)={|u\left(\mathbf{r}\right)|}^{2}={a}_{e}^{2}\left(\mathbf{r}\right),$$

thus reproducing the irradiance at the eye exit pupil (Fig. 1, right). Note that as long as this irradiance distribution does not noticeably change with time inside the eye pupil, the displacements of its centroid are equal to the displacements of the geometrical center of the pupil. This allows estimating the eye movements using a suitable dedicated image channel in the Hartmann-Shack setup.

We are interested, however, in avoiding the need of this additional imaging channel. After propagating a distance $z$ (usually, but not necessarily, equal to the focal distance of the microlenses), the field distribution $u\left(\mathbf{r}\right)$ gives rise to the well-known Hartmann-Shack pattern, here referred to as the aberrometric image (AI) (Fig. 1, left). Postprocessing this image, the displacements of the centroids of the focal spots with respect to their reference positions can be determined, and the wave aberration can be estimated using any of the available approaches for wavefront reconstruction. To use this same image to assess the eye pupil displacements, let us see how its overall centroid (taking the AI as a whole) relates to the centroid of the eye pupil irradiance distribution [Eq. 4].

It is well known that, under the Fresnel approximation for homogeneous media, the centroid of any light beam propagates between any two planes separated by a distance $z$ along a straight line, whose slope is proportional to the irradiance-weighted aberration gradient of the field at the initial plane,^{26, 27} according to the expression:

## Eq. 5

$$\mathbf{\rho}\left(z\right)=\mathbf{\rho}\left(0\right)+z\int {I}_{N}\left(\mathbf{r}\right)\nabla W\left(\mathbf{r}\right){\mathrm{d}}^{2}\mathbf{r},$$

where $\mathbf{\rho}\left(z\right)$ is the centroid position at the plane $z$, ${I}_{N}\left(\mathbf{r}\right)$ is the normalized irradiance, and $W\left(\mathbf{r}\right)$ is the wave aberration at the initial plane. Equation 5 can be directly applied to the propagation of the overall irradiance centroid from the microlens array $(z=0)$ to the detection plane (at a distance $z$), where the aberrometric image is formed. In this case, ${I}_{N}\left(\mathbf{r}\right)$ is obtained by normalizing the irradiance given by Eq. 4, and $W\left(\mathbf{r}\right)={W}_{e}\left(\mathbf{r}\right)+{W}_{a}\left(\mathbf{r}\right)$. Let us now consider that the eye pupil moves from its initial position to a new one, transversally displaced by a vector $\mathbf{d}$. The new centroid positions ${\mathbf{\rho}}^{\prime}\left(z\right)$ and ${\mathbf{\rho}}^{\prime}\left(0\right)$ obey the expression:

## Eq. 6

$${\mathbf{\rho}}^{\prime}\left(z\right)={\mathbf{\rho}}^{\prime}\left(0\right)+z\int {I}_{N}(\mathbf{r}-\mathbf{d})\nabla [{W}_{e}^{\prime}(\mathbf{r}-\mathbf{d})+{W}_{a}\left(\mathbf{r}\right)]{\mathrm{d}}^{2}\mathbf{r},$$

where ${W}_{e}^{\prime}$ is the eye aberration at the new instant. Since the centroid at the initial plane moves rigidly with the pupil, ${\mathbf{\rho}}^{\prime}\left(0\right)=\mathbf{\rho}\left(0\right)+\mathbf{d}$, and subtracting Eq. 5 from Eq. 6 yields:

## Eq. 7

$${\mathbf{\rho}}^{\prime}\left(z\right)-\mathbf{\rho}\left(z\right)=\mathbf{d}+z[\int {I}_{N}(\mathbf{r}-\mathbf{d})\nabla {W}_{e}^{\prime}(\mathbf{r}-\mathbf{d}){\mathrm{d}}^{2}\mathbf{r}-\int {I}_{N}\left(\mathbf{r}\right)\nabla {W}_{e}\left(\mathbf{r}\right){\mathrm{d}}^{2}\mathbf{r}]+z\int [{I}_{N}(\mathbf{r}-\mathbf{d})-{I}_{N}\left(\mathbf{r}\right)]\nabla {W}_{a}\left(\mathbf{r}\right){\mathrm{d}}^{2}\mathbf{r},$$

The second term on the right-hand side of Eq. 7 is the difference between the spatial averages across the eye pupil, taken at two different times, of the irradiance-weighted eye aberration slopes. If the irradiance distribution is spatially uniform (or rotationally symmetric, or even spatially random with a very short correlation length), all aberration terms with even powers make no net contribution to each of these integrals. This includes defocus due to accommodation fluctuations, a leading cause of temporal variation of the eye aberration. Odd-power terms will contribute to the integrals, but their net effect in Eq. 7 will depend on the differential eye aberration taken at two different times. To evaluate the order of magnitude of this effect, we used a temporal series of experimentally measured eye aberration coefficients, computing the resulting value of the $\int {I}_{N}\left(\mathbf{r}\right)\nabla {W}_{e}\left(\mathbf{r}\right){\mathrm{d}}^{2}\mathbf{r}$ term, where ${I}_{N}\left(\mathbf{r}\right)$ was modeled by the Stiles-Crawford expression proposed by Applegate and Lakshminarayanan.^{28} We found that for this series, the mean value of the bias term was of the order of ${10}^{-11}\,\mathrm{m}$ and its rms fluctuation was $0.85\,\mathrm{nm}$, much smaller than the precision usually required for pupil tracking in eye aberrometry.

On the other hand, the last integral in Eq. 7 is the difference between the irradiance-weighted averages of the phase slope of the microlens array with the eye pupil located at two different positions. If the eye pupil irradiance is spatially uniform and the pupil contains the whole active area of the microlens array, this integral cancels out. If the pupil is wholly contained within the active area of the microlens array, or is partly outside it, the value of this integral is expected to be small, given that the contribution of each microlens not vignetted by the eye pupil will be zero. This strictly holds, as stated before, as long as the irradiance across the eye pupil is constant, and it is independent of any manufacturing aberrations that the microlenses may present. Small deviations from this nominal behavior can be expected if the irradiance varies slowly across the pupil.

To estimate the magnitude of this last integral in Eq. 7, we evaluated it numerically for a typical aberrometric situation. In that calculation we again modeled the eye pupil irradiance distribution with the Stiles-Crawford expression proposed in Ref. 28. We considered a pupil diameter of $6.5\,\mathrm{mm}$, a HS sensor with 89 square microlenses of $564\,\mu\mathrm{m}$ per side and $5.18\text{-}\mathrm{cm}$ focal length, and a fill factor of 100%. The integral result was 0.03 and $0.54\,\mu\mathrm{m}$ for pupil displacements of 50 and $850\,\mu\mathrm{m}$, respectively. Thus the relative value of the bias induced by this term of Eq. 7, with respect to the magnitude of the pupil displacement, is of the order of $6\times {10}^{-4}$.
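The structure of such a numerical evaluation can be sketched as follows. This is a simplified reconstruction, not the original calculation: the $10^{-\rho r^{2}}$ Stiles-Crawford apodization constant ($\rho = 0.05\,\mathrm{mm}^{-2}$), the lenslet geometry, and the grid resolution are our own illustrative assumptions.

```python
import numpy as np

f = 5.18e-2        # microlens focal length [m]
pitch = 564e-6     # microlens side [m]
R = 6.5e-3 / 2     # pupil radius [m]

n = 512                                   # sampling grid over the pupil region
x = np.linspace(-R, R, n)
X, Y = np.meshgrid(x, x)
dA = (x[1] - x[0]) ** 2                   # area element [m^2]

def irradiance(d=(0.0, 0.0)):
    """Stiles-Crawford-like apodization displaced by d, normalized to unit power."""
    r2 = (X - d[0]) ** 2 + (Y - d[1]) ** 2
    I = 10.0 ** (-0.05 * r2 * 1e6)        # rho = 0.05 mm^-2 (assumed value)
    I[r2 > R ** 2] = 0.0                  # hard pupil edge
    return I / (I.sum() * dA)

def grad_Wa():
    """Gradient of the array phase W_a: -(r - r_s)/f inside each lenslet."""
    xs = np.round(X / pitch) * pitch      # center of the lenslet holding each point
    ys = np.round(Y / pitch) * pitch
    return -(X - xs) / f, -(Y - ys) / f

def centroid_bias(d):
    """Last term of Eq. 7, evaluated at z = f, for a pupil displacement d."""
    gx, gy = grad_Wa()
    dI = irradiance(d) - irradiance()
    return f * np.array([(dI * gx).sum() * dA, (dI * gy).sum() * dA])
```

The interior lenslets contribute almost nothing (the phase gradient averages to zero over each fully illuminated lenslet), so the residual bias comes from the partially vignetted lenslets at the pupil rim and stays small compared with the displacement itself.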

## 3.

## Experimental Setup and Calibration

The experimental setup for testing this approach consists of a HS aberrometer that incorporates an additional channel for monitoring the pupil. The sensor and monitoring channels form the images of the microlens array focal spots and of the eye pupil onto the same camera chip (Orca 285, Hamamatsu Photonics, Iwata City, Japan). Figure 2 shows a drawing of the system setup. This arrangement allows for the acquisition of both images with the same camera, reducing the cost of the system and providing the necessary synchronization between both channels. The pupil channel was included to use the pupil image (PI) as a control: the pupil displacement measured from the centroid of the eye pupil image was used as the reference against which we compared the movement estimated from the AI (the image of the pupil observed through the microlens array). Figure 1 shows a typical image obtained with the camera (AI on the left and PI on the right).

The microlens array used in our experiments was manufactured in our laboratory. It consists of 89 microlenses of $564\,\mu\mathrm{m}$ per side and $5.18\text{-}\mathrm{cm}$ focal length, arranged in a square lattice. We illuminated the human eyes with a low-coherence laser pointer with a central wavelength of $633\,\mathrm{nm}$ and a spectral bandwidth of $50\,\mathrm{nm}$.

Thresholding of the camera images was performed to separate the background from the aberrometric image and the pupil image. The threshold level was obtained by trial and error, based on the visual stability of the shape of the thresholded AI. Figure 3 shows the thresholded aberrometric images of three consecutive pupil positions. The movement of the pupil was estimated from the displacements of the centroids of the AI and PI with respect to the initial position.

An artificial eye was used to perform the calibration. It consisted of a green Luxeon Star (Brantford, Ontario) LED (LXHL-MM1D), with an emission spectrum centered at $530\,\mathrm{nm}$ with $35\text{-}\mathrm{nm}$ bandwidth, and a $100\times$ microscope objective that formed the image of the LED at the object focal plane of a $5\text{-}\mathrm{cm}$ focal length lens. The artificial eye pupil was generated with a circular diaphragm.

The artificial eye was mounted on a precision linear stage for the calibration. The calibration procedure was simple: we moved the artificial eye transversally across $1\,\mathrm{mm}$ in steps of $50\,\mu\mathrm{m}$ and registered, with the system’s camera at each step, both the aberrometric and pupil images. As stated before, each registered image was thresholded, and the centroids of the AI and PI were computed to estimate the movement of the artificial eye with respect to its initial position. A first-degree polynomial fit of the estimated displacements to the displacement of the precision linear stage was performed, and the results of the fit were used to correct the displacements estimated from the aberrometric and pupil images. In the case of the pupil image, the root mean square error (rms) after calibration was $2.45\,\mu\mathrm{m}$, with a maximum error of $5.00\,\mu\mathrm{m}$ and a regression coefficient of 0.99. In the case of the aberrometric image, we obtained an rms of $10.45\,\mu\mathrm{m}$, with a maximum error of $20.00\,\mu\mathrm{m}$ and a regression coefficient of 0.99. After calibration, we plotted the centroids obtained from the PI against those obtained from the AI (see Fig. 4), obtaining a regression coefficient of 0.99. It is clear from these results that the movements of the artificial eye can be estimated using the aberrometric image.
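The calibration step amounts to a first-degree polynomial regression of the known stage positions on the raw centroid-based estimates. A sketch follows, with hypothetical synthetic data (a small gain error, offset, and noise) standing in for the measured centroid displacements:

```python
import numpy as np

# Known stage displacements: 0 to 1 mm in 50-um steps.
stage = np.arange(0.0, 1000.0 + 50.0, 50.0)            # [um]

# `raw` would come from the thresholded-image centroids; here a hypothetical
# measurement with gain error, offset, and noise stands in for it.
rng = np.random.default_rng(0)
raw = 0.97 * stage + 3.0 + rng.normal(0.0, 2.0, stage.size)

# Linear calibration: fit stage = gain * raw + offset.
gain, offset = np.polyfit(raw, stage, 1)
corrected = gain * raw + offset

# Residual rms error after calibration.
rms_error = np.sqrt(np.mean((corrected - stage) ** 2))
```

The same fit is applied independently to the AI- and PI-derived displacements, and the residual rms and maximum error quantify the quality of each channel.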

## 4.

## Results with Human Eyes

We have shown in the preceding section that the procedure designed for estimating the movement of the pupil from the AI works well with artificial eyes. Let us now test the proposal with human eyes. In this analysis the pupil movement estimated from the PI was used as the control, against which we evaluated the performance of the pupil tracking based on the aberrometric image. We present here the results of three sequences obtained with two different eyes. Table 1 shows the mean error, the root mean square error, and the signal-to-noise ratio (SNR) of the AI, computed as the mean value of the difference between the plateau of the AI and the background level, divided by the rms of the intensity of the plateau.
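This SNR figure can be computed directly from a segmented frame. A minimal sketch, where the plateau/background split by a fixed threshold and the use of the plateau standard deviation as its rms fluctuation are our simplifications:

```python
import numpy as np

def ai_snr(image, threshold):
    """SNR of the aberrometric image as defined in the text: mean height of
    the spot plateau above the background level, divided by the rms
    fluctuation (here, standard deviation) of the plateau intensity."""
    plateau = image[image > threshold]
    background = image[image <= threshold]
    return (plateau.mean() - background.mean()) / plateau.std()
```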

## Table 1

Mean bias and rms of the eye-tracking procedure as a function of the SNR of the AI.

| | Sequence 1 | Sequence 2 | Sequence 3 |
|---|---|---|---|
| Mean error $\left(\mu\mathrm{m}\right)$ | 57 | 2 | 10 |
| rms $\left(\mu\mathrm{m}\right)$ | 27 | 10 | 6 |
| SNR | 6 | 8 | 11 |

Table 1 shows the high accuracy and precision achieved, similar to those of commercially available eye trackers. The mean error was computed as the mean of the differences obtained, for every position, between the centers estimated from the centroids of the AI and the PI; the rms is the root mean square value of those differences. Table 1 also shows that, as with all video-based eye trackers, the performance depends strongly on the signal-to-noise ratio of the AI. Figure 5 shows one example application of the pupil tracking procedure: the trajectory of one eye during a long-term measurement of ocular aberrations (circles for the positions estimated from the AI, and triangles for the PI).

Finally, we used some of our aberrometric sequences to estimate the ocular wavefront, including the information of the pupil position obtained with the proposed pupil tracking approach. We used 37 microlenses and estimated 20 Zernike polynomials (excluding piston) over a pupil diameter of $4\,\mathrm{mm}$. Table 2 shows the results of this analysis. To compare the estimation of the wavefront with respect to the EPRF and the WSRF, we computed the mean (⟨rms⟩) and standard deviation $\left({\sigma}_{\mathrm{rms}}\right)$ of the modulus of the modal coefficient vector (neglecting the tilt components) estimated with respect to both frames. The average was taken over the different measurements included in each aberrometric sequence. The related ocular trajectories are presented in Fig. 6.
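The per-sequence statistics ⟨rms⟩ and σ_rms amount to the mean and standard deviation of the per-frame wavefront rms computed from the modal coefficients. A sketch, assuming an orthonormal Zernike basis ordered with the two tilt terms first (the ordering convention is our assumption):

```python
import numpy as np

def sequence_rms_stats(coeff_sequence):
    """Mean and standard deviation, over a measurement sequence, of the
    wavefront rms computed from Zernike coefficient vectors.

    coeff_sequence: array of shape (n_frames, n_coeffs), with the two tilt
    terms first; tilts are excluded, as in the text.
    """
    no_tilt = coeff_sequence[:, 2:]              # drop the tilt components
    rms = np.linalg.norm(no_tilt, axis=1)        # per-frame wavefront rms
    return rms.mean(), rms.std()
```

Running this on the coefficients expressed in the EPRF and in the WSRF gives the pairs of values compared in Table 2.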

## Table 2

Mean and standard deviation of the wavefront rms for the measurement sequences of three different subjects.

| | | ⟨rms⟩ (μm) | $\sigma_{\mathrm{rms}}$ (μm) |
|---|---|---|---|
| Subject 1 | (a) | $\mathrm{EPRF}=1.632$ | 0.056 |
| | | $\mathrm{WSRF}=1.398$ | 0.157 |
| | (b) | $\mathrm{EPRF}=1.674$ | 0.029 |
| | | $\mathrm{WSRF}=1.828$ | 0.091 |
| Subject 2 | | $\mathrm{EPRF}=0.629$ | 0.041 |
| | | $\mathrm{WSRF}=0.656$ | 0.046 |
| Subject 3 | | $\mathrm{EPRF}=1.487$ | 0.045 |
| | | $\mathrm{WSRF}=1.493$ | 0.051 |

Table 2 shows the differences between the mean value and standard deviation of the ocular wavefront rms measured with respect to the EPRF and the WSRF. For the particular sequences analyzed, the wavefront standard deviation is lower in the EPRF. Additionally, as can be observed in the table, the magnitude of the difference between both reference frames depends strongly on the ocular trajectory and the ocular aberration, which agrees with the conclusions presented in Refs. 16, 19.

## 5.

## Discussion and Conclusions

Ocular movements during steady-state fixation have been identified as one of the main sources of variability of measured ocular aberrations. Several authors have proposed different methods to estimate the position of the pupil before and during wavefront measurement.^{13, 23} We propose a different approach, based on the measurement of the centroid of the aberrometric image. Estimating the pupil position and displacements from aberrometric images provides some practical advantages: it avoids the need for a synchronization unit and for a conventional eye tracker with its additional optical path, thus simplifying the whole setup.

The proposed method is designed for measuring lateral pupil displacement, and it is not capable of measuring torsional movements. However, although the magnitude of torsional movements is (in angular terms) larger than that of the translational ones, their influence on the wavefront estimation is significantly smaller.^{24, 25} We also want to clarify, for readers interested in eye tracking applications, that the proposed method does not distinguish between eye translation due to head movements and rotations of the eye; it only measures the pupil displacement from a reference position, independently of the origin of the movement.

Additionally, we point out that the presented method might be biased if there is asymmetrical pupil dilation between frames. In those cases, the asymmetrical change of shape would shift the centroid, adding a bias to the estimated pupil displacement. In its current form our method does not detect the change of shape, and therefore no correction can be applied. However, a more sophisticated aberrometric image processing algorithm could extract additional information on the pupil shape from the pupil’s Fresnel shadow formed at the HS detection plane, and could therefore detect asymmetrical pupil dilation and decide how to handle those frames.

In Sec. 3 we described the use of thresholding to segment the aberrometric image prior to the centroid computation, distinguishing the pixels that belong to the aberrometric image from those of the background. In this work the threshold level was obtained by trial and error, based on the visual stability of the shape of the thresholded AI. However, different thresholding criteria could be used to increase the performance of the pupil tracking system.^{29} In this sense, the use of more elaborate thresholding or fitting algorithms, which take into account the image histogram or the pixel SNR, would help increase the precision of the pupil tracker.^{30, 31}
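As one example of such automatic criteria, Otsu's histogram-based method (a standard technique, mentioned here for illustration rather than one evaluated in this work) picks the threshold that maximizes the between-class variance of the image histogram:

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Otsu's criterion: the threshold maximizing the between-class variance
    of the image histogram, an automatic alternative to manual
    trial-and-error thresholding."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()              # bin probabilities
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                                # background class weight
    mu = np.cumsum(p * centers)                      # cumulative first moment
    mu_t = mu[-1]                                    # global mean
    w1 = 1.0 - w0                                    # foreground class weight
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]
```

For a strongly bimodal aberrometric frame (spot plateau well above background) such a criterion lands the threshold in the gap between the two intensity clusters without manual tuning.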

Taking into account the presented numerical and experimental analysis, we consider that the proposed method can be used with existing Hartmann-Shack sensors, since no changes in the system setup would be involved. The method could also be applied to existing databases of Hartmann-Shack images to estimate the pupil position and then reprocess the wavefront measurements through available procedures,^{20, 22, 32} obtaining a new set of estimated coefficients free of the error induced by ocular movements. On the other hand, as mentioned before, the performance of the method depends strongly on the SNR of the aberrometric image, as does that of all methods based on centroid computation. Therefore, before any extensive use of the method, further work must be done to optimize the aberrometric image acquisition and processing, so as to reduce this dependence and increase confidence in the method.

We also point out that the results presented in this work are limited by the theoretical assumptions made in the numerical validation (Fresnel propagation of the optical field and temporal constancy of the pupil irradiance distribution), and by the reduced number of eyes used in the experimental analysis.

In conclusion, we propose and demonstrate, theoretically and experimentally (with artificial and human eyes), that Hartmann-Shack aberrometric images can be used not only for wavefront sensing but also to measure the position of the eye pupil during the aberrometric measurement. With our method we obtain a root mean square error of 6 to $27\,\mu\mathrm{m}$, depending on the signal-to-noise ratio of the aberrometric image. Improvements in accuracy and precision are expected in future work from improving the signal-to-noise ratio of the aberrometric image and selecting a better thresholding criterion.

The knowledge of the relative position of the eye pupil with respect to the sensor reference frame allows expressing the estimated wavefront coefficients with respect to the pupil reference frame, and thus obtaining a better estimation of the ocular aberration, with higher accuracy and reduced uncertainty, which leads to a potential improvement in the correction of ocular aberrations through refractive surgery or customized phase plates. Moreover, it would lead to a better knowledge of the dynamics of the ocular aberrations, and to the achievement of statistical models that would contribute to the development of more accurate wavefront sensors.

## Acknowledgments

This work was supported by the Spanish Ministerio de Educación y Ciencia, grant FIS2005-05020-C03-02, Ministerio de Ciencia e Innovación, grant FIS2008-03884, and the European Regional Development Fund. Arines wants to acknowledge financial support from the Isidro Parga Pondal Programme 2009 (Xunta de Galicia, Spain).