Minimizing eyestrain on a liquid crystal display considering gaze direction and visual field of view

Won Oh Lee, Hwan Heo, Eui Chul Lee, and Kang Ryoung Park

Optical Engineering 52(7), 073104 (9 July 2013)
Abstract
Recently, it has become necessary to evaluate the performance of display devices in terms of human factors. To meet this requirement, several studies have been conducted to measure the eyestrain of users watching display devices. However, these studies were limited in that they did not consider precise human visual information. Therefore, a new method is proposed for measuring the eyestrain caused by a liquid crystal display (LCD) that considers a user's gaze direction and visual field of view. Our study is different in the following four ways. First, a user's gaze position is estimated using an eyeglass-type eye-image capturing device. Second, we propose a new eye foveation model based on a wavelet transform, considering the gaze position and the gaze detection error of a user. Third, three video adjustment factors—variance of hue (VH), edge, and motion information—are extracted from the displayed images to which the eye foveation models are applied. Fourth, the relationship between eyestrain and the three video adjustment factors is investigated. Experimental results show that a decrease in the VH value of a display induces a decrease in eyestrain. In addition, increased edge and motion components induce a reduction in eyestrain.

1. Introduction

Currently, various display devices, such as the plasma display panel (PDP), liquid crystal display (LCD), light-emitting diode (LED) display, active-matrix organic light-emitting diode (AMOLED) display, and stereoscopic TV, are being manufactured. The use of these display devices is becoming increasingly widespread, with the devices being rapidly adopted for laptop computers, mobile phones, high-definition TV (HD TV), and so on. Many manufacturers and consumers are interested in the attributes of these display devices, including their field of view, spatial resolution, response speed, and degree of motion blur. In addition to these quantitative characteristics, consumers expect good display capability in terms of human factors.

Researchers have previously measured the eyestrain of users watching display devices.1–8 Some of these studies compared the levels of eyestrain caused by watching LCD and PDP devices based on the change in pupil size, eye blinking, and subjective tests.1–3 Other studies investigated the relationships between the eyestrain caused by an LCD device and video factors such as brightness, contrast, saturation, hue, edge difference, and scene changes.4,5 In addition, the eyestrain caused by a stereoscopic display was examined using subjective measurements, optometric instrument-based measurements, optometric clinical measurements, and brain activity measurements.6,7 In previous research, the eyestrain caused by two- and three-dimensional (2-D and 3-D) displays was compared using the average blinking rate (BR).8 However, most previous studies did not consider human visual information, such as the gaze position and the visual field of view, for estimating eyestrain. For instance, Lee and Park measured eyestrain on the basis of the change in pupil size in relation to changes in four adjustment factors: brightness, contrast, saturation, and hue.5 However, each factor was calculated from the whole image on the display without considering the influence of the human gaze position. Other factors, such as edge difference and scene change, were also calculated from the whole image on the display.4 In other words, these studies were conducted under the assumption that every region of a given image on the display is perceived equally by the viewer. To overcome this problem, a new eye foveation model is proposed here that considers a user's gaze position and the error of gaze detection. Three video adjustment factors—variance of hue (VH), edge, and motion information—are extracted from the successive images on the display to which these eye foveation models are applied.

This article is organized as follows. In Sec. 2, the proposed device for gaze tracking and eye response measurement and the methods of analysis are presented. In Sec. 3, the methods for extracting video features, considering the gaze position and the foveation-based visual field of view, are explained. The experimental setup and results are presented in Sec. 4. Finally, Sec. 5 presents the conclusion of this article and the plans for future work.

2. Proposed Device and Analysis Methods

2.1. Device for Measuring Gaze Position and Eye Response

Figure 1 shows the proposed gaze tracking and eye response measurement device.8,9–11 The eye-capturing camera is attached to an eyeglass frame near the lower part of one eye, as shown in Fig. 1. The camera is a small web camera with a universal serial bus (USB) port that captures images at a speed of 15 frames/s. The spatial resolution of the captured image is 640×480 pixels. A zoom lens is used to capture magnified images of the eye. To screen out visible light, a near-infrared (NIR) passing filter is attached to the camera lens.8,9–11

Fig. 1 Eyeglass-type eye-image capturing device.

Figure 2 shows an example of the experimental setup. Four NIR illuminators of 850 nm each are attached to an LCD display.8,9–11 They do not affect the user's vision because NIR light of 850 nm does not dazzle the user's eye. The four NIR illuminators produce four corneal specular reflections, as shown in Fig. 3, which represent the rectangular area of the display since these illuminators are attached to its four corners.8,9

Fig. 2 Example of experimental setup of four near-infrared (NIR) illuminators attached to the corners of the liquid crystal display (LCD).

Fig. 3 Example of four specular reflections and detection results.

2.2. Gaze Tracking Method

As a user-dependent calibration, each user first gazes at a central position on the display; this is required to compensate for the angle kappa, the angular offset between the visual and pupillary axes.9,11 Using the captured eye image, the pupil center is detected on the basis of circular edge detection, local binarization, component labeling, size filtering, filling of the specular reflection area, and calculation of the geometric center of the remaining black pixels as the pupil center.9–11 Figure 3 shows the four specular reflections of the four NIR illuminators attached to the corners of the LCD screen. These reflections are located by binarization, component labeling, and size filtering.9 The four specular reflections represent the rectangular area of the display. Therefore, on the basis of the detected pupil center and the four specular reflections, the user's gaze position on the display is calculated according to the geometric transform between the rectangle formed by the four reflections and the rectangle of the display.9,11
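A minimal sketch of this mapping in Python with OpenCV, assuming the pupil center and the four reflection centers (ordered top-left, top-right, bottom-right, bottom-left) have already been detected in the eye image and that the angle-kappa offset from the user calibration has already been compensated; the point coordinates and display size in the example are illustrative only.

```python
import numpy as np
import cv2

def gaze_on_display(pupil_center, reflection_pts, display_w, display_h):
    """Map the detected pupil center to display coordinates using the perspective
    transform between the quadrilateral of the four corneal specular reflections
    and the display rectangle."""
    src = np.array(reflection_pts, dtype=np.float32)            # 4 x 2: TL, TR, BR, BL
    dst = np.array([[0, 0], [display_w, 0],
                    [display_w, display_h], [0, display_h]], dtype=np.float32)
    H = cv2.getPerspectiveTransform(src, dst)                   # reflections -> display
    pt = np.array([[pupil_center]], dtype=np.float32)           # shape (1, 1, 2)
    gx, gy = cv2.perspectiveTransform(pt, H)[0, 0]
    return float(gx), float(gy)

# Illustrative values: pupil center and reflections in eye-image pixels, 1280 x 1024 display
print(gaze_on_display((311.0, 246.0),
                      [(290, 230), (330, 231), (331, 262), (289, 261)],
                      1280, 1024))
```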

2.3. Eye Response Measurement

In this research, the average eye BR is used for measuring eyestrain. In previous research,12,13 an increase in BR was observed as a function of time on task. On this basis, previous studies measured eyestrain using the BR, with more frequent blinking corresponding to greater eyestrain.2,4 The average BR is calculated in a time window of 60 s; the time window is moved with an overlap of 50 s (i.e., a step of 10 s).
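A sketch of this windowing in Python, assuming the blink time stamps (in seconds) have already been extracted from the captured eye images; blink detection itself is not shown.

```python
import numpy as np

def windowed_blink_rate(blink_times_s, total_s, win_s=60.0, overlap_s=50.0):
    """Average blink rate (blinks/min) in 60-s windows moved with a 50-s overlap
    (i.e., a 10-s step), given blink time stamps in seconds."""
    blink_times_s = np.asarray(blink_times_s, dtype=float)
    step = win_s - overlap_s
    starts = np.arange(0.0, total_s - win_s + 1e-9, step)       # window start times
    rates = np.array([
        np.count_nonzero((blink_times_s >= t0) & (blink_times_s < t0 + win_s)) * 60.0 / win_s
        for t0 in starts
    ])
    return starts, rates
```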

3. Extraction of Video Features by Considering Gaze Position and Visual Field of View

3.1. Contrast Sensitivity Model Based on Foveation

To measure visual sensitivity according to the gaze position and angular offset, it is necessary to model sensitivity as a function of retinal eccentricity. For this, previous research on visual sensitivity is referenced, which showed that visual sensitivity decreases as the distance from the gaze position increases. The algorithm for calculating this sensitivity, which has been employed to improve image and video coding efficiency, is called foveation.14–17 In this research, eyestrain is measured by calculating a user's gaze position and by determining the user's visual information on the basis of foveation. Humans perceive a dramatic decrease in visual sensitivity in areas away from the point of gaze. In detail, the point of gaze is perceived at high resolution, but the perceived resolution decreases as the distance from this point increases. Accordingly, a foveation (visual field of view) model based on the gaze information is defined. The foveation is determined using the contrast threshold (CT) formula, which is based on human contrast sensitivity (CS) data measured as a function of spatial frequency and retinal eccentricity.14–16

Eq. (1)

$$CT(f,e) = CT_0 \exp\left(\alpha f \, \frac{e + e_2}{e_2}\right),$$
where $f$ is the spatial frequency (cycles per degree), $e$ is the retinal eccentricity (degrees), $CT_0$ is the minimum contrast threshold, $\alpha$ is the spatial frequency decay constant, $e_2$ is the half-resolution eccentricity, and $CT$ is the visible contrast threshold.14–16 The optimal fitting parameters are determined on the basis of previous research ($\alpha = 0.106$, $e_2 = 2.3$, and $CT_0 = 1/64$).14,16 The CS is defined as the reciprocal of the CT:14,16

Eq. (2)

$$CS(f,e) = \frac{1}{CT(f,e)}.$$

To apply these models to an image, the eccentricity needs to be calculated for any point $\mathbf{x} = (x_1, x_2)^T$ (pixels) in the image. Because a user's gaze position is the foveation point $\mathbf{x}^f = (x_1^f, x_2^f)$, the distance from $\mathbf{x}$ to $\mathbf{x}^f$ is given by the following equation:14,16

Eq. (3)

$$d(\mathbf{x}) = \|\mathbf{x} - \mathbf{x}^f\|_2.$$

Further, the eccentricity is obtained as follows:14,16

Eq. (4)

$$e(v,\mathbf{x}) = \tan^{-1}\!\left[\frac{d(\mathbf{x})}{Nv}\right],$$
where $N$ is the width of the image and $v$ is the viewing distance (measured in image widths) from the eye to the image plane.14,16 The cut-off frequency $f_c$, beyond which high-frequency components cannot be perceived, can be obtained by setting $CT$ to 1 (the maximum possible contrast) in Eq. (1):14,16

Eq. (5)

$$f_c(e) = \frac{e_2 \ln\!\left(\frac{1}{CT_0}\right)}{\alpha (e + e_2)}.$$

According to the Nyquist–Shannon sampling theorem, the highest frequency that the display can present (the display Nyquist frequency) is as follows:14,16

Eq. (6)

$$f_d(v) = \frac{\pi N v}{360}.$$

Combining Eqs. (5) and (6), the final cut-off frequency $f_m$ is obtained as follows:14,16

Eq. (7)

$$f_m(v,\mathbf{x}) = \min\{f_c[e(v,\mathbf{x})],\, f_d(v)\}.$$

Finally, the foveation-based error sensitivity is defined by the following equation14,16 and illustrated in Fig. 4:

Eq. (8)

$$S_f(v,f,\mathbf{x}) = \begin{cases} \dfrac{CS[f,\, e(v,\mathbf{x})]}{CS(f,\, 0)}, & \text{if } f \le f_m(v,\mathbf{x}) \\[4pt] 0, & \text{otherwise.} \end{cases}$$

Fig. 4 Foveation-based contrast sensitivity.

In Fig. 4, a brighter region represents higher contrast sensitivity.
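For concreteness, a Fig. 4-style sensitivity map can be reproduced numerically from Eqs. (1)–(8). The following Python/NumPy sketch evaluates the foveation-based error sensitivity on a pixel grid for a single spatial frequency; the frame size, gaze point, viewing distance, and frequency used in the example are illustrative values, not those of the experiments.

```python
import numpy as np

# Fitting parameters of Eq. (1) (Refs. 14 and 16)
ALPHA, E2, CT0 = 0.106, 2.3, 1.0 / 64.0

def foveated_sensitivity_map(height, width, gaze_xy, v, f):
    """S_f(v, f, x) of Eq. (8) on a pixel grid.

    gaze_xy -- foveation (gaze) point (x, y) in pixels
    v       -- viewing distance measured in image widths
    f       -- spatial frequency in cycles per degree
    """
    N = width                                             # image width in pixels
    yy, xx = np.mgrid[0:height, 0:width]
    d = np.hypot(xx - gaze_xy[0], yy - gaze_xy[1])        # Eq. (3): distance to gaze point
    e = np.degrees(np.arctan(d / (N * v)))                # Eq. (4): eccentricity in degrees
    fc = E2 * np.log(1.0 / CT0) / (ALPHA * (e + E2))      # Eq. (5): cut-off frequency
    fd = np.pi * N * v / 360.0                            # Eq. (6): display Nyquist frequency
    fm = np.minimum(fc, fd)                               # Eq. (7)
    cs_ratio = np.exp(-ALPHA * f * e / E2)                # CS(f, e)/CS(f, 0) from Eqs. (1)-(2)
    return np.where(f <= fm, cs_ratio, 0.0)               # Eq. (8)

# Example: 480 x 640 frame, gaze at the image center, one image-width away, f = 8 cpd
sensitivity = foveated_sensitivity_map(480, 640, gaze_xy=(320, 240), v=1.0, f=8.0)
```

Displaying this array as a gray-scale image yields a bright region around the gaze point that darkens with eccentricity, as in Fig. 4.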

3.2. New Foveated Weighting Model in the Wavelet Domain by Considering Gaze Detection Error

A foveation-based visual sensitivity model in the wavelet domain has been proposed previously as follows:14,16

Eq. (9)

$$S(v,\mathbf{x}) = [S_w(\lambda,\theta)]^{\beta_1} \cdot \left\{ S_f\!\left[v,\ \frac{f_d}{2^{\lambda+1}},\ d_{\lambda,\theta}(\mathbf{x})\right] \right\}^{\beta_2}, \quad \mathbf{x} \in B_{\lambda,\theta},$$
where $\lambda$ is the wavelet decomposition level and $\theta$ represents the LL, LH, HL, or HH subband of the wavelet transform; $\beta_1$ and $\beta_2$ are parameters that control the magnitudes of $S_w$ and $S_f$, respectively.14,16 The LL subband has low-frequency components in both the horizontal and vertical directions. The HH subband includes high-frequency components in the horizontal and vertical directions. The HL subband comprises high-frequency components in the horizontal direction and low-frequency components in the vertical direction. Finally, the LH subband contains low-frequency components in the horizontal direction and high-frequency components in the vertical direction.18 $S_w(\lambda,\theta)$ is the error sensitivity in subband $(\lambda,\theta)$; the method for calculating $S_w(\lambda,\theta)$ is given in Refs. 14 and 16. For a given wavelet coefficient at position $\mathbf{x} \in B_{\lambda,\theta}$ [where $B_{\lambda,\theta}$ is the set of wavelet coefficient positions in subband $(\lambda,\theta)$], the distance from the foveation point in the spatial domain is given by14,16

Eq. (10)

$$d_{\lambda,\theta}(\mathbf{x}) = 2^{\lambda}\, \|\mathbf{x} - \mathbf{x}^f_{\lambda,\theta}\|_2 \quad \text{for } \mathbf{x} \in B_{\lambda,\theta}.$$

Equations (1)–(10) represent the conventional foveation model of Refs. 14 and 16, which does not take the error of gaze detection into account. In general, there inevitably exists an error between the ground-truth gaze position and the calculated gaze position.9–11 However, the foveation-based visual sensitivity model of Eqs. (9) and (10) and Fig. 4 does not consider this error.

Therefore, we propose an eye foveation model that considers the gaze position and the error in detecting it, as follows. Since $N$ is the width of an image (pixels) and $v$ is the viewing distance measured in image widths,14,16 $Nv$ is the distance from the user's eye to the image plane expressed in pixels. Assuming that $\varepsilon$ is the accuracy of gaze tracking (degrees), the consequent gaze detection error on the image plane is $Nv \tan\varepsilon$. Within this error radius ($Nv \tan\varepsilon$), all positions $\mathbf{x}$ should be treated the same as the foveation (user's gaze) position $\mathbf{x}^f_{\lambda,\theta}$, since the error boundary is $Nv \tan\varepsilon$; thus, $d_{\lambda,\theta}(\mathbf{x})$ of Eq. (10) becomes 0. Consequently, Eq. (10) is rewritten as Eq. (11), considering the gaze detection error:

Eq. (11)

$$d_{\lambda,\theta}(\mathbf{x}) = \begin{cases} 0, & \text{if } 2^{\lambda}\|\mathbf{x} - \mathbf{x}^f_{\lambda,\theta}\|_2 < Nv\tan\varepsilon \\[2pt] 2^{\lambda}\|\mathbf{x} - \mathbf{x}^f_{\lambda,\theta}\|_2 - Nv\tan\varepsilon, & \text{otherwise.} \end{cases}$$

Based on Eqs. (9) and (11), the foveation-based contrast sensitivity mask of the single foveation point (gaze point) in the wavelet domain is found as shown in Fig. 5(b). The four-level discrete wavelet transform (DWT) based on Daubechies wavelet bases is used. Brightness indicates the importance of the wavelet coefficients. Higher-contrast sensitivity is shown as a brighter gray level.
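As an illustration of Eqs. (9)–(11), the following Python sketch builds such a wavelet-domain weighting with PyWavelets and applies it to a gray image. It is a simplified sketch: the Daubechies order ('db4'), β2 = 1, the subband frequency fd/2^(λ+1), and leaving the approximation band unweighted [the Sw(λ, θ) term of Eq. (9) is omitted] are assumptions, since the paper does not give these details.

```python
import numpy as np
import pywt

ALPHA, E2, CT0 = 0.106, 2.3, 1.0 / 64.0        # fitting parameters of Eq. (1)

def cs_ratio(f, e):
    """CS(f, e) / CS(f, 0) = CT(f, 0) / CT(f, e), from Eqs. (1) and (2)."""
    return np.exp(-ALPHA * f * e / E2)

def apply_foveation(gray, gaze_xy, v, eps_deg, levels=4, beta2=1.0):
    """Weight the detail subbands of a `levels`-level DWT by the foveation
    sensitivity, with the gaze-error dead zone of Eq. (11), and reconstruct.
    gray: 2-D array; gaze_xy: (x, y) in pixels; v: viewing distance in image
    widths; eps_deg: gaze tracking accuracy in degrees."""
    N = gray.shape[1]                                   # image width in pixels
    err_px = N * v * np.tan(np.radians(eps_deg))        # gaze-error radius, N*v*tan(eps)
    fd = np.pi * N * v / 360.0                          # display Nyquist frequency, Eq. (6)
    coeffs = pywt.wavedec2(gray.astype(float), 'db4', level=levels)
    out = [coeffs[0]]                                   # approximation band left unweighted here
    for i, details in enumerate(coeffs[1:]):            # coeffs[1] holds the coarsest details
        lam = levels - i                                # decomposition level of this tuple
        h, w = details[0].shape
        yy, xx = np.mgrid[0:h, 0:w]
        # Eq. (10) distance in the image plane, modified by the dead zone of Eq. (11)
        d = 2 ** lam * np.hypot(xx - gaze_xy[0] / 2 ** lam, yy - gaze_xy[1] / 2 ** lam)
        d = np.where(d < err_px, 0.0, d - err_px)
        e = np.degrees(np.arctan(d / (N * v)))          # eccentricity, Eq. (4)
        f_sub = fd / 2 ** (lam + 1)                     # subband frequency used in Eq. (9)
        fc = E2 * np.log(1.0 / CT0) / (ALPHA * (e + E2))  # cut-off frequency, Eq. (5)
        mask = np.where(f_sub <= fc, cs_ratio(f_sub, e) ** beta2, 0.0)
        out.append(tuple(c * mask for c in details))
    return pywt.waverec2(out, 'db4')[:gray.shape[0], :gray.shape[1]]
```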

Fig. 5 Foveation-based contrast sensitivity mask in the wavelet domain. (a) Sensitivity mask not considering the gaze tracking error (Refs. 14 and 16). (b) Sensitivity mask considering the gaze tracking error (proposed method).

3.3. Extracting Video Features Considering the Eye Foveation Model

In this research, eyestrain is measured in relation to the changes in the three adjustment features of video: VH, edge, and motion information. To extract features considering gaze position and foveation, foveated images are obtained as follows.

The original color image is first separated into three images of red, green, and blue channels. These three images are decomposed using a DWT based on Daubechies wavelet bases.

The three decomposed images are multiplied by the foveation-based contrast sensitivity mask of Fig. 5(b). From these three foveated images, the red, green, and blue channel images in the spatial domain are recovered by the inverse DWT.18 From these three spatial-domain images, the hue image is obtained using the RGB to hue, saturation, and intensity (HSI) conversion,18 and the VH is computed as the first feature.
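A minimal sketch of the VH computation in Python, assuming the foveated red, green, and blue channel images have already been reconstructed by the inverse DWT and scaled to [0, 1]; the hue follows the standard RGB-to-HSI conversion of Ref. 18, expressed in degrees, and the small constant added to the denominator is only to avoid division by zero.

```python
import numpy as np

def hue_variance(rgb):
    """Variance of hue (VH) of a foveated RGB frame (H x W x 3, values in [0, 1])."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12   # avoid division by zero
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    hue = np.where(b <= g, theta, 360.0 - theta)              # HSI hue in degrees (Ref. 18)
    return float(np.var(hue))
```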

To obtain the motion component (MC) and edge component (EC), the original RGB color image is first converted to gray scale, and the gray image is decomposed using a DWT based on Daubechies wavelet bases. The decomposed gray image is multiplied by the foveation-based contrast sensitivity mask of Fig. 5(b). Figure 6 shows an example of an original gray image and the corresponding foveated image obtained by the proposed method. From the foveated coefficients, the gray image in the spatial domain is recovered by the inverse DWT.18 The MC and EC are extracted as the second and third features, respectively, from this spatial-domain gray image. The average magnitude calculated by the Canny edge detector in the gray image is taken as the value of EC, and the average pixel difference between successive gray images is taken as the value of MC.
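The EC and MC computations can be sketched as follows with OpenCV, operating on the foveated gray frames; the Canny thresholds and the use of the mean of the binary edge map as the "average magnitude" are assumptions, since the paper does not specify them.

```python
import numpy as np
import cv2

def edge_component(gray_foveated):
    """EC: average of the Canny edge map of a foveated gray frame."""
    edges = cv2.Canny(gray_foveated.astype(np.uint8), 50, 150)   # thresholds are assumed
    return float(edges.mean())

def motion_component(gray_prev, gray_curr):
    """MC: average absolute pixel difference between successive foveated gray frames."""
    return float(np.mean(np.abs(gray_curr.astype(float) - gray_prev.astype(float))))
```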

Fig. 6 Example of foveated image. (a) Original image. (b) Foveated image of (a) obtained by the proposed method considering the gaze detection error (the user's foveation position is marked with a white crosshair).

The VH is averaged in a time window of 60 s, and the time window is moved with an overlap of 50 s, as in the method for measuring BR. The MC and EC are also obtained by the same method. Using the calculated features of the foveated images, the eyestrain based on the average BR (Sec. 2.3) is measured in relation to changes in the three adjustment features of video: VH, MC, and EC.

Figure 7 shows examples of extracted features in video images captured by a commercial web camera. Figure 7(a) shows an original image. Figure 7(b), 7(c), and 7(d) show the hue image, motion image, and edge image obtained from the original image, respectively. The measured feature values of VH, MC, and EC for Fig. 7(b), 7(c), and 7(d) are 16495.05, 24.28, and 30.84, respectively.

Fig. 7 Examples of extracted features in a video image. (a) Original image. (b) Hue image. (c) Motion image. (d) Edge image. (e) Original gray image including the foveation point as a white crosshair. (f) Hue image after applying the conventional foveated model (Refs. 14 and 16). (g) Motion image after applying the conventional foveated model (Refs. 14 and 16). (h) Edge image after applying the conventional foveated model (Refs. 14 and 16). (i) Hue image after applying the proposed foveated model. (j) Motion image after applying the proposed foveated model. (k) Edge image after applying the proposed foveated model.

Figure 7(e) shows an original gray image with the foveation point marked as a white crosshair. Figure 7(f), 7(g), and 7(h) show the hue image, motion image, and edge image, respectively, obtained from the image foveated by the conventional foveation model.14,16 The measured feature values of VH, MC, and EC for Fig. 7(f), 7(g), and 7(h) are 16879.22, 14.43, and 9, respectively.

Figure 7(i), 7(j), and 7(k) show the hue image, motion image, and edge image, respectively, obtained from the image foveated by the proposed model. The measured feature values of VH, MC, and EC for Fig. 7(i), 7(j), and 7(k) are 16858.78, 15.31, and 11.15, respectively, which differ from those obtained by the previous method,14,16 which does not consider the gaze tracking error.

To measure eyestrain in this research, a commercial 19-in. LCD monitor and a commercial movie file were used. The environmental lighting condition was maintained constant, without any external illumination. The temperature and humidity were kept constant, and there was no vibration or bad odor that could affect the experiments. Each subject watched the movie for 25 min 30 s. The eye response data were collected from 24 subjects [average age of 26.54 (standard deviation: 2.24); the minimum and maximum ages were 23 and 31, respectively]. To remove the dependency on watching distance (from the user's eye to the monitor) while considering actual watching distances, the data of 12 subjects were obtained at a watching distance of 60 cm, and the data of the remaining 12 subjects were collected at a distance of 90 cm.

4. Experimental Results

As mentioned in Sec. 2.3, previous research12,13 observed an increase in BR as a function of time on task. On this basis, previous studies measured eyestrain using the BR, with more frequent blinking corresponding to greater eyestrain.2,4 Accordingly, the eyestrain based on BR was measured according to the extracted features (VH, MC, and EC). To validate the relationship between these three features and the eye responses, a correlation analysis was performed. In this analysis, the correlation coefficient ranges from −1 to +1. A correlation coefficient close to +1 indicates that two variables are positively related; a value close to −1 indicates that they are negatively related; and a value close to 0 indicates no relationship between the variables. Table 1 shows the relationship between the three features and the eye responses, where the results are calculated after removing outliers based on a 95% confidence interval. Because the scales of VH, MC, EC, and BR are different, the values are normalized using the minimum–maximum scaling method.19
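The per-subject statistics reported in Tables 1 and 2 can be sketched as follows with NumPy/SciPy, given the windowed BR series and one windowed feature series for a subject; the 95%-confidence-interval outlier removal mentioned above is not shown.

```python
import numpy as np
from scipy import stats

def minmax(x):
    """Minimum-maximum scaling to [0, 1] (Ref. 19)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def relate(feature, br):
    """Correlation coefficient, regression gradient, and R^2 between a
    windowed video feature (VH, MC, or EC) and the windowed blinking rate."""
    x, y = minmax(feature), minmax(br)
    r, _ = stats.pearsonr(x, y)
    slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
    return r, slope, r_value ** 2
```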

Table 1 Relationship between three adjustment features and eye responses (average values of the experimental data from 24 subjects).

Eye response | Adjustment feature | Average correlation coefficient | Average gradient | Average R²
Blinking rate (BR) | Variance of hue (VH) | 0.4115 | 0.2644 | 0.2310
Blinking rate (BR) | Motion component (MC) | −0.4059 | −0.3273 | 0.2095
Blinking rate (BR) | Edge component (EC) | −0.5078 | −0.3387 | 0.3455

As listed in Table 1, the average correlation coefficients between the adjustment factors (VH, MC, and EC) and BR were calculated as 0.4115, −0.4059, and −0.5078, respectively. Based on the average correlation coefficients in Table 1, we found that the adjustment of VH is positively related to eyestrain, whereas the adjustments of MC and EC are negatively related to eyestrain. Therefore, an increase in VH increases eyestrain, whereas an increase in MC or EC reduces eyestrain.

The average gradient is the slope of the line fitted by linear regression, and it represents the rate of change of VH, MC, or EC according to that of BR. Linear regressions were also performed to analyze the change in eye response in relation to the change in the adjustment factors in Table 1. On the basis of the results (average gradient) of the linear regression, it is observed that if the MC or EC increases, the eyestrain decreases; in contrast, if the VH increases, the eyestrain also increases. The R² values between the three adjustment factors and BR were calculated as 0.2310, 0.2095, and 0.3455, respectively. In Tables 1 and 2 and Fig. 8, R² refers to the goodness of fit of the regression.20 In general, greater values of R² represent a better fit. Figure 8 shows examples of 2-D dot graphs for one subject, where each dot denotes the average BR in one time window and its corresponding adjustment factor (VH, MC, or EC).

Table 2 Experimental values from the 24 subjects.

Subject | Correlation coefficient (VH, MC, EC) | Gradient (VH, MC, EC) | R² (VH, MC, EC)
1 | 0.4836, 0.801, 0.7849 | 0.3681, 0.571, 0.6316 | 0.2562, 0.6417, 0.616
2 | 0.5241, 0.6158, 0.4413 | 0.3416, 0.4193, 0.2585 | 0.2747, 0.3792, 0.1948
3 | 0.2747, 0.4609, 0.7769 | 0.1505, 0.3113, 0.5347 | 0.0754, 0.2125, 0.6036
4 | 0.0026, 0.2598, 0.3763 | 0.0012, 0.1676, 0.1502 | 0, 0.0675, 0.1494
5 | 0.1391, 0.0948, 0.154 | 0.0715, 0.0639, 0.0708 | 0.0193, 0.009, 0.0237
6 | 0.6222, 0.5752, 0.5362 | 0.3079, 0.4269, 0.2182 | 0.3872, 0.3309, 0.2875
7 | 0.6222, 0.3634, 0.7617 | 0.4314, 0.3473, 0.562 | 0.3871, 0.132, 0.5802
8 | 0.5853, 0.4531, 0.6955 | 0.4635, 0.3758, 0.5701 | 0.3425, 0.2053, 0.4837
9 | 0.5718, 0.3214, 0.6862 | 0.3664, 0.2534, 0.432 | 0.3269, 0.1033, 0.4709
10 | 0.5869, 0.0578, 0.3402 | 0.3127, 0.0395, 0.1227 | 0.3445, 0.0033, 0.1157
11 | 0.6498, 0.1503, 0.5136 | 0.4904, 0.1196, 0.3149 | 0.4222, 0.0226, 0.2637
12 | 0.2062, 0.5521, 0.6954 | 0.1202, 0.602, 0.4795 | 0.0425, 0.3007, 0.4836
13 | 0.0082, 0.3865, 0.0667 | 0.0046, 0.3482, 0.034 | 0, 0.1494, 0.0045
14 | 0.5786, 0.241, 0.7609 | 0.3123, 0.1828, 0.5344 | 0.3348, 0.0581, 0.579
15 | 0.1889, 0.6521, 0.6976 | 0.0867, 0.4613, 0.4145 | 0.0357, 0.4253, 0.4866
16 | 0.4028, 0.3672, 0.2377 | 0.2242, 0.2949, 0.1212 | 0.1623, 0.1349, 0.0565
17 | 0.1578, 0.0129, 0.4653 | 0.0861, 0.0092, 0.2973 | 0.0249, 0.0002, 0.2165
18 | 0.5966, 0.4043, 0.1059 | 0.3495, 0.3092, 0.0497 | 0.356, 0.1635, 0.0112
19 | 0.3686, 0.4833, 0.8552 | 0.2246, 0.448, 0.6739 | 0.1359, 0.2336, 0.7314
20 | 0.4048, 0.6103, 0.768 | 0.2756, 0.5559, 0.5675 | 0.1639, 0.3725, 0.5818
21 | 0.6952, 0.7459, 0.8274 | 0.5043, 0.5424, 0.5942 | 0.4833, 0.5564, 0.6848
22 | 0.7334, 0.1463, 0.6659 | 0.4709, 0.1314, 0.3675 | 0.5378, 0.0214, 0.4434
23 | 0.6366, 0.401, 0.3906 | 0.4698, 0.305, 0.2327 | 0.4052, 0.1608, 0.1526
24 | 0.1577, 0.5857, 0.2646 | 0.0866, 0.5689, 0.1415 | 0.0249, 0.343, 0.07

Fig. 8 Graph and linear regression results for one subject. (a) Relationship between blinking rate (BR) and variance of hue (VH). (b) Relationship between BR and motion component (MC). (c) Relationship between BR and edge component (EC).

Because the y-intercepts of the fitted lines and the distributions of the data differ from subject to subject, it is difficult to obtain a meaningful result by averaging the raw data over all 24 subjects. Instead, we report the averaged results in Table 1 and the individual results of the 24 subjects in Table 2.

Figure 9 shows examples of gaze detection results. The circles represent the reference points at which each subject should look, and the crosshairs show the gaze points calculated by our gaze detection algorithm (explained in Sec. 2.2). Five subjects each looked at the nine reference points five times, and each crosshair shows the average point of the five trials for one subject. We measured the gaze detection error as the angle between the vector to the reference point and the vector to the calculated gaze position. The gaze detection error between the reference and calculated gaze points was about 1.12 deg. As seen in Fig. 9, the reference points differ from the calculated gaze points. In other words, the gaze error for each subject can occur randomly inside a circle whose radius corresponds to 1.12 deg, and this circle is considered when generating the eye foveation model. Therefore, the eye foveation model without the gaze detection error, shown in Fig. 5(a), differs from the proposed eye foveation model, which considers the gaze detection error, shown in Fig. 5(b).
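A sketch of this angular-error computation in Python, assuming three-dimensional coordinates (e.g., in centimeters) for the eye position, the reference point on the display, and the calculated gaze point; the coordinates in the example are illustrative inputs, not measured values.

```python
import numpy as np

def angular_error_deg(eye_pos, reference_pt, gaze_pt):
    """Angle (deg) between the eye-to-reference vector and the eye-to-gaze vector."""
    v1 = np.asarray(reference_pt, dtype=float) - np.asarray(eye_pos, dtype=float)
    v2 = np.asarray(gaze_pt, dtype=float) - np.asarray(eye_pos, dtype=float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

# Example: eye about 60 cm from the display, reference and estimated gaze points 1.2 cm apart
print(angular_error_deg((0.0, 0.0, 0.0), (5.0, 3.0, 60.0), (6.2, 3.0, 60.0)))
```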

Fig. 9 Examples of gaze detection results (the circles represent the reference points at which each user should look; the crosshairs show the positions that are calculated by our gaze detection algorithm).

5. Conclusion

This research introduced a new eyestrain measurement method that considers an eye foveation model. On the basis of this measurement, it was confirmed that a stable relationship exists between eyestrain and the three adjustment factors—color (VH), edge, and motion information. Experimental results showed that a greater degree of VH induced higher eyestrain. On the contrary, greater degrees of EC and MC induced relatively lower eyestrain. With the recent developments in television technology, the smart TV, which includes a built-in camera, has become widespread. On the basis of the results of this research, an intelligent display can be envisioned that reduces the user's eyestrain by decreasing the VH or by increasing the edge and motion information of a video, based on the eye response measured by the built-in camera.

In future work, the relationship between eyestrain and video factors in various kinds of displays, such as 3-D stereoscopic or holographic displays, will be investigated on the basis of gaze detection and the proposed foveation model.

Acknowledgments

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (Grant No. 2012R1A1A2038666).

References

1. M. Takahashi, "LCD vs. PDP picture quality status and the task of FPD TVs," in Proc. Korean Display Conf. (2006).

2. E. C. Lee et al., "Measuring the degree of eyestrain caused by watching LCD and PDP devices," Int. J. Ind. Ergon. 39(5), 798–806 (2009). http://dx.doi.org/10.1016/j.ergon.2009.02.008

3. A. Okada, "Physiological measurement of visual fatigue in the viewers of large flat panel display," in Proc. Korean Display Conf. (2006).

4. E. C. Lee et al., "Minimizing eyestrain on LCD TV based on edge difference and scene change," IEEE Trans. Consum. Electron. 55(4), 2294–2300 (2009). http://dx.doi.org/10.1109/TCE.2009.5373801

5. E. C. Lee and K. R. Park, "Measuring eyestrain from LCD TV according to adjustment factors of image," IEEE Trans. Consum. Electron. 55(3), 1447–1452 (2009). http://dx.doi.org/10.1109/TCE.2009.5278012

6. M. Lambooij et al., "Visual discomfort and visual fatigue of stereoscopic displays: a review," J. Imaging Sci. Technol. 53(3), 030201 (2009). http://dx.doi.org/10.2352/J.ImagingSci.Technol.2009.53.3.030201

7. M. Menozzi, C. Kornfeld, and A. Polti, "Visual stress and performance using autostereoscopic displays," in Innovationen für Arbeit und Organisation, GfA Press, Dortmund (2006).

8. E. C. Lee, H. Heo, and K. R. Park, "The comparative measurements of eyestrain caused by 2D and 3D displays," IEEE Trans. Consum. Electron. 56(3), 1677–1683 (2010). http://dx.doi.org/10.1109/TCE.2010.5606312

9. J.-S. Choi et al., "Enhanced perception of user intention by combining EEG and gaze-tracking for brain-computer interfaces (BCIs)," Sensors 13(3), 3454–3472 (2013). http://dx.doi.org/10.3390/s130303454

10. J. W. Lee et al., "3D gaze tracking method using Purkinje images on eye optical model and pupil," Opt. Lasers Eng. 50(5), 736–751 (2012). http://dx.doi.org/10.1016/j.optlaseng.2011.12.001

11. C. W. Cho et al., "Gaze detection by wearable eye-tracking and NIR LED-based head-tracking device based on SVR," ETRI J. 34(4), 542–552 (2012). http://dx.doi.org/10.4218/etrij.12.0111.0193

12. K. Kaneko and K. Sakamoto, "Spontaneous blinks as a criterion of visual fatigue during prolonged work on visual display terminals," Percept. Mot. Skills 92(1), 234–250 (2001). http://dx.doi.org/10.2466/pms.2001.92.1.234

13. J. A. Stern, D. Boyer, and D. Schroeder, "Blink rate: a possible measure of fatigue," Hum. Factors 36(2), 285–297 (1994).

14. Z. Wang, L. Lu, and A. C. Bovik, "Foveation scalable video coding with automatic fixation selection," IEEE Trans. Image Process. 12(2), 243–254 (2003). http://dx.doi.org/10.1109/TIP.2003.809015

15. W. S. Geisler and J. S. Perry, "A real-time foveated multiresolution system for low-bandwidth video communication," Proc. SPIE 3299, 294–305 (1998). http://dx.doi.org/10.1117/12.320120

16. Z. Wang et al., "Foveated wavelet image quality index," Proc. SPIE 4472, 42–52 (2001). http://dx.doi.org/10.1117/12.449797

17. Z. Wang and A. C. Bovik, "Embedded foveation image coding," IEEE Trans. Image Process. 10(10), 1397–1410 (2001). http://dx.doi.org/10.1109/83.951527

18. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., Prentice Hall, Upper Saddle River, NJ (2002).

19. Z. Zhu and T. S. Huang, Multimodal Surveillance: Sensors, Algorithms, and Systems, 1st ed., Artech House, Norwood, MA (2007).

20. N. R. Draper and H. Smith, Applied Regression Analysis, Wiley-Interscience, Hoboken, NJ (1998).

Biography


Won Oh Lee received a BS degree in electronics engineering from Dongguk University, Seoul, South Korea, in 2009. He is currently pursuing the combined course of Master and PhD degree in electronics and electrical engineering at Dongguk University. His research interests include biometrics and pattern recognition.


Hwan Heo received the BS degree in computer engineering from National Institute for Lifelong Education, Seoul, South Korea, in 2009. He is currently pursuing the combined course of Master and PhD degree in electronics and electrical engineering at Dongguk University. His research interests include image processing, computer vision, and HCI.


Eui Chul Lee received his BS degree in software in 2005, and his Master and PhD degrees in computer science in 2007 and 2010, respectively, from Sangmyung University, Seoul, South Korea. He is currently an assistant professor in the Department of Computer Science at Sangmyung University. His research interests include computer vision, biometrics, image processing, and HCI.


Kang Ryoung Park received his BS and Master degrees in electronic engineering from Yonsei University, Seoul, Korea, in 1994 and 1996, respectively. He also received his PhD degree in computer vision from the Department of Electrical and Computer Engineering, Yonsei University, in 2000. He was an assistant professor in the Division of Digital Media Technology at Sangmyung University until February 2008. He is currently a professor in the Division of Electronics and Electrical Engineering at Dongguk University. His research interests include computer vision, image processing, and biometrics.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Won Oh Lee, Hwan Heo, Eui Chul Lee, and Kang Ryoung Park "Minimizing eyestrain on a liquid crystal display considering gaze direction and visual field of view," Optical Engineering 52(7), 073104 (9 July 2013). https://doi.org/10.1117/1.OE.52.7.073104