Prior publications have shown that ideal observer models provide a good estimate of measured d' values across varying noise amplitudes and target strengths, after allowing for observer internal noise and human efficiency. To provide a consistent estimate of visual performance in general applications, the internal noise and human efficiency should either be fixed values or be calculable from the experimental conditions. In the current study, we test observer models for several sizes of three types of targets (rectangular, Gaussian, or Gabor) at two uniform background luminances and three levels of added Gaussian noise. The ideal observer predictions for each individual experimental condition are well correlated with measured d' values (r² > 0.90 in most cases); however, the required internal noise and human efficiency vary substantially with target and luminance. A modified ideal observer, which includes a luminance-dependent eye filter and Gabor channels, is developed to simultaneously account for the measured d' values in all experimental conditions with r² = 0.88. This observer model can be used to estimate general target detectability in flat two-dimensional image areas.
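The relationship between ideal-observer detectability and human performance degraded by internal noise and efficiency can be sketched as follows. The function names, the specific parameterization (a multiplicative sampling efficiency plus additive internal noise), and the white-Gaussian-noise assumption are illustrative assumptions, not the exact model fitted in the study.

```python
import numpy as np

def ideal_dprime(signal, noise_sd):
    """d' of the ideal observer for an exactly known signal in white
    Gaussian pixel noise: d' = sqrt(signal energy) / noise SD."""
    energy = np.sum(np.asarray(signal, dtype=float) ** 2)
    return np.sqrt(energy) / noise_sd

def degraded_dprime(signal, noise_sd, efficiency, internal_noise_sd):
    """Ideal-observer d' degraded by a sampling efficiency (0..1) and an
    additive internal noise source; one common parameterization among
    several used in the ideal-observer literature."""
    energy = efficiency * np.sum(np.asarray(signal, dtype=float) ** 2)
    total_var = noise_sd ** 2 + internal_noise_sd ** 2
    return np.sqrt(energy / total_var)
```

With efficiency = 1 and zero internal noise, `degraded_dprime` reduces to the ideal observer; lowering either factor lowers the predicted d', which is how the free parameters absorb the gap between ideal and measured performance.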
To develop a human vision model that simulates both grating detection and brightness perception, we chose four visual functional components: a front-end low-pass filter, a cone-type-dependent local compressive nonlinearity described by a modified Naka-Rushton equation, a cortical representation of the image in the Fourier domain, and a frequency-dependent compressive nonlinearity. The model outputs were fitted simultaneously to contrast sensitivity functions at seven mean illuminance levels ranging from 0.0009 to 900 trolands, using a set of six free parameters. The fits account for 97.8% of the total variance in the reported experimental data. Furthermore, the same model was used to simulate contrast and brightness perception. Visual patterns that produce simultaneous contrast or the crispening effect were used as input images to the model. The outputs are consistent with perceived brightness, using the same parameter values as in the fits above. The model also simulated the perceived contrast contours seen in a frequency-modulated grating and the whiteness percepts at different adaptation levels. In conclusion, a model based on simple visual properties is promising for deriving a unified account of pattern detection and brightness perception.
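The classic Naka-Rushton compressive nonlinearity referenced above can be sketched as follows. The parameter names (`r_max`, `i50`, `n`) and default values are illustrative; the study uses a modified form of the equation whose specifics are not given here.

```python
import numpy as np

def naka_rushton(I, r_max=1.0, i50=100.0, n=1.0):
    """Classic Naka-Rushton response function:
        R(I) = r_max * I^n / (I^n + i50^n)
    I    : stimulus intensity (scalar or array)
    r_max: saturating response level
    i50  : semi-saturation intensity (response = r_max/2 at I = i50)
    n    : exponent controlling the steepness of the compression
    """
    I = np.asarray(I, dtype=float)
    return r_max * I ** n / (I ** n + i50 ** n)
```

The function is monotone increasing, equals half of `r_max` at the semi-saturation intensity, and saturates toward `r_max`, which is what makes it a natural local compressive stage across several decades of illuminance.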
To characterize how the human visual system responds to spatial patterns, a 'black box' method was adopted, in which visual evoked potentials (VEPs) were taken as outputs and patterned visual stimuli were taken as inputs. The stimuli were gratings whose spatial profiles were weighted Hermite polynomials (WHPs). A model, mathematically analogous to harmonic oscillators in quantum mechanics, was developed to describe the black box and the quantitative relationships between the VEPs and the WHPs.
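A Gaussian-weighted Hermite polynomial of the kind used for these grating profiles can be generated as below. This is a minimal sketch assuming the physicists' Hermite convention and the standard harmonic-oscillator weighting exp(-x²/2); normalization is omitted, and the exact weighting used in the study may differ.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def weighted_hermite(n, x):
    """n-th Gaussian-weighted Hermite polynomial (physicists' convention),
    proportional to the quantum harmonic-oscillator eigenfunctions:
        psi_n(x) = H_n(x) * exp(-x^2 / 2)    (unnormalized)
    """
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0          # select H_n in the Hermite basis
    return hermval(x, coeffs) * np.exp(-np.asarray(x, dtype=float) ** 2 / 2.0)
```

Sampling `weighted_hermite(n, x)` along one axis of an image gives a localized grating-like profile; the harmonic-oscillator analogy in the abstract arises because these functions are the oscillator's eigenfunctions.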
Invariant perception of objects is desirable. Contrast constancy assures the invariant appearance of suprathreshold image features as their distance from the observer changes. Fully robust size invariance would also require equal contrast thresholds across all spatial frequencies and eccentricities, so that near-threshold image features do not appear or disappear with changes in distance. This clearly is not the case, since contrast thresholds increase exponentially with eccentricity. We show that a less stringent constraint may actually be realized. The angular size and eccentricity of image features covary with distance changes. Thus the threshold requirement for invariance could be approximately satisfied if contrast thresholds were to vary as the product of spatial frequency and eccentricity from the fovea. Measurements of observers' orientation discrimination contrast thresholds fit this model well over spatial frequencies of 1-16 cycles/degree and retinal eccentricities up to 23 degrees. Measurements of observers' contrast detection thresholds from three different studies provided an even better fit to this model over even wider spatial frequency and retinal eccentricity ranges. The fitting variable, the fundamental eccentricity constant, was similar for all three studies (0.036, 0.036, and 0.030, respectively). The eccentricity constant for the orientation discrimination thresholds was higher (0.048 and 0.050 for two observers, respectively). We simulated the appearance of images in a nonuniform visual system by applying the proper threshold at each eccentricity and spatial frequency. The images exhibited only small changes over a simulated 4-octave distance range. However, the change in simulated appearance over the same distance range was dramatic for patients with central visual field loss.
The changes of appearance across the image as a function of eccentricity were much smaller than in previous simulations, which used data derived from visual cortex anatomy rather than direct measurements of visual function. Our model provides a new tool for analyzing the visibility of displays and for designing displays of equal or deliberately varied visibility.
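The threshold model described above, in which the elevation of contrast threshold grows with the product of spatial frequency and eccentricity scaled by the fundamental eccentricity constant, can be sketched as follows. Both the exact functional form and the base of the exponent vary across formulations in the literature; this sketch assumes a base-10 exponential purely for illustration.

```python
import numpy as np

def contrast_threshold(f, e, ct_fovea, k=0.036):
    """Illustrative eccentricity-dependent contrast threshold:
        CT(f, e) = CT(f, 0) * 10^(k * f * e)
    f        : spatial frequency (cycles/degree)
    e        : retinal eccentricity (degrees); scalar or array
    ct_fovea : foveal threshold CT(f, 0) at this spatial frequency
    k        : fundamental eccentricity constant (deg^-1 * cpd^-1);
               the 0.036 default echoes the detection-data fits above,
               but the base-10 form here is an assumption.
    """
    e = np.asarray(e, dtype=float)
    return ct_fovea * 10.0 ** (k * f * e)
```

Because the exponent depends only on the product f*e, halving the spatial frequency while doubling the eccentricity (as happens when an object approaches the observer) leaves the predicted threshold unchanged, which is the approximate size-invariance argument made in the abstract.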