KEYWORDS: Eye, Surface plasmons, Visualization, Cognitive modeling, Eye models, Visual process modeling, Control systems, Human vision and color perception, Retina, Information visualization
This study quantitatively compares the eye movement (EM) patterns of subjects viewing static pictures for a short period of time. Each image stimulus is viewed in its original appearance and under a linear transformation, a rotation by 180 degrees. Eye movements for the original and transformed images are compared in terms of similarity in the position of fixations (SP factor) and in their sequence (SS factor). The stimuli come from four distinct groups. The first group contains pseudo-natural images that have the typical natural-image Fourier power distribution (1/f) and a random phase factor; this creates a cloud-like pattern without any particular shape outlines. The second group contains single objects that might appear in the environment in almost any orientation; such objects do not possess any intrinsic polarity such as up and down (for example, a bundle of keys). The third group contains single objects with well-defined polarity (for example, a tree). Finally, the fourth category contains scenes with multiple objects and well-defined polarity (for example, a picture of a room). We investigate the effects of the transformation on the EM pattern for each category and evaluate the similarity of viewing strategies for individual subjects.
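As a rough illustration of how such SP and SS comparisons can be computed, the sketch below measures positional similarity by mutual nearest-fixation overlap and sequential similarity by a normalized string edit distance over ROI labels. The function names, the distance threshold, and the choice of Levenshtein distance are illustrative assumptions; the abstract does not specify the exact formulas.

```python
import numpy as np

def sp_similarity(fix_a, fix_b, radius=1.0):
    """Positional similarity (SP): average fraction of fixations in one
    scanpath that fall within `radius` of some fixation in the other."""
    fix_a = np.asarray(fix_a, dtype=float)   # shape (n, 2): x, y coordinates
    fix_b = np.asarray(fix_b, dtype=float)   # shape (m, 2)
    d = np.linalg.norm(fix_a[:, None, :] - fix_b[None, :, :], axis=-1)
    hit_a = (d.min(axis=1) <= radius).mean()  # A-fixations matched in B
    hit_b = (d.min(axis=0) <= radius).mean()  # B-fixations matched in A
    return 0.5 * (hit_a + hit_b)

def ss_similarity(seq_a, seq_b):
    """Sequential similarity (SS): 1 minus the normalized Levenshtein
    distance between the ROI-label strings of the two scanpaths."""
    n, m = len(seq_a), len(seq_b)
    if max(n, m) == 0:
        return 1.0
    dist = np.zeros((n + 1, m + 1), dtype=int)
    dist[:, 0] = np.arange(n + 1)
    dist[0, :] = np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if seq_a[i - 1] == seq_b[j - 1] else 1
            dist[i, j] = min(dist[i - 1, j] + 1,        # deletion
                             dist[i, j - 1] + 1,        # insertion
                             dist[i - 1, j - 1] + cost) # substitution
    return 1.0 - dist[n, m] / max(n, m)
```

For the rotated condition, one would presumably first map fixation coordinates back through the inverse transformation, which for a 180-degree rotation of a W-by-H image is (x, y) -> (W - x, H - y), before applying either measure.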
Eye movements are an important aspect of human visual behavior. The temporal and space-variant nature of sampling a visual scene requires frequent attentional gaze shifts, or saccades, to fixate onto different parts of an image. Fixations are often directed towards the most informative regions in the visual scene. We introduce a model, and its simulation, that can select such regions based on prior knowledge of similar scenes. Having representations of scene categories as probabilistic combinations of hypothetical objects, i.e., prototypical regions with certain properties, it is possible to assess the likely contribution of each image region to the subsequent recognition process. The regions are obtained by segmenting low-resolution images using the normalized cut algorithm. Based on low-level features such as average color, size, and position, regions are clustered into a small set of hypothetical objects. Using conditional probabilities for each object given the scene category, the model can then predict the informative value of the corresponding region and initiate a sequential spatial information-gathering algorithm analogous to an eye movement saccade to a new fixation. The article demonstrates how the initial hypothesis determines the next region of interest to visit and how these scene hypotheses are affected by sequentially visiting each new image region.
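To make the sequential information-gathering idea concrete, here is a minimal sketch that assumes the image has already been segmented (e.g., with the normalized cut algorithm) and its regions clustered into a small set of hypothetical object classes. The probability table, the uniform prior, and the greedy entropy-reduction criterion for choosing the next region are placeholder assumptions for illustration, not the authors' exact model.

```python
import numpy as np

# Hypothetical model: rows = scene categories, columns = hypothetical object
# classes; p_obj_given_scene[c, o] = P(object o | scene category c).
# The numbers are placeholders, not values from the paper.
p_obj_given_scene = np.array([
    [0.70, 0.20, 0.10],   # e.g. "office"
    [0.15, 0.60, 0.25],   # e.g. "street"
    [0.10, 0.30, 0.60],   # e.g. "landscape"
])
p_scene = np.full(3, 1.0 / 3.0)   # uniform prior over scene categories

def update_scene_posterior(prior, observed_object):
    """Bayes update of the scene-category belief after identifying the
    object class of the currently fixated region."""
    likelihood = p_obj_given_scene[:, observed_object]
    posterior = prior * likelihood
    return posterior / posterior.sum()

def next_fixation(belief, region_objects, visited):
    """Greedily pick the unvisited region whose hypothesized object class
    would most reduce uncertainty about the scene category (an
    entropy-reduction proxy for 'informative value')."""
    def entropy(p):
        return -(p * np.log(p + 1e-12)).sum()
    scores = []
    for idx, obj in enumerate(region_objects):
        if idx in visited:
            scores.append(-np.inf)
            continue
        post = update_scene_posterior(belief, obj)
        scores.append(entropy(belief) - entropy(post))
    return int(np.argmax(scores))

# Example: regions hypothesized as object classes [0, 2, 1]; after visiting
# region 0 the belief is updated and the next region is chosen.
belief = update_scene_posterior(p_scene, observed_object=0)
target = next_fixation(belief, region_objects=[0, 2, 1], visited={0})
```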
KEYWORDS: Visualization, Visual process modeling, Brain, Eye, Visual cortex, Human vision and color perception, Cameras, Neuroimaging, Image processing, 3D modeling
An eye movement sequence, or scanpath, during viewing of a stationary stimulus has been described as a set of fixations onto regions of interest (ROIs) and the saccades, or transitions, between them. Such scanpaths show high similarity for the same subject and stimulus, both in the spatial loci of the ROIs and in their sequence; scanpaths also take place during recollection of a previously viewed stimulus, suggesting that they play a similar role in visual memory and recall.
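One common way to prepare scanpaths for such position-and-sequence comparisons is to assign each fixation to an ROI label, turning the scanpath into a short string that sequence measures can operate on. The encoding below is an illustrative sketch; the ROI rectangles, labels, and data layout are assumptions rather than details from the text.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Fixation:
    x: float            # horizontal position (pixels)
    y: float            # vertical position (pixels)
    duration_ms: float  # fixation duration

def encode_scanpath(fixations: List[Fixation],
                    rois: List[Tuple[float, float, float, float]]) -> str:
    """Map each fixation to the label ('A', 'B', ...) of the first ROI
    rectangle (x_min, y_min, x_max, y_max) that contains it; fixations
    outside every ROI are dropped."""
    labels = []
    for f in fixations:
        for k, (x0, y0, x1, y1) in enumerate(rois):
            if x0 <= f.x <= x1 and y0 <= f.y <= y1:
                labels.append(chr(ord('A') + k))
                break
    return "".join(labels)
```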