This paper on multi-modal continuous identity authentication contains four main sections. The first section describes the security issue addressed by continuous identity authentication and summarizes our laboratory's research on passive physiological sensors and on applying multi-modal sensor data to continuous identity authentication. The second section describes a pilot study measuring temperature, GSR, eye movement, blood flow, and click pressure of thirteen subjects performing a computer task. The third section gives preliminary results showing that, for all but two subjects, continuous identity authentication above 80 percent accuracy was possible using discriminant analysis with a limited subset of the measures. The fourth section discusses the results and the potential of continuous identity authentication.
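One plausible formulation of the discriminant-analysis step is a two-class Fisher linear discriminant separating genuine-user samples from impostor samples. The sketch below is illustrative only: the feature values are synthetic stand-ins for measures like temperature, GSR, blood flow, and click pressure, and the threshold rule is an assumption, not the study's procedure.

```python
import numpy as np

def fisher_lda_direction(genuine, impostor):
    """Fisher's linear discriminant: the projection direction w that
    maximizes between-class separation over within-class scatter."""
    mu_g, mu_i = genuine.mean(axis=0), impostor.mean(axis=0)
    # Pooled within-class scatter (sum of per-class covariance matrices)
    sw = np.cov(genuine, rowvar=False) + np.cov(impostor, rowvar=False)
    w = np.linalg.solve(sw, mu_g - mu_i)
    return w / np.linalg.norm(w)

def authenticate(sample, w, threshold):
    """Project one multi-modal feature vector; accept if above threshold."""
    return float(sample @ w) > threshold

rng = np.random.default_rng(0)
# Synthetic 4-feature samples (illustrative, not data from the study)
genuine = rng.normal(loc=[36.5, 5.0, 1.2, 0.8], scale=0.2, size=(50, 4))
impostor = rng.normal(loc=[36.2, 6.0, 1.0, 1.1], scale=0.2, size=(50, 4))

w = fisher_lda_direction(genuine, impostor)
# Threshold midway between the projected class means
thr = 0.5 * (genuine.mean(axis=0) @ w + impostor.mean(axis=0) @ w)
acc = np.mean([authenticate(s, w, thr) for s in genuine])
```

In practice such a classifier would be trained per subject and applied to a sliding window of sensor features, so authentication remains continuous rather than a one-time login event.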
A computer-controlled stereoscopic camera system that produces precise and rapid changes of camera orientation and lens parameters is described and assessed. The system consists of a pair of cameras, each fitted with a lens whose zoom, focus, and aperture are under computer control. The cameras, at right angles to each other, are aimed through a half-silvered mirror to acquire the left and right images. Each camera is mounted on a motorized base that controls the camera separation and convergence angle. The computer controls camera separation from zero to 45 cm, with an accuracy of 0.1 cm, and the convergence angle of each camera over +/- 15 degrees off-center, with an accuracy of 0.02 degrees. Subjects viewed 27 conditions on a stereo monitor system, formed by crossing three levels each of convergence angle, camera separation, and target intensity. In each condition, subjects made depth judgments between four pairs of point source lights. The depth judgment results indicate that direct and remote views are consistent, that subjects produce consistent judgments despite non-orthoscopic intervening camera configurations, and that judgments remain consistent as system parameters vary.
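The toe-in geometry such a controller must solve can be sketched with simple pinhole trigonometry: each camera converges off-center by the angle subtended by half the separation at the fixation distance. This is a minimal illustration of the geometry, not the system's actual control software.

```python
import math

def convergence_angle_deg(separation_cm, fixation_distance_cm):
    """Angle each camera must toe in (off-center) so that both optical
    axes intersect at a fixation point on the midline."""
    return math.degrees(math.atan((separation_cm / 2) / fixation_distance_cm))

# Illustrative example: a 6.5 cm separation converging at 100 cm
angle = convergence_angle_deg(6.5, 100.0)  # roughly 1.86 degrees per camera
```

With the system's 0.02-degree angular accuracy, fixation distances well beyond the workspace remain distinguishable, since the required toe-in angle falls off roughly as the reciprocal of distance.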
Predictions of task performance, based on the information required by the task, the visual information acquired from the source, the characteristics of the information transmission channel, and human information processing limitations, are compared to actual performance on tasks viewed directly or remotely, either monoscopically or stereoscopically, under different motion conditions. The tasks require varying amounts of information and channel capacity for proficient completion and are based on the rapid sequential positioning (RSP) task, which measures the time a subject takes to locate and tap an illuminated point source light target with a probe. Performance was measured in 3D and 3D-plus-motion configurations. The 3D-plus-motion configurations were presented at four movement speeds under different viewing conditions to test the effects of changing viewing bandwidth requirements. Subjects performed all tasks in a single session, with data collected by computer. Data analysis compared actual results with predictions derived from the Model Human Processor and information theory. Results indicate that the requirements, availability, transmission, and human processing limitations of information are key determinants of task performance.
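One standard way information-theoretic predictions of positioning time are formed in Model-Human-Processor-style analyses is Fitts' law, which treats a pointing movement as transmitting log2(2D/W) bits. The constants `a` and `b` below are illustrative placeholders, not values fitted to this study's data.

```python
import math

def index_of_difficulty(distance, width):
    """Fitts' index of difficulty in bits (original formulation,
    log2(2D/W)): the information a pointing movement must transmit."""
    return math.log2(2 * distance / width)

def predicted_movement_time(distance, width, a=0.1, b=0.1):
    """Linear Fitts'-law prediction MT = a + b * ID (seconds).
    a (reaction intercept) and b (seconds per bit, the inverse of the
    channel capacity) are illustrative constants only."""
    return a + b * index_of_difficulty(distance, width)
```

Under this model, a remote view that reduces the usable channel capacity raises `b`, so predicted times grow fastest for the high-information tasks, which is the kind of interaction the comparison with actual RSP performance is designed to expose.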
In teleoperation, non-orthoscopic views are often obtained by changing camera distance, lens focal length, and intercamera separation to settings that deviate from those required to produce orthoscopic views. The distortions caused by such a distant perspective can affect perception and task performance in the work space. This study uses the rapid sequential positioning (RSP) task to investigate differences in performance using stereoscopic and monoscopic remote views that are either orthoscopic in camera/lens configuration or are obtained from cameras located at four times their orthoscopic distance. At this distant perspective, orthoscopic image size is maintained by adjusting lens focal length, while comparable disparities are maintained by adjusting the intercamera separation. Although in the distant-perspective (non-orthoscopic) view objects on the horopter plane are the same size as in the orthoscopic view, objects ahead of or behind the horopter plane are not. Time scores were recorded from four subjects performing the RSP task under four viewing conditions: monoscopic/orthoscopic, monoscopic/non-orthoscopic, stereoscopic/orthoscopic, and stereoscopic/non-orthoscopic. A two-by-two ANOVA was performed on the data. The results did not reveal a degradation in performance when moving from the orthoscopic view to the distant perspective for either the monoscopic or the stereoscopic view, although stereoscopic viewing was significantly superior.
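Under a pinhole, small-angle approximation, the two adjustments follow a simple scaling: screen disparity of a point offset dz from the convergence plane goes as f * b * dz / d**2, so moving the cameras to k times the orthoscopic distance, focal length scaled by k preserves image size on the horopter, and intercamera separation scaled by k then preserves off-horopter disparity. A minimal sketch assuming that approximation (the study's exact settings may differ):

```python
def distant_perspective_settings(f, b, k):
    """Scale focal length and intercamera separation for cameras moved to
    k times the orthoscopic distance (pinhole, small-angle model)."""
    return k * f, k * b

def disparity(f, b, dz, d):
    """Approximate screen disparity of a point dz beyond the convergence
    plane at distance d, for focal length f and separation b."""
    return f * b * dz / d ** 2

# Illustrative values: cameras moved 4x away, as in the study's manipulation
f4, b4 = distant_perspective_settings(f=25.0, b=6.5, k=4)
# Off-horopter disparity is preserved under the scaled settings
same = abs(disparity(25.0, 6.5, 1.0, 50.0) - disparity(f4, b4, 1.0, 200.0)) < 1e-12
```

The residual distortion the abstract describes comes from the terms this approximation drops: the disparity match is exact only near the horopter, so objects well ahead of or behind it are rendered at a different size and depth than in the true orthoscopic view.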
In the ideal orthostereoscopic viewing system, the geometric relationship between the manipulator arm and the cameras is designed to produce a close correspondence between the operator's actual and imaged hand-to-eye position. This correspondence often cannot be maintained because of the physical design constraints of the manipulator, cameras, or mounting structure. Cameras mounted in a position that does not correspond to the operator's hand-to-eye position create a visual-motor mismatch. In this study, the rapid sequential positioning (RSP) task is used to measure manipulator performance under two levels of visual-motor correspondence. Performance was measured by (1) taking a pure perceptual measure, (2) recording total time to complete a task, (3) measuring various types of errors, and (4) counting perfect and near-perfect task completions. One group viewed a scene with visual-motor correspondence; the other viewed a noncorresponding scene, in which the cameras were shifted 30 degrees clockwise from the orthoscopic position. Each group performed the RSP task under four visual conditions: monoscopic stationary, monoscopic with motion parallax, stereoscopic stationary, and stereoscopic with motion parallax. Group performance under the different views was compared to determine the effect of visual-motor noncorrespondence.
A low-cost helmet-mounted stereoscopic color viewing system designed for field testing teleoperator tasks is described. A stereo camera pair was mounted on a helmet to allow testing of a helmet-mounted display with real-time video input. The display consisted of a pair of LCD color monitors viewed through a modified Wheatstone mirror system. The components were arranged on a stable platform attached to a hard plastic helmet. The helmet's weight (9.5 pounds) was supported by a modified backpack, which also contained support electronics and batteries. The design, construction, and evaluation tests of this viewing system are discussed.
The efficacy of using point source lights to measure depth perception under remote viewing is evaluated. A Howard-Dolman type apparatus, in which the depth plane is represented by either traditional rods or by point source lights, is used. Ten operators, half viewing rods and half viewing lights, were asked to give depth scaling and stereoacuity judgments under four display conditions: (1) 2-D static, (2) 2-D motion parallax, (3) 3-D static, and (4) 3-D motion parallax. The pattern of both stereoacuity and depth scaling responses was similar for rods and lights across the four conditions. Stereoacuity was significantly better under 3-D than 2-D viewing and under motion parallax than static viewing for both lights and rods. When viewing lights, but not rods, the combination of motion parallax with disparity cues produced further improvements in stereoacuity. The pattern of results was similar for depth scaling, but the differences were not significant. The accuracy of both stereoacuity and depth scaling judgments decreased when point source lights, rather than rods, were viewed. These results show that point source lights produce valid measures of depth perception and contain fewer non-disparity cues than traditional Howard-Dolman rods.
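The stereoacuity a Howard-Dolman apparatus measures is the binocular disparity of the smallest detectable depth offset, approximately eta = b * dd / d**2 radians for eye (or camera) separation b, depth offset dd, and viewing distance d. A small-angle sketch of the conversion to arc-seconds; the example values are illustrative, not data from the study:

```python
import math

def stereoacuity_arcsec(sep_mm, depth_offset_mm, distance_mm):
    """Binocular disparity (arc-seconds) of a Howard-Dolman depth offset,
    using the small-angle approximation eta = b * dd / d**2."""
    eta_rad = sep_mm * depth_offset_mm / distance_mm ** 2
    return math.degrees(eta_rad) * 3600

# Illustrative: 65 mm separation, 10 mm rod offset at 2 m viewing distance
acuity = stereoacuity_arcsec(65.0, 10.0, 2000.0)  # a few tens of arc-seconds
```

Because the disparity falls off with the square of viewing distance, the same physical depth offset yields a much finer (harder) stereoacuity test when the apparatus is viewed from farther away, which is worth keeping in mind when comparing direct and remote viewing conditions.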
The virtual window display is a hybrid of the head-coupled, helmet-mounted display and the fixed CRT mounted on a tabletop. Moving the CRTs off the operator's head retains the benefits of motion parallax while providing higher quality color images, greater comfort, and fewer restrictions on the operator's view of the control site. A prototype virtual window display was constructed with a direct mechanical linkage to servo camera movements to the operator's head motions. This apparatus was used to compare remote performance with and without motion parallax, paired with either stereoscopic or monoscopic views. Mean stereoacuity and depth scaling responses for six observers of a Howard-Dolman apparatus showed improved performance when motion parallax accompanied monoscopic, but not stereoscopic, view. Mean performance times for six observers retrieving objects from a wire maze showed similar, though not significant, improvement when motion parallax accompanied monoscopic view. Observers report that manipulator requirements for hand steadiness reduce the opportunity to get depth information from head movements. The use of motion parallax information in a monoscopic virtual window display can improve teleoperator performance.