Contingency Evaluation of Gaze-Contingent Displays for Real-Time Visual Field Simulations

Margarita Vinnikov (e-mail: mvinni@cse.yorku.ca), York University
Robert S. Allison (e-mail: allison@cse.yorku.ca), York University

Abstract

The visual field is the area of space that can be seen when an observer fixates a given point. Many visual capabilities vary with position in the visual field, and many diseases result in changes in the visual field. With current technology, it is possible to build very complex real-time visual field simulations that employ gaze-contingent displays. Nevertheless, there are still no established techniques to evaluate such systems. We have developed a method to evaluate a system's contingency by employing visual blind spot localization as well as foveal fixation. During the experiment, gaze-contingent and static conditions were compared. There was a strong correlation between predicted results and gaze-contingent trials. This evaluation method can also be used with patient populations and for the evaluation of gaze-contingent display systems when there is a need to evaluate the visual field outside of the foveal region.

CR Categories: Computing Methodologies [COMPUTER GRAPHICS]: Methodology and Techniques—Standards; Computing Methodologies [IMAGE PROCESSING AND COMPUTER VISION]: Miscellaneous—Image processing software

Keywords: Head-Eye Tracking System, Gaze Contingent, Display, Contingency Evaluation

1 Introduction

Gaze-contingent display (GCD) applications are interactive real-time graphics applications that modify the content of a graphical display based on spatial or temporal characteristics of a user's eye movements. GCDs are often used to study the effects of visual field characteristics and visual mechanisms [Rayner 1998]. Many early GCD applications were intended to improve effective display resolution under limited video bandwidth, rendering, or other constraints. Traditionally, such GCD systems have been evaluated in terms of bandwidth and CPU processing speed improvements. Standard evaluation also includes latency measures such as gaze measurement time and impact on refresh rate. Such measures have the advantage of being easily specified and validated, but in many cases they are not sufficient to describe the impact of the system on perception or task performance. Furthermore, it is very important to assess a GCD in terms of the contingency of the display. Many of the existing GCD evaluation techniques rely on a participant's ability to fixate a predefined area on a screen. In essence, these techniques compare how well the GCD can map the center of the simulated fovea to the point of regard. However, this can be problematic if the participant cannot fixate well or at all. For example, this type of task can be nearly impossible when the image in the central visual field is missing or distorted. Also, such techniques rely on estimates of the impact of the GCD on what can be seen (often foveally or parafoveally, where visual acuity and sensitivity often exceed the capabilities of the display device). Overall, although the GCD methodology has been practiced for several decades, there are still no established procedures for the evaluation and objective comparison of such systems.

In this paper we present a new technique to evaluate the contingency of a GCD system. The technique relies on assessing the impact of the GCD system on what is normally unseen. The utility of the technique was examined in experiments on an exemplar GCD system.

2 Visual Field-Based Contingent Display Evaluation

When building a gaze-contingent system, one has to overcome hardware limitations and ensure that the inaccuracy and latency of the system do not affect the contingent experience. Furthermore, in complex simulations where the entire visual field is modeled, it is very important to check that the contingency is effective not only in the central regions of the visual field but also in the periphery.

We have developed a visual-field-based contingent display evaluation (VFB-GCE) technique to evaluate the ability of a GCD system to present an arbitrary visual field to a user. The basis for this assessment method is Goldman's perimetry test, which is usually used to detect and determine the location and size of visual scotomas.

The technique relies on the fact that a GCD application with a properly modeled visual field can produce perimetry measurements comparable to those acquired from participants under controlled and stable fixation as in clinical tests. In other words, VFB-GCE compares participants' psychophysical detection of targets presented in a gaze-contingent visual field simulation (where the head and eye position are not fixed) and in a static perimetry session (where the gaze locations are carefully controlled and monitored). This technique can be applied with visually impaired patients as well as with visually healthy individuals. With healthy individuals the technique relies on the localization of the visual blind spot, which is a natural scotoma common to all humans. Furthermore, the blind spot's position is known or can be easily determined, and any stimuli shown in this area will not be perceived by an observer. Simple targets can be used, removing dependencies on display capabilities (relative to human acuities) and adding minimal processing load or latency to the GCD system under test.
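The comparison at the heart of VFB-GCE can be made concrete with a small sketch. It is only an illustration of the idea, not the authors' implementation: the blind spot is modelled here as an ellipse in gaze-relative coordinates, and the centre and radii used below are assumed placeholder values rather than measured ones.

```python
# Sketch of the idea behind VFB-GCE (illustrative, not the authors' code):
# predictions come from a clinically measured blind-spot region, and the GCD
# is judged by how well detections under the simulation match them. The blind
# spot is modelled as an ellipse in gaze-relative coordinates (degrees); the
# centre and radii below are assumed placeholder values.
def predicted_detection(stim_deg, centre=(15.0, -1.5), radii=(2.7, 3.5)):
    """Return the expected detection probability for a stimulus at a
    gaze-relative position (degrees): 0 inside the blind spot, 1 outside,
    0.5 within a narrow band around its boundary."""
    nx = (stim_deg[0] - centre[0]) / radii[0]
    ny = (stim_deg[1] - centre[1]) / radii[1]
    d = (nx * nx + ny * ny) ** 0.5        # 1.0 exactly on the ellipse boundary
    if d < 0.8:
        return 0.0                         # well inside: should never be seen
    if d > 1.2:
        return 1.0                         # well outside: should always be seen
    return 0.5                             # border: intermediate detection

print(predicted_detection((15.0, -1.5)))   # inside  -> 0.0
print(predicted_detection((15.0, 2.0)))    # border  -> 0.5
print(predicted_detection((5.0, 0.0)))     # outside -> 1.0
```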

3 System Description

We have developed a multi-threaded, real-time GCD system that simultaneously tracks the user's eye movements and head pose.
The multi-threaded structure of the system dissociates the eye and head trackers from the graphical components of the system, thus allowing multi-tasking. This minimizes the delay in data collection from the trackers. The system is separated into three primary components: tracking, virtual environment rendering, and image processing. The relationships and data flow between the primary components are illustrated in Figure 1.

Figure 1: System structure
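The dissociation of tracker input from rendering can be pictured as a latest-value mailbox shared between threads. The sketch below is a simplified Python illustration rather than the system's native code; GazeSample, LatestSampleMailbox, read_tracker, and draw_frame are hypothetical names.

```python
# Minimal sketch (not the authors' code) of decoupling tracker polling from
# rendering: a tracker thread continuously overwrites a "latest sample" slot,
# and the render loop reads whatever is newest without blocking on tracker I/O.
import threading
import time
from dataclasses import dataclass

@dataclass
class GazeSample:            # hypothetical container for one tracker reading
    timestamp: float
    screen_x: float          # point of regard on the display (pixels)
    screen_y: float

class LatestSampleMailbox:
    """Single-slot, thread-safe store holding only the newest sample."""
    def __init__(self):
        self._lock = threading.Lock()
        self._sample = None

    def put(self, sample):
        with self._lock:
            self._sample = sample

    def get(self):
        with self._lock:
            return self._sample

def tracker_thread(mailbox, read_tracker, hz=120.0):
    """Poll the (hypothetical) tracker at its native rate, e.g. 120 Hz."""
    period = 1.0 / hz
    while True:
        x, y = read_tracker()                      # a blocking serial read in reality
        mailbox.put(GazeSample(time.time(), x, y))
        time.sleep(period)

def render_loop(mailbox, draw_frame):
    """Render thread: never waits for the tracker, just uses the newest gaze."""
    while True:
        sample = mailbox.get()
        if sample is not None:
            draw_frame(sample.screen_x, sample.screen_y)
```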
The tracking module is responsible for calculating the line of sight and point of regard from the head and eye coordinates received from the tracking devices. This gaze information is then used to transform a three-dimensional (spherical) visual field into a two-dimensional map in the display space. The two-dimensional map is essential for creating a realistic visual defect simulation. The current system can simulate visual defects and characteristics such as visual blur, visual distortions, changes in the color values of the image, and glare. In order to ensure fast image processing, all of these operations are performed at the hardware level. The system also supports simulation of arbitrary visual defects for experimental and demonstration purposes by allowing a combination of several basic visual defects to be processed. More details can be found in [Vinnikov et al. 2008; Vinnikov 2009].
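The transformation from a gaze-relative (spherical) visual field location to display coordinates can be illustrated for the simple case of a flat screen viewed roughly head-on. This is a sketch of the geometry only, assuming a known viewing distance and pixel pitch; it is not the system's actual transform, and the function name and parameters are illustrative.

```python
# Simplified sketch (assumption: flat screen, gaze roughly perpendicular to it)
# of mapping a location defined in the visual field -- an eccentricity and a
# meridian angle relative to the current gaze -- into display coordinates.
# This illustrates the idea only, not the system's actual transform.
import math

def field_to_screen(ecc_deg, meridian_deg, gaze_px, view_dist_cm, px_per_cm):
    """Return the (x, y) pixel position of a point ecc_deg away from the line
    of sight along the given meridian, for a viewer at view_dist_cm."""
    offset_cm = view_dist_cm * math.tan(math.radians(ecc_deg))
    offset_px = offset_cm * px_per_cm
    ang = math.radians(meridian_deg)
    return (gaze_px[0] + offset_px * math.cos(ang),
            gaze_px[1] - offset_px * math.sin(ang))   # screen y grows downward

# Example: a point 15 deg temporal to the line of sight (roughly the blind-spot
# region of the right eye), viewer at 100 cm, ~10 px per cm, gaze at (512, 384).
print(field_to_screen(15.0, 0.0, (512, 384), 100.0, 10.0))
```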
The current head-eye tracking system is based on a video eye tracking subsystem [EL-MAR Inc. 2002] and an acoustic-inertial head tracking subsystem [InterSense 2002]. The head tracker is rigidly mounted on the eye tracker's head gear. Both the head tracker and the eye tracker work at a synchronized frequency of 120 Hz, and data are sent to the workstation over RS232 serial ports at a baud rate of 38400. The system concurrently collects data from both trackers and performs a series of homogeneous transformations in order to determine the user's real-time gaze direction and position. For more details see Huang [2004].
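The chain of homogeneous transformations can be sketched as follows, under simplified assumed conventions (a single rigid head pose, gaze expressed as a ray in the head frame, and a screen lying in the plane z = 0). The calibrated procedure actually used is described by Huang [2004]; the code below is only an illustration.

```python
# Sketch (simplified, assumed conventions; see Huang [2004] for the calibrated
# procedure) of combining head pose and eye-in-head gaze direction with
# homogeneous transforms, then intersecting the gaze ray with the screen plane.
import numpy as np

def pose_matrix(R, t):
    """4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def gaze_on_screen(head_R, head_t, eye_in_head, dir_in_head, screen_z=0.0):
    """Transform the eye position and gaze direction from the head frame to
    the world frame, then return where the gaze ray crosses z = screen_z."""
    T = pose_matrix(head_R, head_t)
    eye_w = (T @ np.append(eye_in_head, 1.0))[:3]      # point: w = 1
    dir_w = (T @ np.append(dir_in_head, 0.0))[:3]      # direction: w = 0
    s = (screen_z - eye_w[2]) / dir_w[2]               # ray parameter to the plane
    return eye_w + s * dir_w                           # point of regard (world)

# Example: head 100 cm in front of the screen, looking straight ahead.
por = gaze_on_screen(np.eye(3), np.array([0.0, 0.0, 100.0]),
                     np.array([3.0, 0.0, 0.0]),        # eye offset in head frame
                     np.array([0.0, 0.0, -1.0]))       # gaze along -z
print(por)        # -> [3. 0. 0.]
```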
The system was developed for the Linux operating system on a PC with a 3.2 GHz Pentium 4 processor and 4 GB of RAM. In addition, all image processing operations were performed with Nvidia GeForce 8800 support.

4 Experimental Setup

We set up an experiment to verify the VFB-GCE technique and to test our own system's performance. The independent variable for this experiment was the display type (static or GCD). During the static display trials, the participant's gaze was not tracked except to monitor fixation, as in standard perimetry. During GCD trials, eye and head tracking were used and the participant's gaze direction was estimated in real time. Our experimental hypothesis was that the detection rate of stimuli would be the same in both the gaze-contingent and static conditions if the GCD has no perceptual impact. Furthermore, the clinically determined visual blind spot should be similar in size and location for both the static and the gaze-contingent cases.

4.1 Procedure

The participants were seated in front of a rear-projection screen and donned the head and eye trackers. After calibration, the participants were asked to fixate the center of the screen before stimulus onset. The experimental procedure consisted of a number of steps. First, the participant was required to fixate a cross at the center of the screen. Then the fixation cross was shifted to one of five different locations on the screen and the participant was required to direct her/his gaze to the new fixation location (Figure 2). During each trial, a spot of light appeared randomly on the screen in the vicinity of the blind spot. A second spot of light appeared at the end of the trial in the area of fixation. After each trial the participant had to specify how many spots were detected and, in the case that two light spots were detected, to compare the intensity of the two lights. In addition to the psychophysical data collection, the head and eye positions of the participants were recorded.

4.2 Stimuli

The stimuli presented to the participants were a series of light spots with an intensity of 60 cd/m2 on a gray background with an average intensity of 3.5 cd/m2. Each light spot was of Goldman standard size III, which under these experimental settings subtended 0.43°. There were 15 test stimuli: 5 placed inside the previously measured blind spot, 5 on the boundary, and 5 outside of the blind spot (Figure 2). Each stimulus was presented relative to one of two types of reference point. In the static case, each stimulus was presented relative to the fixation point (a green X). In total, there were 5 fixation point locations: one in the center of the screen and the other four distributed ±5° horizontally or vertically (Figure 2). Consequently, there were 5 different sets of stimuli for the static case. In the contingent case, there were also 5 fixation points; however, there was only one set of stimuli, presented dynamically relative to the participant's measured gaze position. Each condition was repeated twice.
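The difference between the two reference frames can be illustrated with a short sketch: the same blind-spot-relative stimulus offsets are anchored either to the fixation point (static trials) or to the most recent gaze estimate (contingent trials). The blind-spot centre, pixel pitch, and function names below are assumed illustrative values, not the experiment's parameters.

```python
# Illustrative sketch (not the experiment's code) of how one set of
# blind-spot-relative stimulus offsets can be anchored either to a static
# fixation point or to the latest gaze estimate. Offsets are in degrees of
# visual angle; deg_to_px and the nominal blind-spot centre are assumptions.
import math

# Nominal blind-spot centre relative to fixation (assumed values: roughly
# 15 deg temporal and 1.5 deg below the horizontal meridian).
BLIND_SPOT_DEG = (15.0, -1.5)

def deg_to_px(offset_deg, view_dist_cm=100.0, px_per_cm=10.0):
    """Convert a (horizontal, vertical) offset in degrees to screen pixels."""
    return tuple(view_dist_cm * math.tan(math.radians(d)) * px_per_cm
                 for d in offset_deg)

def stimulus_position(offset_from_blindspot_deg, anchor_px):
    """Place a stimulus at a blind-spot-relative offset, anchored either to a
    fixed fixation point (static trials) or to the current gaze (contingent)."""
    dx_deg = BLIND_SPOT_DEG[0] + offset_from_blindspot_deg[0]
    dy_deg = BLIND_SPOT_DEG[1] + offset_from_blindspot_deg[1]
    dx_px, dy_px = deg_to_px((dx_deg, dy_deg))
    return (anchor_px[0] + dx_px, anchor_px[1] - dy_px)   # screen y is downward

# Static trial: the anchor is the fixation cross; contingent trial: the anchor
# is the latest gaze sample from the tracker.
print(stimulus_position((0.0, 0.0), anchor_px=(512, 384)))     # static
print(stimulus_position((0.0, 0.0), anchor_px=(530, 390)))     # contingent
```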
Figure 2: Possible fixation and stimuli locations

4.3 Apparatus

The images were projected on a rear-projection screen with a Barco 808 projector at a resolution of 1024x768 and a refresh rate of 120 Hz. The maximum luminance was 265 cd/m2 and the minimum luminance was 0.1 cd/m2. The participant sat 100 cm away from the screen. A chinrest was used to stabilize the head and to maintain viewing distance, with the right or left eye, as appropriate, centered on and at the level of the center of the screen. Observers' eye movements were recorded using the tracking subsystem. The experiment was conducted in a darkened room.

4.4 Participants

Four university students (3 females and 1 male, ranging in age from 20 to 30, average age 25) participated in the study. All participants had uncorrected distance visual acuity of 20/30 or better and could see clearly at the viewing distance without glasses. Prior to the experiment, a series of simple visual tests (visual acuity, color discrimination, and perimetry) were conducted to ensure that all participants had healthy visual fields. Written informed consent was obtained from all participants in accordance with a protocol approved by the York University Ethics Board.

5 Results

Figure 3: Results for all four participants

Figure 3 shows the results obtained for the four subjects. The figure shows the percentage of detected stimuli as a function of location: away from the blind spot (expected to be detected), within the blind spot (expected to never be detected), or on the border of the blind spot (where detection rates were expected to be intermediate). In all cases, stimuli presented within a participant's blind spot were not detected. However, there were significant differences between the static and contingent conditions (χ2(2) = 8.44, p = .015), particularly in the detection rates outside of the blind spot (see Figure 3). In the static case, participants often failed to detect stimuli that were intended to be placed outside of their blind spot. The pattern suggests that the blind spot in the static condition was effectively wider in diameter than for the contingent conditions and for the blind spot determined by Goldman perimetry. For all four participants, the number of detected stimuli outside the blind spot was closer to the expected 100% in the contingent case and smaller in the static case. The correlation between expected performance and the contingent data is significantly higher than for the static case, as can be observed in Table 1. A Wilcoxon signed-ranks test indicated that there was no significant difference between the contingent and expected conditions (Z = -1.65, p = .098), yet there were significant differences between the static and expected conditions (Z = -2.97, p = .003) and between the static and contingent conditions (Z = -3.12, p = .002).

Table 1: Correlation between expected performance (EXP) and experimental results for static (ST) and contingent (CON) conditions. Bold face indicates significant correlations (p < 0.05).

Part.   EXP-ST   EXP-CON   ST-CON
TA      0.52     0.81      0.67
OI      0.70     0.76      0.86
DS      0.43     0.78      0.60
IT      0.46     0.63      0.29
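The analysis reported above can be sketched as follows with placeholder detection rates (the numbers below are illustrative, not the study's data): per-location detection rates are correlated with the expected rates, and Wilcoxon signed-rank tests compare the conditions.

```python
# Sketch of the analysis described above, with placeholder data (not the
# study's results): correlations between expected and observed detection
# rates per stimulus location, plus Wilcoxon signed-rank tests between
# conditions.
import numpy as np
from scipy import stats

# One detection rate per stimulus location (15 locations, 2 repeats each, so
# rates of 0, 0.5, or 1). Expected rates are 1 outside the blind spot, 0.5 on
# the border, 0 inside (illustrative values).
expected   = np.array([1, 1, 1, 1, 1, .5, .5, .5, .5, .5, 0, 0, 0, 0, 0])
static     = np.array([.5, 1, .5, .5, 1, .5, 0, .5, .5, 0, 0, 0, 0, 0, 0])
contingent = np.array([1, 1, 1, .5, 1, .5, .5, 1, 0, .5, 0, 0, 0, 0, 0])

r_static, _ = stats.pearsonr(expected, static)
r_contingent, _ = stats.pearsonr(expected, contingent)
print(f"EXP-ST r = {r_static:.2f}, EXP-CON r = {r_contingent:.2f}")

# Paired comparisons across stimulus locations (the paper pools over
# participants; here just one illustrative set).
print(stats.wilcoxon(contingent, expected))
print(stats.wilcoxon(static, expected))
print(stats.wilcoxon(static, contingent))
```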
6 Discussion

The highest correlation between the predicted and actual results was found for the gaze-contingent conditions. In other words, when compared with the perimetry measurements, the blind spot mapping was more accurate in contingent trials than in static trials. Further, it appears that more stimuli went undetected outside of the mapped blind spot in the static condition than in the contingent condition. Thus, the blind spot was effectively smaller in diameter and better matched to the predicted size for the contingent trials than for the static trials.

Another experimental hypothesis was that the gaze-contingent stimuli should be detected at the same rate as in the static condition. The difference between static detection and contingent detection could potentially be explained by two factors: errors in tracking estimation and imprecision in fixation. Errors in tracking include the system's inaccuracy and latency. There are several contributing factors to latency, and the end-to-end latency using the Barco projector is 19.22 - 27.22 ms [Vinnikov 2009]. However, the participants had to fixate, and thus the eye movements during stimulus presentation should be very small and errors due to latency should be negligible. In particular, the variability of fixation averaged 0.59 degrees across participants. On the other hand, a potentially more important factor is the quality of the gaze tracking calibration and eye tracker estimation errors. It is important to note that errors in tracking would predict that the blind spot estimation would be worse under contingent compared to static conditions, which is opposite to the results.
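A back-of-envelope calculation illustrates why latency is negligible during fixation but would matter for the dynamic evaluation discussed under Future Work: the spatial misregistration of a gaze-contingent stimulus is roughly the eye velocity multiplied by the end-to-end latency. The eye velocities below are typical textbook values assumed for illustration; only the latency range comes from the measurements above.

```python
# Back-of-envelope sketch of latency-induced misregistration: error is roughly
# eye velocity times end-to-end latency. The velocities are assumed typical
# values, not measurements from this study; the latency range is the one
# quoted above (19.22 - 27.22 ms).
def misregistration_deg(eye_velocity_deg_per_s, latency_ms):
    return eye_velocity_deg_per_s * (latency_ms / 1000.0)

for label, velocity in [("fixational drift", 0.5),     # deg/s, assumed
                        ("smooth pursuit", 20.0),       # deg/s, assumed
                        ("saccade", 300.0)]:            # deg/s, assumed
    low = misregistration_deg(velocity, 19.22)
    high = misregistration_deg(velocity, 27.22)
    print(f"{label:17s}: {low:.2f} - {high:.2f} deg of error")
```

During fixation this error is on the order of a hundredth of a degree, far smaller than the 0.59 degree fixation variability reported above, whereas during pursuit or saccades it grows to a degree or more.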
Since the effective blind spot size and the correlation with predictions were better in the contingent case than in the static case, we concluded that errors in subjects' fixation were a bigger source of variability than GCD tracking errors. In other words, the difference between static and contingent performance appeared to be mainly due to variability in subjects' fixations, which was compensated for in the GCD case but not in the static case. Once again, we found that the average variation in fixation was 0.59 degrees. In the static case, this would cause misregistration of the presented stimuli with the expected visual field map and produce errors in blind spot localization, particularly around the boundary of the blind spot.

It can be concluded that the VFB-GCE technique is a useful method to assess the contingency of a system. Specifically, this methodology can be useful when peripheral contingency is important, since it relies not only on foveal fixations but also on visual properties of other areas of the visual field, such as the blind spot. This can be very important for visual disease simulations as well as for experiments with peripheral vision, since the method ensures that a much greater area of the visual field is properly mapped than just the foveated region covered by traditional calibration. VFB-GCE relies on measurement of detection ability at this objectively known and characterized visual field landmark. The mismatch between the GCD and clinical maps of such a well-defined and clear landmark reflects errors in contingency. The technique is simple and relies on straightforward perceptual judgements that do not depend on the fidelity of the display or require measurement of performance decrements on tasks such as visual search. In tasks such as search, subjects may have spare capacity, and negative effects of errors in contingency may affect task difficulty without affecting performance measures.

7 Future Work

It will be interesting to compare the methods of GCD evaluation of visual defect simulation described in this paper with previously used methods. We evaluated the technique described in this paper during stable fixation. However, the technique can also be extended to evaluate dynamic contingency. In other words, the system should be tested under conditions in which the participants are performing different types of eye movements, such as saccades. Errors due to latency, for instance, would be expected to give increased localization discrepancies during pursuit eye movements or immediately following saccades. Furthermore, we intend to conduct experiments with clinical populations and use different scotomas as references for contingency validation outside of the fovea.

8 Summary

In this paper we presented a perimetry-based detection task to evaluate the effectiveness of a system's contingency. The method employs visual blind spot localization as well as foveal fixation. During the experimental procedure, gaze-contingent and static conditions were compared. The experimental results showed that the stimulus detection rate was more closely correlated with the expected results for the contingent conditions than for the static conditions. The technique relies on localization of natural visual field features and could be extended to localize pathological scotomas or even full visual field maps. This allows considerable flexibility for use with patient populations, particularly those that cannot foveate targets. Furthermore, it can be useful for the evaluation of GCD systems when the designers are interested in evaluating the visual field outside of the foveal region.

References

EL-MAR INC. 2002. Video Eye-Tracking System, 5.0 ed. EL-MAR VISION2000, June.

HUANG, H. 2004. A Calibrated Combined Head-Eye Tracking System. Master's thesis, York University.

INTERSENSE. 2002. IS-900 Theory of Operation and Specification. InterSense's IS-900 VET & VWT.

LOSCHKY, L. C., MCCONKIE, G. W., YANG, J., AND MILLER, M. 2005. The limits of visual resolution in natural scene viewing. Visual Cognition, 1057-1092.

PARKHURST, D., CULURCIELLO, E., AND NIEBUR, E. 2000. Evaluating variable resolution displays with visual search: task performance and eye movements. 105-109.

RAYNER, K. 1998. Eye movements in reading and information processing: 20 years of research. Psychological Bulletin 124, 3, 372-422.

VINNIKOV, M., ALLISON, R., AND SWIERAD, D. 2008. Real-time simulation of visual defects with gaze-contingent display. ETRA, 127-130.

VINNIKOV, M. 2009. Gaze-Contingent Real-Time Simulation of Impaired Vision in Naturalistic Displays. Master's thesis, York University.

WATSON, B., WALKER, N., HODGES, L. F., AND WORDEN, A. 1996. Effectiveness of peripheral level of detail degradation when used with head-mounted displays. Tech. rep., GVU Center, Georgia Institute of Technology.

WATSON, B., WALKER, N., HODGES, L. F., AND WORDEN, A. 1997. Managing level of detail through peripheral degradation: effects on search performance with a head-mounted display. ACM Transactions on Computer-Human Interaction 4, 4, 323-346.


                                                                           266

Contenu connexe

En vedette

Urban transportation system - methods of route assignment
Urban transportation system - methods of route assignmentUrban transportation system - methods of route assignment
Urban transportation system - methods of route assignmentStudent
 
Introduction to urban transportation
Introduction to urban transportationIntroduction to urban transportation
Introduction to urban transportationAli Alraouf
 
Zena cta _ci_30092011
Zena cta _ci_30092011Zena cta _ci_30092011
Zena cta _ci_30092011oscargaliza
 
משחקי מלחמה
משחקי מלחמהמשחקי מלחמה
משחקי מלחמהhaimkarel
 
JudCon - Aerogear Android
JudCon - Aerogear AndroidJudCon - Aerogear Android
JudCon - Aerogear AndroidDaniel Passos
 
TEMA 5A Possessive Adjectives
TEMA 5A Possessive AdjectivesTEMA 5A Possessive Adjectives
TEMA 5A Possessive AdjectivesSenoraAmandaWhite
 
Prk acta intercentos_13062012
Prk acta intercentos_13062012Prk acta intercentos_13062012
Prk acta intercentos_13062012oscargaliza
 
שיעור שתיים מעבד התמלילים
שיעור שתיים   מעבד התמליליםשיעור שתיים   מעבד התמלילים
שיעור שתיים מעבד התמליליםhaimkarel
 
Predstavljanje poslovanja - press konferencija 15.06.12
Predstavljanje poslovanja - press konferencija 15.06.12Predstavljanje poslovanja - press konferencija 15.06.12
Predstavljanje poslovanja - press konferencija 15.06.12TDR d.o.o Rovinj
 
Acordo hesperia illa da toxa
Acordo hesperia illa da toxaAcordo hesperia illa da toxa
Acordo hesperia illa da toxaoscargaliza
 
akku laptops DE, Laptop akku DE, Notebook akku and laptop adapter, Billige el...
akku laptops DE, Laptop akku DE, Notebook akku and laptop adapter, Billige el...akku laptops DE, Laptop akku DE, Notebook akku and laptop adapter, Billige el...
akku laptops DE, Laptop akku DE, Notebook akku and laptop adapter, Billige el...apple1012
 
Galicia sindical 1 maio
Galicia sindical 1 maioGalicia sindical 1 maio
Galicia sindical 1 maiooscargaliza
 
Anexo ás normas, calendario previo (aprobado)
Anexo ás normas, calendario previo  (aprobado)Anexo ás normas, calendario previo  (aprobado)
Anexo ás normas, calendario previo (aprobado)oscargaliza
 
ירושלים מחיר השלום
ירושלים מחיר   השלוםירושלים מחיר   השלום
ירושלים מחיר השלוםhaimkarel
 
M. Holovaty, Концепции автоматизированного тестирования
M. Holovaty, Концепции автоматизированного тестированияM. Holovaty, Концепции автоматизированного тестирования
M. Holovaty, Концепции автоматизированного тестированияAlex
 

En vedette (20)

Urban transportation system - methods of route assignment
Urban transportation system - methods of route assignmentUrban transportation system - methods of route assignment
Urban transportation system - methods of route assignment
 
Introduction to urban transportation
Introduction to urban transportationIntroduction to urban transportation
Introduction to urban transportation
 
Zena cta _ci_30092011
Zena cta _ci_30092011Zena cta _ci_30092011
Zena cta _ci_30092011
 
משחקי מלחמה
משחקי מלחמהמשחקי מלחמה
משחקי מלחמה
 
JudCon - Aerogear Android
JudCon - Aerogear AndroidJudCon - Aerogear Android
JudCon - Aerogear Android
 
Fragments
FragmentsFragments
Fragments
 
TEMA 5A Possessive Adjectives
TEMA 5A Possessive AdjectivesTEMA 5A Possessive Adjectives
TEMA 5A Possessive Adjectives
 
Prk acta intercentos_13062012
Prk acta intercentos_13062012Prk acta intercentos_13062012
Prk acta intercentos_13062012
 
TEMA 1B GRAMMAR ADJECTIVES
TEMA 1B GRAMMAR ADJECTIVESTEMA 1B GRAMMAR ADJECTIVES
TEMA 1B GRAMMAR ADJECTIVES
 
שיעור שתיים מעבד התמלילים
שיעור שתיים   מעבד התמליליםשיעור שתיים   מעבד התמלילים
שיעור שתיים מעבד התמלילים
 
Predstavljanje poslovanja - press konferencija 15.06.12
Predstavljanje poslovanja - press konferencija 15.06.12Predstavljanje poslovanja - press konferencija 15.06.12
Predstavljanje poslovanja - press konferencija 15.06.12
 
Acordo hesperia illa da toxa
Acordo hesperia illa da toxaAcordo hesperia illa da toxa
Acordo hesperia illa da toxa
 
akku laptops DE, Laptop akku DE, Notebook akku and laptop adapter, Billige el...
akku laptops DE, Laptop akku DE, Notebook akku and laptop adapter, Billige el...akku laptops DE, Laptop akku DE, Notebook akku and laptop adapter, Billige el...
akku laptops DE, Laptop akku DE, Notebook akku and laptop adapter, Billige el...
 
Galicia sindical 1 maio
Galicia sindical 1 maioGalicia sindical 1 maio
Galicia sindical 1 maio
 
1 15210 89532
1 15210 895321 15210 89532
1 15210 89532
 
การแบ่งภูมิภาค
การแบ่งภูมิภาคการแบ่งภูมิภาค
การแบ่งภูมิภาค
 
Anexo ás normas, calendario previo (aprobado)
Anexo ás normas, calendario previo  (aprobado)Anexo ás normas, calendario previo  (aprobado)
Anexo ás normas, calendario previo (aprobado)
 
ירושלים מחיר השלום
ירושלים מחיר   השלוםירושלים מחיר   השלום
ירושלים מחיר השלום
 
M. Holovaty, Концепции автоматизированного тестирования
M. Holovaty, Концепции автоматизированного тестированияM. Holovaty, Концепции автоматизированного тестирования
M. Holovaty, Концепции автоматизированного тестирования
 
สมเด็จพระรามาธิบดีที่ 1
สมเด็จพระรามาธิบดีที่ 1สมเด็จพระรามาธิบดีที่ 1
สมเด็จพระรามาธิบดีที่ 1
 

Similaire à Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Visual Field Simulations

Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...Kalle
 
Validating a smartphone-based pedestrian navigation system prototype - An inf...
Validating a smartphone-based pedestrian navigation system prototype - An inf...Validating a smartphone-based pedestrian navigation system prototype - An inf...
Validating a smartphone-based pedestrian navigation system prototype - An inf...Beniamino Murgante
 
Implementation on Quality of Control for Image Based Control Systems using Al...
Implementation on Quality of Control for Image Based Control Systems using Al...Implementation on Quality of Control for Image Based Control Systems using Al...
Implementation on Quality of Control for Image Based Control Systems using Al...YogeshIJTSRD
 
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural TasksRyan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural TasksKalle
 
IRJET- Driver Drowsiness Detection System using Computer Vision
IRJET- Driver Drowsiness Detection System using Computer VisionIRJET- Driver Drowsiness Detection System using Computer Vision
IRJET- Driver Drowsiness Detection System using Computer VisionIRJET Journal
 
Gait analysis report
Gait analysis reportGait analysis report
Gait analysis reportconoranthony
 
Schneider.2011.an open source low-cost eye-tracking system for portable real-...
Schneider.2011.an open source low-cost eye-tracking system for portable real-...Schneider.2011.an open source low-cost eye-tracking system for portable real-...
Schneider.2011.an open source low-cost eye-tracking system for portable real-...mrgazer
 
An Eye State Recognition System using Transfer Learning: Inception‑Based Deep...
An Eye State Recognition System using Transfer Learning: Inception‑Based Deep...An Eye State Recognition System using Transfer Learning: Inception‑Based Deep...
An Eye State Recognition System using Transfer Learning: Inception‑Based Deep...IRJET Journal
 
IRJET- Igaze-Eye Gaze Direction Evaluation to Operate a Virtual Keyboard for ...
IRJET- Igaze-Eye Gaze Direction Evaluation to Operate a Virtual Keyboard for ...IRJET- Igaze-Eye Gaze Direction Evaluation to Operate a Virtual Keyboard for ...
IRJET- Igaze-Eye Gaze Direction Evaluation to Operate a Virtual Keyboard for ...IRJET Journal
 
Skovsgaard Small Target Selection With Gaze Alone
Skovsgaard Small Target Selection With Gaze AloneSkovsgaard Small Target Selection With Gaze Alone
Skovsgaard Small Target Selection With Gaze AloneKalle
 
Séminaire IA & VA- Yassine Ruichek, UTBM
Séminaire IA & VA- Yassine Ruichek, UTBMSéminaire IA & VA- Yassine Ruichek, UTBM
Séminaire IA & VA- Yassine Ruichek, UTBMMahdi Zarg Ayouna
 
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...Kalle
 
Defect Detection and Classification of Optical Components : A Review
Defect Detection and Classification of Optical Components : A ReviewDefect Detection and Classification of Optical Components : A Review
Defect Detection and Classification of Optical Components : A ReviewIRJET Journal
 
Mc Kenzie An Eye On Input Research Challenges In Using The Eye For Computer I...
Mc Kenzie An Eye On Input Research Challenges In Using The Eye For Computer I...Mc Kenzie An Eye On Input Research Challenges In Using The Eye For Computer I...
Mc Kenzie An Eye On Input Research Challenges In Using The Eye For Computer I...Kalle
 
IRJET- Retinal Health Diagnosis using Image Processing
IRJET- Retinal Health Diagnosis using Image ProcessingIRJET- Retinal Health Diagnosis using Image Processing
IRJET- Retinal Health Diagnosis using Image ProcessingIRJET Journal
 
A Survey on Retinal Area Detector From Scanning Laser Ophthalmoscope (SLO) Im...
A Survey on Retinal Area Detector From Scanning Laser Ophthalmoscope (SLO) Im...A Survey on Retinal Area Detector From Scanning Laser Ophthalmoscope (SLO) Im...
A Survey on Retinal Area Detector From Scanning Laser Ophthalmoscope (SLO) Im...IRJET Journal
 
Discovering Anomalies Based on Saliency Detection and Segmentation in Surveil...
Discovering Anomalies Based on Saliency Detection and Segmentation in Surveil...Discovering Anomalies Based on Saliency Detection and Segmentation in Surveil...
Discovering Anomalies Based on Saliency Detection and Segmentation in Surveil...ijtsrd
 
ORAL CANCER DETECTION USING RNN
ORAL CANCER DETECTION USING RNNORAL CANCER DETECTION USING RNN
ORAL CANCER DETECTION USING RNNIRJET Journal
 
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDP
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDPAN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDP
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDPIRJET Journal
 

Similaire à Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Visual Field Simulations (20)

Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
 
Validating a smartphone-based pedestrian navigation system prototype - An inf...
Validating a smartphone-based pedestrian navigation system prototype - An inf...Validating a smartphone-based pedestrian navigation system prototype - An inf...
Validating a smartphone-based pedestrian navigation system prototype - An inf...
 
Implementation on Quality of Control for Image Based Control Systems using Al...
Implementation on Quality of Control for Image Based Control Systems using Al...Implementation on Quality of Control for Image Based Control Systems using Al...
Implementation on Quality of Control for Image Based Control Systems using Al...
 
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural TasksRyan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
 
IRJET- Driver Drowsiness Detection System using Computer Vision
IRJET- Driver Drowsiness Detection System using Computer VisionIRJET- Driver Drowsiness Detection System using Computer Vision
IRJET- Driver Drowsiness Detection System using Computer Vision
 
Gait analysis report
Gait analysis reportGait analysis report
Gait analysis report
 
Schneider.2011.an open source low-cost eye-tracking system for portable real-...
Schneider.2011.an open source low-cost eye-tracking system for portable real-...Schneider.2011.an open source low-cost eye-tracking system for portable real-...
Schneider.2011.an open source low-cost eye-tracking system for portable real-...
 
An Eye State Recognition System using Transfer Learning: Inception‑Based Deep...
An Eye State Recognition System using Transfer Learning: Inception‑Based Deep...An Eye State Recognition System using Transfer Learning: Inception‑Based Deep...
An Eye State Recognition System using Transfer Learning: Inception‑Based Deep...
 
IRJET- Igaze-Eye Gaze Direction Evaluation to Operate a Virtual Keyboard for ...
IRJET- Igaze-Eye Gaze Direction Evaluation to Operate a Virtual Keyboard for ...IRJET- Igaze-Eye Gaze Direction Evaluation to Operate a Virtual Keyboard for ...
IRJET- Igaze-Eye Gaze Direction Evaluation to Operate a Virtual Keyboard for ...
 
Skovsgaard Small Target Selection With Gaze Alone
Skovsgaard Small Target Selection With Gaze AloneSkovsgaard Small Target Selection With Gaze Alone
Skovsgaard Small Target Selection With Gaze Alone
 
Séminaire IA & VA- Yassine Ruichek, UTBM
Séminaire IA & VA- Yassine Ruichek, UTBMSéminaire IA & VA- Yassine Ruichek, UTBM
Séminaire IA & VA- Yassine Ruichek, UTBM
 
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
 
Defect Detection and Classification of Optical Components : A Review
Defect Detection and Classification of Optical Components : A ReviewDefect Detection and Classification of Optical Components : A Review
Defect Detection and Classification of Optical Components : A Review
 
Mc Kenzie An Eye On Input Research Challenges In Using The Eye For Computer I...
Mc Kenzie An Eye On Input Research Challenges In Using The Eye For Computer I...Mc Kenzie An Eye On Input Research Challenges In Using The Eye For Computer I...
Mc Kenzie An Eye On Input Research Challenges In Using The Eye For Computer I...
 
IRJET- Retinal Health Diagnosis using Image Processing
IRJET- Retinal Health Diagnosis using Image ProcessingIRJET- Retinal Health Diagnosis using Image Processing
IRJET- Retinal Health Diagnosis using Image Processing
 
A Survey on Retinal Area Detector From Scanning Laser Ophthalmoscope (SLO) Im...
A Survey on Retinal Area Detector From Scanning Laser Ophthalmoscope (SLO) Im...A Survey on Retinal Area Detector From Scanning Laser Ophthalmoscope (SLO) Im...
A Survey on Retinal Area Detector From Scanning Laser Ophthalmoscope (SLO) Im...
 
Discovering Anomalies Based on Saliency Detection and Segmentation in Surveil...
Discovering Anomalies Based on Saliency Detection and Segmentation in Surveil...Discovering Anomalies Based on Saliency Detection and Segmentation in Surveil...
Discovering Anomalies Based on Saliency Detection and Segmentation in Surveil...
 
ORAL CANCER DETECTION USING RNN
ORAL CANCER DETECTION USING RNNORAL CANCER DETECTION USING RNN
ORAL CANCER DETECTION USING RNN
 
F0932733
F0932733F0932733
F0932733
 
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDP
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDPAN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDP
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDP
 

Plus de Kalle

Blignaut Visual Span And Other Parameters For The Generation Of Heatmaps
Blignaut Visual Span And Other Parameters For The Generation Of HeatmapsBlignaut Visual Span And Other Parameters For The Generation Of Heatmaps
Blignaut Visual Span And Other Parameters For The Generation Of HeatmapsKalle
 
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...Kalle
 
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...Kalle
 
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...Kalle
 
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze Control
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze ControlUrbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze Control
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze ControlKalle
 
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...Kalle
 
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic TrainingTien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic TrainingKalle
 
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...Kalle
 
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser OphthalmoscopeStevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser OphthalmoscopeKalle
 
San Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
San Agustin Evaluation Of A Low Cost Open Source Gaze TrackerSan Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
San Agustin Evaluation Of A Low Cost Open Source Gaze TrackerKalle
 
Rosengrant Gaze Scribing In Physics Problem Solving
Rosengrant Gaze Scribing In Physics Problem SolvingRosengrant Gaze Scribing In Physics Problem Solving
Rosengrant Gaze Scribing In Physics Problem SolvingKalle
 
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual SearchQvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual SearchKalle
 
Prats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement StudyPrats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement StudyKalle
 
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...Kalle
 
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...Kalle
 
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...Kalle
 
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...Kalle
 
Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...
Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...
Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...Kalle
 
Nagamatsu Gaze Estimation Method Based On An Aspherical Model Of The Cornea S...
Nagamatsu Gaze Estimation Method Based On An Aspherical Model Of The Cornea S...Nagamatsu Gaze Estimation Method Based On An Aspherical Model Of The Cornea S...
Nagamatsu Gaze Estimation Method Based On An Aspherical Model Of The Cornea S...Kalle
 
Mulligan Robust Optical Eye Detection During Head Movement
Mulligan Robust Optical Eye Detection During Head MovementMulligan Robust Optical Eye Detection During Head Movement
Mulligan Robust Optical Eye Detection During Head MovementKalle
 

Plus de Kalle (20)

Blignaut Visual Span And Other Parameters For The Generation Of Heatmaps
Blignaut Visual Span And Other Parameters For The Generation Of HeatmapsBlignaut Visual Span And Other Parameters For The Generation Of Heatmaps
Blignaut Visual Span And Other Parameters For The Generation Of Heatmaps
 
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
 
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...
 
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
 
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze Control
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze ControlUrbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze Control
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze Control
 
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
 
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic TrainingTien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
 
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...
Keywords: Head-Eye Tracking System, Gaze-Contingent Display, Contingency Evaluation

1 Introduction

Gaze-Contingent Display (GCD) applications are interactive real-time graphics applications that modify the content of a graphical display based on spatial or temporal characteristics of a user's eye movements. GCDs are often used to study the effects of visual field characteristics and visual mechanisms [Rayner 1998]. Many early GCD applications were intended to improve effective display resolution under limited video bandwidth, rendering or other constraints. Traditionally, such GCD systems have been evaluated in terms of bandwidth and CPU processing-speed improvements. Standard evaluation also includes latency measures such as gaze measurement time and impact on refresh rate. Such measures have the advantage of being easily specified and validated, but in many cases they are not sufficient to describe the impact of the system on perception or task performance. Furthermore, it is very important to assess a GCD in terms of the contingency of the display. Many existing GCD evaluation techniques rely on a participant's ability to fixate a predefined area on a screen.

2 Visual Field-Based Contingent Display Evaluation

When building a gaze-contingent system, one has to overcome hardware limitations and ensure that the inaccuracy and latency of the system do not affect the contingent experience. Furthermore, in complex simulations where the entire visual field is modeled, it is very important to check that the contingency is effective not only in the central regions of the visual field but also in the periphery.

We have developed a visual-field-based contingent display evaluation (VFB-GCE) technique to evaluate the ability of a GCD system to present an arbitrary visual field to a user. The basis for this assessment method is Goldmann's perimetry test, which is usually used to detect and determine the location and the size of visual scotomas.

The technique relies on the fact that a GCD application with a properly modeled visual field can produce perimetry measurements comparable to those acquired from participants under controlled and stable fixation, as in clinical tests. In other words, VFB-GCE compares participants' psychophysical detection of targets presented in a gaze-contingent visual field simulation (where the head and eye position is not fixed) and in a static perimetry session (where the gaze locations are carefully controlled and monitored). This technique can be applied with visually impaired patients as well as with visually healthy individuals. With healthy individuals the technique relies on the localization of the visual blind spot, which is a natural scotoma common to all humans. Furthermore, the blind spot's position is known or can be easily determined, and any stimulus shown in this area will not be perceived by an observer. Simple targets can be used, removing dependencies on display capabilities (relative to human acuities) and adding minimal processing load or latency to the GCD system under test.
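To make the comparison concrete, the following minimal sketch (not the authors' implementation; the blind-spot centre, radius, and circular shape are illustrative assumptions) classifies a test location as inside, on the border of, or outside the expected blind-spot region, relative to either a fixed fixation point (static perimetry) or the currently measured gaze position (gaze-contingent presentation).

```python
import math

# Assumed, illustrative values: the blind spot of the right eye lies roughly
# 15 deg temporal and 1.5 deg below fixation and is a few degrees in radius.
BLIND_SPOT_CENTER_DEG = (15.0, -1.5)   # (horizontal, vertical) offset from the reference point
BLIND_SPOT_RADIUS_DEG = 2.7

def classify_stimulus(stim_deg, reference_deg, border_tol_deg=0.5):
    """Classify a stimulus as 'inside', 'border', or 'outside' the modeled blind spot.

    stim_deg      -- stimulus position on the screen, in degrees of visual angle
    reference_deg -- reference point in the same coordinates: the fixation cross
                     in the static condition, or the measured gaze position in
                     the gaze-contingent condition
    """
    dx = stim_deg[0] - (reference_deg[0] + BLIND_SPOT_CENTER_DEG[0])
    dy = stim_deg[1] - (reference_deg[1] + BLIND_SPOT_CENTER_DEG[1])
    r = math.hypot(dx, dy)
    if r < BLIND_SPOT_RADIUS_DEG - border_tol_deg:
        return "inside"    # expected detection rate near 0%
    if r > BLIND_SPOT_RADIUS_DEG + border_tol_deg:
        return "outside"   # expected detection rate near 100%
    return "border"        # intermediate detection rates expected
```

With a perfectly contingent display, this classification, and therefore the detection rate at each test location, should be the same whether the reference point is the static fixation cross or the moment-to-moment gaze estimate.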
3 System Description

We have developed a multi-threaded, real-time GCD system that simultaneously tracks the user's eye movements and head pose. The multi-threaded structure of the system dissociates the eye and head trackers from the graphical components of the system, thus allowing multi-tasking. This minimizes the delay in data collection from the trackers. The system is separated into three primary components: tracking, virtual environment rendering, and image processing. The relationships and data flow between the primary components are illustrated in Figure 1. The tracking module is responsible for calculating the line of sight and point of regard from head and eye coordinates that are received from the tracking devices. This gaze information is then used to transform a three-dimensional (spherical) visual field into a two-dimensional map in the display space. The two-dimensional map is essential for creating a realistic visual defect simulation. The current system can simulate visual defects and characteristics such as visual blur, visual distortions, changes in the color values of the image, and glare. In order to ensure fast image processing, all of these operations are performed at the hardware level. The system also supports simulation of arbitrary visual defects for experimental and demonstration purposes by allowing a combination of several basic visual defects to be processed. More details can be found in Vinnikov et al. [Vinnikov et al. 2008; Vinnikov 2009].

Figure 1: System structure

The current head-eye tracking system is based on a video eye-tracking subsystem [EL-MAR inc. 2002] and an acoustic-inertial head-tracking subsystem [InterSense 2002]. The head tracker is rigidly mounted on the eye tracker's head gear. Both the head tracker and eye tracker work at a synchronized frequency of 120 Hz, and data are sent to the workstation over RS232 serial ports at a baud rate of 38400. The system concurrently collects data from both trackers and performs a series of homogeneous transformations in order to determine the user's real-time gaze direction and position. For more details see Huang [2004]. The system was developed for the Linux operating system on a PC with a 3.2 GHz Pentium 4 processor and 4 GB of RAM. All image processing operations were done with Nvidia GeForce 8800 support.
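As a rough illustration of how head and eye data can be combined into a point of regard, the sketch below chains a head pose with an eye-in-head gaze direction and intersects the resulting ray with the screen plane. It is a simplified sketch only: the frame conventions, the planar-screen intersection, and all numeric values are assumptions for illustration, not the transformations actually used in the system (see Huang [2004] for those).

```python
import numpy as np

def point_of_regard(head_R, head_t, eye_offset_head, gaze_dir_head, screen_z=0.0):
    """Estimate the point of regard on the screen plane z = screen_z.

    head_R          -- 3x3 head rotation (head frame -> world/screen frame)
    head_t          -- head position in world coordinates (metres)
    eye_offset_head -- eye position relative to the head-tracker origin, head frame
    gaze_dir_head   -- unit gaze direction in the head frame (from the eye tracker)
    """
    eye_world = head_R @ eye_offset_head + head_t   # eye position in the world frame
    dir_world = head_R @ gaze_dir_head              # gaze direction in the world frame
    if abs(dir_world[2]) < 1e-9:
        raise ValueError("gaze ray is parallel to the screen plane")
    s = (screen_z - eye_world[2]) / dir_world[2]    # ray parameter at the screen plane
    return eye_world + s * dir_world                # 3-D point of regard on the screen

# Illustrative usage with made-up numbers: head 1 m from the screen, looking straight ahead.
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])
print(point_of_regard(R, t, np.array([0.03, 0.0, 0.0]), np.array([0.0, 0.0, -1.0])))
```

In a pipeline of this kind, such a computation would run on every synchronized 120 Hz tracker sample before the rendering thread consumes the result.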
Each ing subsystem [EL-MAR inc. 2002] and an acoustic-inertial head condition was repeated twice. tracking subsystem [InterSense 2002]. The head tracker is rigidly mounted on the eye tracker’s head gear. Both the head tracker and eye tracker work at a synchronized frequency of 120Hz and data are sent to the workstation over an RS232 serial ports at a baud-rate of 38400. The system concurrently collects data from both track- ers and performs a series of homogeneous transformations in order to determine the user’s real-time gaze direction and position. For more details see Huang [2004]. The system was developed for the Linux operating system on a PC with a 3.2 GHz Pentium 4 processor and 4 GB of RAM. As well all image processing operations were done with Nvidia GeForce 8800 support. 4 Experimental Setup Figure 2: Possible fixation and stimuli locations We have setup an experiment to verify the VFB-GCE technique and to test our own system’s performance. The independent variable for this experiment was the display type (static and GCD). During the static display trials, the participant’s gaze was not tracked except 4.3 Apparatus to monitor fixation as in standard perimetry. During GCD trials, eye and head tracking were used and the participant’s gaze direc- The images were projected on rear projection screens with a Barco tion was estimated in real time. Our experimental hypothesis was 808 projector with 1024x768x120Hz resolution. The maximum lu- that the detection rate of a stimuli will be the same in both gaze minance was 265 cd/m2 and minimum luminance was 0.1cd/m2 . contingent and static condition, if the the GCD has no perceptual The participant sat 100 cm away from the screen. A chinrest was impact. Furthermore, the clinically determined visual blind spot used to stabilize the head and to maintain viewing distance with 264
4.3 Apparatus

The images were projected on a rear-projection screen with a Barco 808 projector at 1024x768 resolution and a 120 Hz refresh rate. The maximum luminance was 265 cd/m2 and the minimum luminance was 0.1 cd/m2. The participant sat 100 cm away from the screen. A chinrest was used to stabilize the head and to maintain viewing distance, with the right or left eye, as appropriate, centered on and at the level of the center of the screen. Observers' eye movements were recorded using the tracking subsystem. The experiment was conducted in a darkened room.

4.4 Participants

Four university students (3 females and 1 male, ranging in age from 20 to 30, average age 25) participated in the study. All participants had uncorrected distance visual acuity of 20/30 or better and could see clearly at the viewing distance without glasses. Prior to the experiment, a series of simple visual tests (visual acuity, color discrimination, and perimetry) was conducted to ensure that all participants had healthy visual fields. Written informed consent was obtained from all participants in accordance with a protocol approved by the York University Ethics Board.

5 Results

Figure 3 shows the results obtained for the four subjects. The figure shows the percentage of detected stimuli as a function of location: away from the blind spot (expected to be detected), within the blind spot (expected never to be detected), or on the border of the blind spot (where detection rates were expected to be intermediate). In all cases, stimuli presented within a participant's blind spot were not detected. However, there were significant differences between the static and contingent conditions (χ2(2) = 8.44, p = .015), particularly in the detection rates outside of the blind spot (see Figure 3). In the static case, participants often failed to detect stimuli that were intended to be placed outside of their blind spot. The pattern suggests that the blind spot in the static condition was effectively wider in diameter than for the contingent condition and for the blind spot determined by Goldmann perimetry. For all four participants, the number of detected stimuli outside the blind spot was closer to the expected 100% in the contingent case and smaller in the static case. The correlation between expected performance and contingent data was significantly higher than for the static case, as can be observed from Table 1. A Wilcoxon signed-ranks test indicated that there was no significant difference between the contingent and expected conditions (Z = -1.65, p = .098), yet there were significant differences between static and expected (Z = -2.97, p = .003) and between static and contingent conditions (Z = -3.12, p = .002).

Figure 3: Results for all four participants

Table 1: Correlation between expected performance (EXP) and experimental results for static (ST) and contingent (CON) conditions. Bold face indicates significant correlations (p < 0.05).

Part.   EXP-ST   EXP-CON   ST-CON
TA      0.52     0.81      0.67
OI      0.70     0.76      0.86
DS      0.43     0.78      0.60
IT      0.46     0.63      0.29
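The statistics reported above can be reproduced in outline with a few lines of SciPy. The snippet below is a sketch only: the per-location detection rates are invented for illustration (they are not the study's data), and the use of Pearson correlation is an assumption, since the paper does not state which correlation coefficient was computed for Table 1.

```python
import numpy as np
from scipy.stats import friedmanchisquare, pearsonr, wilcoxon

# Hypothetical per-location detection rates (%) for one participant:
# 5 locations inside, 5 on the border of, and 5 outside the blind spot.
expected   = np.array([0, 0, 0, 0, 0,  50, 50, 50, 50, 50,  100, 100, 100, 100, 100])
static     = np.array([0, 0, 0, 0, 0,  25, 50,  0, 50, 25,   75,  50, 100,  75, 100])
contingent = np.array([0, 0, 0, 0, 0,  50, 25, 50, 75, 50,  100, 100, 100, 100,  75])

# Per-participant correlations, as in Table 1 (correlation type assumed).
r_exp_st, _  = pearsonr(expected, static)
r_exp_con, _ = pearsonr(expected, contingent)
r_st_con, _  = pearsonr(static, contingent)

# Omnibus comparison of the three conditions (chi-squared statistic, df = 2) ...
chi2, p_chi2 = friedmanchisquare(expected, static, contingent)

# ... followed by a pairwise Wilcoxon signed-rank comparison.
w_stat, p_w = wilcoxon(static, contingent)

print(r_exp_st, r_exp_con, r_st_con, chi2, p_chi2, w_stat, p_w)
```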
6 Discussion

The highest correlation between the predicted and actual results was for the gaze-contingent condition. In other words, when compared with perimetry measurements, the blind spot mapping was more accurate in contingent trials than in static trials. Further, it appears that more stimuli went undetected outside of the mapped blind spot in the static condition than in the contingent condition. Thus, the blind spot is effectively smaller in diameter and better matched to the predicted size for the contingent trials than for the static trials.

Another experimental hypothesis was that the gaze-contingent stimuli should be detected at the same rate as in the static condition. The difference between static detection and contingent detection could potentially be explained by two factors: errors in tracking estimation and imprecision in fixation. Errors in tracking included the system's inaccuracy and latency. There are several contributing factors to latency, and the end-to-end latency using the Barco projector is 19.22 to 27.22 ms [Vinnikov 2009]. However, the participants had to fixate, and thus the eye movements during stimulus presentation should be very small and errors due to latency should be negligible. In particular, the variability of fixation was on average 0.59 degrees across participants. On the other hand, a potentially more important factor is the quality of the gaze tracking calibration and eye tracker estimation errors. It is important to note that errors in tracking would predict that the blind spot estimation would be worse under contingent compared to static conditions, which is opposite to the results.
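A back-of-the-envelope calculation makes the latency argument concrete. The sketch below multiplies typical eye velocities (assumed textbook values, not measurements from this study) by the upper end of the reported end-to-end latency: during fixation the resulting misregistration is on the order of a hundredth of a degree, far below the 0.59° fixation variability, whereas during pursuit or saccades it would grow to half a degree or more.

```python
# Rough estimate of how far the rendered visual field lags the true gaze
# position for a given end-to-end latency. Eye velocities are assumed
# textbook values, not measurements from this study.
LATENCY_S = 0.027          # upper end of the reported 19.22-27.22 ms range

eye_velocities_deg_per_s = {
    "stable fixation (drift)": 0.5,
    "smooth pursuit":          20.0,
    "saccade":                 300.0,
}

for movement, velocity in eye_velocities_deg_per_s.items():
    error_deg = velocity * LATENCY_S   # displacement accumulated over one latency interval
    print(f"{movement}: ~{error_deg:.2f} deg of misregistration")
```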
cluded that errors in subject’s fixation were a bigger source of vari- ability than GCD tracking errors. In other words, difference be- H UANG , H. 2004. A Calibrated Combined Head-Eye Tracking tween the static and contingent performance appeared to be mainly System. Master’s thesis, York University. due to variability in subjects’ fixations, which was compensated for I NTER S ENSE. 2002. IS-900 Theory of Operation and Specification. in the GCD case but not in the static case. Once again, we found InterSense’s IS-900 VET & VWT. that average variation in fixation was 0.59 degrees. In the static case, this would cause misregistration of the presented stimuli with L OSCHKY, L. C., M C C ONKIE , G. W., YANG , J., AND M ILLER , the expected visual field map and produce errors in blind spot lo- M. 2005. The limits of visual resolution in natural scene view- calization, particularly around the boundary of the blind spot. ing. Visual Cognition, 1057–1092. It can be concluded that VFB-GCE technique is a useful method to PARKHURST, D., C ULURCIELLO , E., AND N IEBUR , E. 2000. assess the contingency of the system. Specifically, this methodol- Evaluating variable resolution displays with visual search: task ogy can be useful when peripheral contingency is important, since performance and eye movements. 105–109. it not only relies on foveal fixations but also relies on visual prop- R AYNER , K. 1998. Eye movements in reading and information erties of other areas of the visual field such as the blind spot. This processing: 20 years of research. Psychological Bulletin 124, 3, can be very important for visual disease simulations as well as for 372–422. experiments with peripheral vision, since the method ensures that a much greater area of the visual field is properly mapped versus V INNIKOV, M., A LLISON , R., AND S WIERAD , D. 2008. Real- only the foveated region in the traditional calibration. VFB-GCE time simulation of visual defects with gaze-contingent display. relies on measurement of detection ability in this objectively known ETRA, 127–130. and characterized visual field landmark. The mismatch between the GCD and clinical maps of such a well-defined and clear landmark V INNIKOV, M. 2009. Gaze-Contingent Real-Time Simulation of reflects errors in contingency. The technique is simple and relies on Impaired Vision in Naturalistic Displays. Master’s thesis, York straightforward perceptual judgements that do not depend on the fi- University. delity of the display or require measurement of performance decre- WATSON , B., WALKER , N., H ODGES , L. F., AND W ORDEN , ments on tasks such as visual search. In tasks such as search, sub- A. 1996. Effectiveness of peripheral level of detail degrada- jects may have spare capacity and negative effects of errors in con- tion when used with head-mounted displays. Tech. rep., GVU tingency may affect task difficulty without affecting performance Center, Georgia Institute of Technology. measures. WATSON , B., WALKER , N., H ODGES , L. F., AND W ORDEN , A. 1997. Managing level of detail through peripheral degra- 7 Future Work dation: effects on search performance with a head-mounted dis- play. ACM Transactions on ComputerHuman Interaction 4, 4, It will be interesting to compare the methods of GCD evaluation 323–346. of visual defect simulation described in this paper with previously 266