Saccadic model of eye movements for free-viewing condition

O. Le Meur and Z. Liu, Saccadic model of eye movements for free-viewing condition, Vision Research, 2015.

We propose a new framework to predict the visual scanpaths of observers while they freely watch a visual scene. The visual fixations are inferred from bottom-up saliency and several oculomotor biases. Bottom-up saliency is represented by a saliency map, whereas the oculomotor biases (saccade amplitudes and saccade orientations) are modeled from public eye tracking datasets. Our experiments show that the simulated scanpaths exhibit trends similar to those of human eye movements in a free-viewing condition. The generated scanpaths are more similar to human scanpaths than those generated by two existing methods. In addition, we show that computing saliency maps from the simulated visual scanpaths makes it possible to outperform existing saliency models.

Published in: Engineering

Saccadic model of eye movements for free-viewing condition

  1. Saccadic model of eye movements for free-viewing condition
     Olivier Le Meur (1), Zhi Liu (1,2), olemeur@irisa.fr
     (1) IRISA - University of Rennes 1, France
     (2) School of Communication and Information Engineering, Shanghai University, China
     April 28, 2015
  2. Preamble
     The following slides rely heavily upon the following paper:
     - O. Le Meur and Z. Liu, Saccadic model of eye movements for free-viewing condition, Vision Research, 2015, doi:10.1016/j.visres.2014.12.026.
     Acknowledgments: this work is supported in part by a Marie Curie International Incoming Fellowship within the 7th European Community Framework Programme under Grant Nos. 299202 and 911202, and in part by the National Natural Science Foundation of China under Grant Nos. 61171144 and 61471230.
  3. Outline
     1. Visual attention
     2. Saccadic model of eye movement
     3. Conclusion & Perspective
  4. Visual attention
     1. Visual attention: Presentation; Computational models; Performance; Conclusion.
  5. Introduction to visual attention (1/3)
     Natural visual scenes are cluttered and contain many different objects that cannot all be processed simultaneously. Where is Waldo, the young boy wearing the red-striped shirt...
     The amount of information coming down the optic nerve, about 10^8 to 10^9 bits per second, far exceeds what the brain is capable of processing.
  6. Introduction to visual attention (2/3)
     WE DO NOT SEE EVERYTHING AROUND US!!!
     Visual attention: Posner proposed the following definition (Posner, 1980). Visual attention is used:
     - to select important areas of our visual field (alerting);
     - to search for a target in cluttered scenes (searching).
     There are several kinds of visual attention:
     - overt visual attention: involving eye movements;
     - covert visual attention: without eye movements (covert fixations are not observable).
  7. Introduction to visual attention (3/3)
     Bottom-up vs. top-down:
     - Bottom-up: some things draw attention reflexively, in a task-independent way (involuntary, very quick, unconscious);
     - Top-down: some things draw volitional attention, in a task-dependent way (voluntary, very slow, conscious).
  8. Introduction to visual attention (3/3)
     Bottom-up vs. top-down:
     - Bottom-up: some things draw attention reflexively, in a task-independent way (involuntary, very quick, unconscious);
     - Top-down: some things draw volitional attention, in a task-dependent way (voluntary, very slow, conscious).
     This talk focuses on computational models of bottom-up overt visual attention.
  9. Computational models of bottom-up visual attention (1/3)
     Most of the computational models of visual attention have been motivated by the seminal work of Koch and Ullman (Koch and Ullman, 1985):
     - a plausible computational architecture to predict our gaze;
     - a set of feature maps processed in a massively parallel manner;
     - a single topographic saliency map.
  10. Computational models of bottom-up visual attention (2/3)
      Taxonomy of models:
      - information-theoretic models;
      - cognitive models;
      - graphical models;
      - spectral analysis models;
      - pattern classification models;
      - Bayesian models.
      Extracted from (Borji and Itti, 2013).
  11. Computational models of bottom-up visual attention (3/3)
      Cognitive models: as faithful as possible to the Human Visual System (HVS):
      - inspired by cognitive concepts;
      - based on the HVS properties.
      Extracted from (Borji and Itti, 2013).
  12. Performance on still images (1/3)
      The requirement of ground truth:
      - an eye tracker;
      - a panel of observers;
      - an appropriate protocol.
      Adapted from (Judd et al., 2009).
  13. Performance on still images (2/3)
      - Discrete fixation map f^i for the ith observer: f^i(x) = Σ_{k=1}^{M} δ(x − x_k), where M is the number of fixations and x_k is the kth fixation.
      - Continuous saliency map S: S(x) = (1/N) Σ_{i=1}^{N} f^i(x) ∗ G_σ(x), where N is the number of observers and G_σ is a 2D Gaussian function.
      More details in (Le Meur and Baccino, 2013). A short code sketch of this construction follows this slide.
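The continuous saliency map above is simply the average of the observers' Gaussian-smoothed fixation maps. Below is a minimal Python sketch of that construction; the image size, the value of sigma and the fixation lists are illustrative assumptions, not values taken from the slides.

```python
# Minimal sketch of the human saliency map construction described above:
# accumulate a discrete fixation map per observer, smooth it with a 2D
# Gaussian, then average over observers.
import numpy as np
from scipy.ndimage import gaussian_filter

def continuous_saliency_map(fixations_per_observer, height, width, sigma=25.0):
    """fixations_per_observer: list (one entry per observer) of (x, y) pixel fixations."""
    saliency = np.zeros((height, width), dtype=np.float64)
    for fixations in fixations_per_observer:
        fmap = np.zeros((height, width), dtype=np.float64)
        for x, y in fixations:                       # discrete fixation map f^i
            fmap[int(round(y)), int(round(x))] += 1.0
        saliency += gaussian_filter(fmap, sigma)     # f^i * G_sigma
    saliency /= max(len(fixations_per_observer), 1)  # average over the N observers
    if saliency.max() > 0:
        saliency /= saliency.max()                   # optional normalisation for display
    return saliency

# Example with two fake observers on a 256x256 image:
smap = continuous_saliency_map([[(100, 120), (130, 128)], [(98, 118)]], 256, 256)
```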
  14. Performance on still images (3/3)
      Performance in terms of linear correlation, extracted from (Borji et al., 2012):
      - more than 30 models.
  15. Conclusion (1/1)
      The picture is much clearer than 10 years ago! BUT...
      - How far are we from human performance?*
      - How to take advantage of top-down influences?*
      - What is the best score?*
      - What is the most representative dataset?*
      Important aspects of our visual system are clearly overlooked: current models implicitly assume that eyes are equally likely to move in any direction; systematic biases are not taken into account; the temporal dimension is not considered (static saliency map)...
      * Open challenges of visual attention, tutorial CVPR'13, Borji et al.
  16. Saccadic model of eye movement
      2. Saccadic model of eye movement: Presentation; Existing saccadic models; Systematic tendencies; Proposed model; Results; A focus on...; Limitations.
  17. Presentation (1/1)
      Eye movements are composed of fixations and saccades. A sequence of fixations is called a visual scanpath. The fundamental assumption is that scanpaths can be described by a first-order Markov process, i.e. each eye fixation only depends on the previous one.
      - The seminal work of (Ellis and Smith, 1985; Stark and Ellis, 1981) described a probabilistic approach where eye movements are modelled as a first-order Markov process.
  18. Existing saccadic models (1/6)
      - Saliency models and eye movement models are combined (Itti and Koch, 2000; Itti et al., 1998):
        • winner-take-all algorithm,
        • an attended area is inhibited for approximately 500-900 ms,
        • the focus jumps from one salient location to the next in approximately 30-70 ms.
      Limitations: the model is deterministic, given a set of parameters and a history of recent fixation locations, and it does not reproduce the systematic biases of our oculomotor system. A minimal sketch of the winner-take-all plus inhibition-of-return loop follows this slide.
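For readers who want to see the mechanism rather than the description, here is a small, self-contained sketch of a winner-take-all loop with inhibition of return in the spirit of (Itti and Koch, 2000); it is not the authors' implementation, and the inhibition radius and fixation count are arbitrary illustrative values.

```python
# Winner-take-all + inhibition-of-return scanpath generation: repeatedly pick
# the most salient location, then suppress a disk around it.
import numpy as np

def wta_ior_scanpath(saliency, n_fixations=10, inhibition_radius=30):
    sal = saliency.astype(np.float64).copy()
    h, w = sal.shape
    yy, xx = np.mgrid[0:h, 0:w]
    scanpath = []
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(sal), sal.shape)   # winner-take-all
        scanpath.append((x, y))
        mask = (xx - x) ** 2 + (yy - y) ** 2 <= inhibition_radius ** 2
        sal[mask] = 0.0                                      # inhibition of return
    return scanpath

# Example on a random "saliency map":
path = wta_ior_scanpath(np.random.rand(128, 128), n_fixations=5)
```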
  19. Existing saccadic models (2/6)
      - (Brockmann and Geisel, 2000)'s model is related to Lévy flights, a particular type of random walk with a step length that follows a heavy-tailed distribution:
        • the model is stochastic,
        • the model generates scanpaths similar in their nature to Lévy flights, i.e. they possess a power-law dependency in their magnitude distribution.
      Limitations: saliency information is not used; inhibition is not used.
      Figure: top, a model scanpath generated by a finite-variance random walk (Gaussian system); bottom, the same, except here the scanpath is a Lévy flight (Cauchy system). A small sampling sketch follows this slide.
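To make the contrast between a finite-variance walk and a heavy-tailed (Lévy-flight-like) walk concrete, here is an illustrative sample comparison; the distributions and parameters are placeholders chosen for the example, not taken from (Brockmann and Geisel, 2000).

```python
# Compare saccade-amplitude samples from a Gaussian random walk with samples
# from a heavy-tailed (Pareto) law, the kind of power-law step distribution
# underlying Levy-flight-like scanpaths.
import numpy as np

rng = np.random.default_rng(0)

gaussian_steps = np.abs(rng.normal(loc=0.0, scale=3.0, size=10000))   # finite variance
heavy_tailed_steps = rng.pareto(a=1.5, size=10000) + 1.0              # power-law tail

print("Gaussian: mean %.2f, 99th pct %.2f" % (gaussian_steps.mean(),
                                              np.percentile(gaussian_steps, 99)))
print("Heavy-tailed: mean %.2f, 99th pct %.2f" % (heavy_tailed_steps.mean(),
                                                  np.percentile(heavy_tailed_steps, 99)))
```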
  20. Existing saccadic models (3/6)
      - (Boccignone and Ferraro, 2004) extended Brockmann's work and modeled eye gaze shifts by using Lévy flights constrained by the saliency:
        • the model is based on a saliency map,
        • a jump has a higher probability of occurring if the target site is strongly connected in terms of saliency.
      Limitations: the orientation bias is not used; inhibition is not used.
  21. Existing saccadic models (4/6)
      - (Wang et al., 2011)'s model is based on foveal images:
        • visual working memory,
        • the distribution of saccade amplitudes is taken into account to jump from one fixation to another,
        • residual information is used to infer the next fixation location.
      Limitations: the orientation bias is not used; the time course of IoR is not considered.
  22. Existing saccadic models (5/6)
      - (Tavakoli et al., 2013):
        • the model is based on a Markovian process of order one,
        • visited locations are kept in order to inhibit an immediate return of attention to those locations,
        • several parameters are learned from human eye tracking data, such as p(d), which represents the probability of a jump with length d; p(d) is a Gaussian mixture.
      Limitations: the orientation bias is not used; the time course of IoR is not considered.
  23. Existing saccadic models (6/6)
      - (Liu et al., 2013):
        • the model is based on a Markovian process of order one,
        • a saliency map,
        • a Lévy flight to reproduce the distribution of saccade amplitudes,
        • semantic content (a transition matrix learned by using a Hidden Markov Model).
      Limitations: the orientation bias is not used; the time course of IoR is not considered.
      The next fixation is obtained by assuming a Markov process and independence between the three factors.
  24. Systematic tendencies (1/7)
      Saccade amplitudes and saccade orientations on natural scenes:
      - short saccades of around 1 to 3 degrees of visual angle;
      - short saccades are more frequent than long ones.
  25. Systematic tendencies (2/7)
      Saccade amplitudes and saccade orientations on natural scenes:
      - anisotropic shape;
      - more horizontal saccades than vertical ones;
      - very few diagonal saccades.
  26. Systematic tendencies (3/7)
      Saccade amplitudes and saccade orientations on natural scenes:
      - joint distribution of saccade amplitudes and orientations;
      - strong horizontal bias and small saccades.
  27. Systematic tendencies (4/7)
      Horizontal and vertical cross sections of the probability distribution for horizontal and vertical saccades:
      - saccades in the same direction as the preceding one, i.e. with an angle of 0°, are more likely than saccades in the opposite direction;
      - upward vertical saccades are more likely than downward saccades (consistent with (Greene et al., 2014; Tatler and Vincent, 2009)).
  28. Systematic tendencies (5/7)
      Saccade amplitudes and saccade orientations on natural scenes.
      From top-left to bottom-right: Le Meur, Bruce, Kootstra and Judd's fixation datasets.
  29. Systematic tendencies (6/7)
      Center bias on natural scenes:
      - visual fixations are neither distributed evenly nor randomly.
  30. Systematic tendencies (7/7)
      Saccade amplitudes and saccade orientations on webpages:
      - joint distribution of saccade amplitudes and orientations;
      - strong horizontal bias in the rightward direction;
      - F-shaped pattern.
      From (Shen and Zhao, 2014)'s eye fixation dataset.
  31. Proposed model (1/10)
      So, what are the key ingredients for successfully designing a saccadic model?
      - The model has to be stochastic: the subsequent fixation cannot be completely specified (given a set of data).
      - The model has to generate plausible scanpaths that are similar to those produced by humans in similar conditions: distribution of saccade amplitudes and orientations, center bias...
      - Inhibition of return has to be considered: time course, spatial decay...
      - Fixations should be mainly located on salient areas.
  32. Proposed model (2/10)
      Let I : Ω ⊂ R² → R³ be an input image and x_t a fixation point at time t. We consider the 2D discrete conditional probability p(x | x_{t−1}, ..., x_{t−T}), which is composed of three terms:
      p(x | x_{t−1}, ..., x_{t−T}) ∝ p_BU(x) p_B(d, φ) p_M(x, t)   (1)
      - p_BU : Ω → [0, 1] is the grayscale saliency map;
      - p_B(d, φ) represents the joint probability distribution of saccade amplitudes and orientations, where d is the saccade amplitude between the two fixation points x_t and x_{t−1} (expressed in degrees of visual angle) and φ is the angle (expressed in degrees) between these two points;
      - p_M(x, t) represents the memory state of location x at time t; this time-dependent term simulates the inhibition of return.
      A minimal sketch combining the three terms follows this slide.
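As a rough sketch of how Eq. (1) can be turned into a probability map over pixels, the snippet below multiplies the three terms pointwise and normalises the result; the input maps are assumed to be non-negative 2D arrays already defined on the image grid (with p_B mapped around the previous fixation), which is an assumption of this sketch rather than the authors' code.

```python
# Combine the three terms of Eq. (1) into a single probability map over pixels.
import numpy as np

def conditional_probability_map(saliency, oculomotor_prior, memory, eps=1e-12):
    p = saliency * oculomotor_prior * memory        # pointwise product, Eq. (1)
    total = p.sum()
    if total < eps:                                 # degenerate case: fall back to uniform
        return np.full_like(p, 1.0 / p.size)
    return p / total                                # normalise so the map sums to 1
```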
  33. Proposed model (3/10)
      Bottom-up saliency map:
      - p_BU is the bottom-up saliency map.
        • It is computed by the GBVS model (Harel et al., 2006). According to (Borji et al., 2012)'s benchmark, this model is among the best ones and presents a good trade-off between quality and complexity.
        • p_BU(x) is constant over time: (Tatler et al., 2005) indeed demonstrated that bottom-up influences do not vanish over time.
  34. Proposed model (4/10)
      Joint probability distribution of saccade amplitudes and orientations:
      - p_B(d, φ) represents the joint probability distribution of saccade amplitudes and orientations.
        • We define d_i and φ_i as the distance and the angle, respectively, between each pair of successive fixations (several eye fixation datasets were used):
          p_B(d, φ) = (1/n) Σ_{i=1}^{n} K_h(d − d_i, φ − φ_i)   (2)
          where n is the total number of samples (n = 105727) and K_h is a two-dimensional Gaussian kernel. h = (h_d, h_φ) is the kernel bandwidth; separate bandwidths were used for the angle and distance components. The two bandwidth parameters are chosen optimally based on the linear diffusion method proposed by (Botev et al., 2010). A kernel-density sketch follows this slide.
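A simple way to prototype Eq. (2) is a 2D kernel density estimate over the recorded (d, φ) pairs. The sketch below uses scipy's gaussian_kde with its default bandwidth rule as a stand-in; the paper instead selects separate bandwidths for d and φ with the diffusion method of (Botev et al., 2010), and the saccade samples below are synthetic.

```python
# Rough 2D KDE estimate of p_B(d, phi) from saccade samples.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
amplitudes = np.abs(rng.normal(3.0, 2.0, size=5000))        # degrees of visual angle (fake data)
orientations = rng.uniform(-180.0, 180.0, size=5000)        # degrees (fake data)

kde = gaussian_kde(np.vstack([amplitudes, orientations]))   # p_B(d, phi) estimate

# Evaluate the density on a (d, phi) grid, e.g. for visualisation or sampling:
d_grid, phi_grid = np.mgrid[0:15:100j, -180:180:180j]
p_b = kde(np.vstack([d_grid.ravel(), phi_grid.ravel()])).reshape(d_grid.shape)
```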
  35. Proposed model (5/10)
      Joint probability distribution of saccade amplitudes and orientations:
      - p_B(d, φ) represents the joint probability distribution of saccade amplitudes and orientations.
  36. Proposed model (6/10)
      Memory effect and inhibition of return (IoR):
      - p_M(x, t) represents the memory effect and inhibition of return (IoR) of location x at time t.
        • We assume that the IoR effect disappears after T = 8 fixations. Considering that a fixation lasts 300 ms on average, an attended location could be refixated after 2.4 seconds (Mannan et al., 1997; Samuel and Kat, 2003);
        • we also assume that the spatial IoR effect declines as a Gaussian function Φ_σi(d) with the Euclidean distance d from the attended location (Bennett and Pratt, 2001);
        • the temporal decline of the IoR effect is simulated by a simple linear model.
  37. Proposed model (7/10)
      Memory effect and inhibition of return (IoR):
      - p_M(x, t) represents the memory effect and inhibition of return (IoR) of location x at time t:
        p_M(x, t) = 1 if t = 0, and otherwise p_M(x, t) = clip[ p_M(x, t−1) − I(y | x_t) + Σ_{k=1}^{ξ} R(y | x_{t−k}) ]   (3)
        where y ∈ Ω represents all the possible locations in the input image, clip[·] keeps the value in the range [0, 1], and ξ is the number of visual fixations to consider: ξ = min(T, t−1), with T the number of fixations needed for an area to recover its initial saliency. I and R represent the inhibition and the recovery functions, respectively:
        I(y | z) = Φ_σi(‖y − z‖)   (4)
        R(y | z) = Φ_σi(‖y − z‖) / T   (5)
        where Φ_σi(d) is the 2D Gaussian representing the spatial effect of the IoR. A small implementation sketch of this update follows this slide.
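The following sketch implements the update of Eqs. (3)-(5): the current fixation inhibits a Gaussian neighbourhood, the ξ previous fixations recover by 1/T of that Gaussian at each step, and the map is clipped to [0, 1]. The grid handling, σ and the function names are illustrative choices, not the authors' code.

```python
# Memory / inhibition-of-return update sketch for Eqs. (3)-(5).
import numpy as np

def gaussian_bump(shape, center, sigma):
    """2D Gaussian Phi_sigma(||y - center||) evaluated on the whole image grid."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    d2 = (xx - center[0]) ** 2 + (yy - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def update_memory(memory, fixation_history, sigma=20.0, T=8):
    """memory: current p_M map; fixation_history: list of (x, y), most recent last."""
    current = fixation_history[-1]
    new_memory = memory - gaussian_bump(memory.shape, current, sigma)   # inhibition I
    xi = min(T, len(fixation_history) - 1)
    for past in fixation_history[-1 - xi:-1]:                           # recovery R of the xi previous fixations
        new_memory += gaussian_bump(memory.shape, past, sigma) / T
    return np.clip(new_memory, 0.0, 1.0)                                # clip to [0, 1]

# Usage: start with p_M = 1 everywhere, then update after each new fixation.
p_m = np.ones((128, 128))
p_m = update_memory(p_m, [(64, 64)])
p_m = update_memory(p_m, [(64, 64), (30, 90)])
```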
  38. Proposed model (8/10)
      Memory effect and inhibition of return (IoR):
      Illustration of the temporal evolution of p_M(x, t) for an image defined over the space Ω = [1...256] × [1...256] and for a unique fixation point with spatial coordinates (128, 128), represented by the red cross. In this example, the number of fixations T needed to recover the original saliency is equal to 4.
  39. Proposed model (9/10)
      Selecting the next fixation point:
      - Optimal next fixation point (Bayesian ideal searcher proposed by (Najemnik and Geisler, 2009)):
        x*_t = arg max_{x ∈ Ω} p(x | x_{t−1}, ..., x_{t−T})   (6)
        Problem: this approach does not reflect the stochastic behavior of our visual system and may fail to provide plausible scanpaths (Najemnik and Geisler, 2008).
      - Rather than selecting the best candidate, we generate Nc = 5 random locations according to the 2D discrete conditional probability p(x | x_{t−1}, ..., x_{t−T}). The location with the highest saliency gain is chosen as the next fixation point x*_t (see the sketch after this slide).
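A minimal sketch of this stochastic selection rule: draw Nc candidates from the conditional probability map and keep the most salient one. The tie-breaking and the exact definition of the saliency gain are simplified here, and the maps in the usage example are random stand-ins.

```python
# Draw Nc candidate locations from the conditional probability map, then keep
# the candidate with the highest bottom-up saliency.
import numpy as np

def next_fixation(prob_map, saliency, n_candidates=5, rng=None):
    rng = rng or np.random.default_rng()
    h, w = prob_map.shape
    flat = prob_map.ravel() / prob_map.sum()
    idx = rng.choice(h * w, size=n_candidates, p=flat)   # Nc random candidate locations
    ys, xs = np.unravel_index(idx, (h, w))
    gains = saliency[ys, xs]                             # saliency at each candidate
    best = int(np.argmax(gains))                         # keep the most salient candidate
    return int(xs[best]), int(ys[best])

# Usage with fake maps:
rng = np.random.default_rng(2)
prob = rng.random((64, 64)); sal = rng.random((64, 64))
x, y = next_fixation(prob, sal, n_candidates=5, rng=rng)
```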
  40. Proposed model (10/10)
      Selecting the next fixation point:
      Probability of saccade targeting in retinocentric space, taking the previous fixation x_{t−1} as the centre of the plot. The red crosses are the Nc candidates that have been randomly drawn according to the shown probability. From left to right: Nc = {5, 10, 15}. When Nc = 1, the randomness is maximal, whereas when Nc increases, the stochastic behavior of the proposed method becomes less pronounced.
  41. Results (1/8)
      The relevance of the proposed approach is assessed with regard to the plausibility and spatial precision of the simulated scanpaths and their ability to predict salient areas:
      - Do the generated scanpaths present the same oculomotor biases as human scanpaths?
      - What is the degree of similarity between predicted and human scanpaths?
      - Could the predicted scanpaths be used to form relevant saliency maps?
  42. Results (2/8)
      Are the simulated scanpaths plausible?
      - Protocol:
        • We assume that the simulated scanpaths are obtained in a context of purely free viewing, so top-down effects are not taken into account.
        • For each image in Bruce's and Judd's datasets, we generate 20 scanpaths, each composed of 10 fixations, i.e. 224,600 generated visual fixations.
        • We assume that the visual fixation duration is constant. Considering an average fixation duration of 300 ms, 10 fixations represent a viewing duration of 3 s.
        • Bottom-up saliency maps are computed by the GBVS model (Harel et al., 2006).
  43. Results (3/8)
      Are the simulated scanpaths plausible?
      Top row: Bruce's dataset. Bottom row: Judd's dataset.
  44. Results (4/8)
      Are the simulated scanpaths plausible?
      Impact of the oculomotor constraints (spatial and orientation), WTA+IoR.
  45. Results (5/8)
      What is the degree of similarity between predicted and human scanpaths?
      There are a few methods for comparing scanpaths: string-edit (Privitera and Stark, 2000) and the Dynamic Time Warping (DTW) algorithm (Gupta et al., 1996; Jarodzka et al., 2010). More details in (Le Meur and Baccino, 2013).
      - We use the DTW method (a small sketch follows this slide).
      - For a given image, 20 scanpaths, each composed of 10 fixations, are generated. The final distance between the predicted scanpath and the human scanpaths is equal to the average of the 20 DTW scores. The closer the DTW value is to 0, the more similar the scanpaths.
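For reference, a generic DTW distance between two fixation sequences can be computed with a small dynamic program, as sketched below; this is a plain DTW with Euclidean point costs, not necessarily the exact variant used in the paper.

```python
# Dynamic Time Warping distance between two scanpaths (sequences of (x, y) fixations).
import numpy as np

def dtw_distance(scanpath_a, scanpath_b):
    a, b = np.asarray(scanpath_a, float), np.asarray(scanpath_b, float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])      # Euclidean distance between fixations
            cost[i, j] = d + min(cost[i - 1, j],         # insertion
                                 cost[i, j - 1],         # deletion
                                 cost[i - 1, j - 1])     # match
    return cost[n, m]

# Example: identical scanpaths give a distance of 0.
human = [(100, 120), (140, 125), (90, 60)]
print(dtw_distance(human, human))                        # 0.0
print(dtw_distance(human, [(105, 118), (150, 130), (85, 70)]))
```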
  46. Results (6/8)
      What is the degree of similarity between predicted and human scanpaths?
      - Five models are evaluated.
      - The error bars correspond to the SEM (Standard Error of the Mean).
      - DTW = 0 when there is a perfect similarity between scanpaths.
      - There is a significant difference between the performance of the proposed model and that of (Boccignone and Ferraro, 2004)'s model (paired t-test, p << 0.01).
      - As expected, the lowest performance is obtained by the random model.
  47. Results (7/8)
      Scanpath-based saliency map:
      - We compute, for each image, 20 scanpaths, each composed of 10 fixations.
      - For each image, we create a saliency map by convolving a Gaussian function over the fixation locations.
      (a) original image; (b) human saliency map; (c) GBVS saliency map; (d) GBVS-SM saliency map computed from the simulated scanpaths.
  48. Results (8/8)
      Scanpath-based saliency map.
  49. Saliency map, randomness and webpages (1/3)
      - Influence of the saliency map: for Top2-SM, we aggregated the saliency maps of the GBVS and RARE2012 models through a simple average. (Le Meur and Liu, 2014) demonstrated that a simple average of the top 2 saliency maps, computed by the GBVS and RARE2012 models, significantly outperforms the best saliency models. A tiny aggregation sketch follows this slide.
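The Top2-SM aggregation described above is just a pixel-wise average of two normalised saliency maps; a tiny sketch follows, with random arrays standing in for the GBVS and RARE2012 outputs.

```python
# Average two saliency maps after normalising each to [0, 1].
import numpy as np

def aggregate_top2(map_a, map_b):
    def norm(m):
        m = m.astype(np.float64)
        return (m - m.min()) / (m.max() - m.min() + 1e-12)
    return 0.5 * (norm(map_a) + norm(map_b))

rng = np.random.default_rng(3)
top2_sm = aggregate_top2(rng.random((64, 64)), rng.random((64, 64)))
```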
  50. Saliency map, randomness and webpages (2/3)
      - Randomness: the maximal randomness is obtained when Nc = 1.
  51. Saliency map, randomness and webpages (3/3)
      - Is the model valid for webpages? Using the eye fixation dataset of (Shen and Zhao, 2014), we show that oculomotor biases on webpages differ from those observed on natural scenes: (a) distribution of saccade amplitudes; (b) distribution of saccade orientations; (c) joint distribution of saccade orientations and amplitudes.
      - There is a strong tendency to make small horizontal saccades in the rightward direction. This tendency is known as the F-bias (Buscher et al., 2009).
  52. Limitations of the proposed model
      Still far from reality...
      - We do not predict fixation durations. Some models could be used for this purpose (Nuthmann et al., 2010; Trukenbrod and Engbert, 2014).
      - Second-order effects: we assume that the memory effect occurs only at the fixation location. However, are saccades independent events? No, see (Tatler and Vincent, 2008).
      - High-level aspects such as the scene context are not included in our model.
      - Should we recompute the saliency map after every fixation? Probably yes...
      - The randomness (Nc) should be adapted to the input image. By default, Nc = 5.
      - Is the time course of IoR relevant? Is the recovery linear?
  53. Conclusion & Perspective
  54. Conclusion & Perspective
      Improvements:
      - Dealing with our model's limitations.
      - A new metric to evaluate the similarity between scanpaths.
      - How to integrate top-down effects?
      - How to attribute greater meaning to fixations (Bruce, 2014; Follet et al., 2011; Unema et al., 2005)?
      Applications:
      - How can we use predicted scanpaths in computer vision applications?
      - Is there an application to computational medicine?
  55. Thanks for your attention
  56. References
      P. J. Bennett and J. Pratt. The spatial distribution of inhibition of return. Psychological Science, 12:76–80, 2001.
      G. Boccignone and M. Ferraro. Modelling gaze shift as a constrained random walk. Physica A: Statistical Mechanics and its Applications, 331:207–218, 2004. doi:10.1016/j.physa.2003.09.011.
      A. Borji and L. Itti. State-of-the-art in visual attention modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35:185–207, 2013.
      A. Borji, D. N. Sihite, and L. Itti. Quantitative analysis of human-model agreement in visual saliency modeling: A comparative study. IEEE Transactions on Image Processing, 22(1):55–69, 2012.
      Z. I. Botev, J. F. Grotowski, and D. P. Kroese. Kernel density estimation via diffusion. The Annals of Statistics, 38(8):2916–2957, 2010.
      D. Brockmann and T. Geisel. The ecology of gaze shifts. Neurocomputing, 32(1):643–650, 2000.
      N. D. B. Bruce. Towards fine-grained fixation analysis: Distilling out context dependence. In Proceedings of the Symposium on Eye Tracking Research and Applications (ETRA '14), pages 99–102, New York, NY, USA, 2014. ACM. doi:10.1145/2578153.2578167.
      G. Buscher, E. Cutrell, and M. Ringel Morris. What do you see when you're surfing? Using eye tracking to predict salient regions of web pages. In Proceedings of CHI 2009. Association for Computing Machinery, April 2009. URL http://research.microsoft.com/apps/pubs/default.aspx?id=76826.
      S. R. Ellis and J. D. Smith. Patterns of statistical dependency in visual scanning. In Eye Movements and Human Information Processing, pages 221–238. Elsevier Science Publishers BV, Amsterdam, North Holland Press, 1985.
      B. Follet, O. Le Meur, and T. Baccino. New insights into ambient and focal visual fixations using an automatic classification algorithm. i-Perception, 2(6):592–610, 2011.
      H. H. Greene, J. M. Brown, and B. Dauphin. When do you look where you look? A visual field asymmetry. Vision Research, 102:33–40, 2014. doi:10.1016/j.visres.2014.07.012.
      L. Gupta, D. L. Molfese, R. Tammana, and P. G. Simos. Nonlinear alignment and averaging for estimating the evoked potential. IEEE Transactions on Biomedical Engineering, 43(4):348–356, 1996.
      J. Harel, C. Koch, and P. Perona. Graph-based visual saliency. In Proceedings of Neural Information Processing Systems (NIPS), 2006.
  57. References (continued)
      L. Itti and C. Koch. A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40(10-12):1489–1506, May 2000.
      L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20:1254–1259, 1998.
      H. Jarodzka, K. Holmqvist, and M. Nyström. A vector-based, multidimensional scanpath similarity measure. In ETRA, pages 211–218, 2010.
      T. Judd, K. Ehinger, F. Durand, and A. Torralba. Learning to predict where humans look. In ICCV, 2009.
      C. Koch and S. Ullman. Shifts in selective visual attention: towards the underlying neural circuitry. Human Neurobiology, 4:219–227, 1985.
      O. Le Meur and T. Baccino. Methods for comparing scanpaths and saliency maps: strengths and weaknesses. Behavior Research Methods, 45(1):251–266, 2013.
      O. Le Meur and Z. Liu. Saliency aggregation: Does unity make strength? In ACCV, 2014.
      H. Liu, D. Xu, Q. Huang, W. Li, M. Xu, and S. Lin. Semantically-based human scanpath estimation with HMMs. In ICCV, 2013.
      S. K. Mannan, K. H. Ruddock, and D. S. Wooding. Fixation patterns made during brief examination of two-dimensional images. Perception, 26(8):1059–1072, 1997.
      J. Najemnik and W. S. Geisler. Eye movement statistics in humans are consistent with an optimal strategy. Journal of Vision, 8(3):1–14, 2008.
      J. Najemnik and W. S. Geisler. Simple summation rule for optimal fixation selection in visual search. Vision Research, 42:1286–1294, 2009.
      A. Nuthmann, T. J. Smith, R. Engbert, and J. M. Henderson. CRISP: A computational model of fixation durations in scene viewing. Psychological Review, 117(2):382–405, April 2010.
      M. I. Posner. Orienting of attention. Quarterly Journal of Experimental Psychology, 32:3–25, 1980.
      C. M. Privitera and L. W. Stark. Algorithms for defining visual regions-of-interest: Comparison with eye fixations. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 22:970–982, 2000.
      A. G. Samuel and D. Kat. Inhibition of return: a graphical meta-analysis of its time course and an empirical test of its temporal and spatial properties. Psychonomic Bulletin & Review, 10(4):897–906, 2003.
      C. Shen and Q. Zhao. Webpage saliency. In ECCV, 2014.
  58. References (continued)
      L. W. Stark and S. R. Ellis. Scanpaths revisited: cognitive models direct active looking. Pages 193–226. Lawrence Erlbaum Associates, Hillsdale, NJ, 1981.
      B. W. Tatler and B. T. Vincent. The prominence of behavioural biases in eye guidance. Visual Cognition, Special Issue: Eye Guidance in Natural Scenes, 17(6-7):1029–1059, 2009.
      B. W. Tatler and B. T. Vincent. Systematic tendencies in scene viewing. Journal of Eye Movement Research, 2:1–18, 2008.
      B. W. Tatler, R. J. Baddeley, and I. D. Gilchrist. Visual correlates of fixation selection: effects of scale and time. Vision Research, 45:643–659, 2005.
      H. R. Tavakoli, E. Rahtu, and J. Heikkilä. Stochastic bottom-up fixation prediction and saccade generation. Image and Vision Computing, 31:686–693, 2013.
      H. A. Trukenbrod and R. Engbert. ICAT: a computational model for the adaptive control of fixation durations. Psychonomic Bulletin & Review, 21(4):907–934, 2014. doi:10.3758/s13423-013-0575-0.
      P. J. A. Unema, S. Pannasch, M. Joos, and B. M. Velichkovsky. Time course of information processing during scene perception: the relationship between saccade amplitude and fixation duration. Visual Cognition, 12(3):473–494, 2005.
      W. Wang, C. Chen, Y. Wang, T. Jiang, F. Fang, and Y. Yao. Simulating human saccadic scanpaths on natural images. In CVPR, 2011.
