Beyond bottom-up: Incorporating task-dependent influences into a computational model of spatial attention. Peters, R. J. & Itti, L. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Minneapolis, MN, June 2007.
A critical function in both machine vision and biological vision systems is attentional selection of scene regions worthy of further analysis by higher-level processes such as object recognition. Here we present the first model of spatial attention that (1) can be applied to arbitrary static and dynamic image sequences with interactive tasks and (2) combines a general computational implementation of both bottom-up (BU) saliency and dynamic top-down (TD) task relevance; the claimed novelty lies in the combination of these elements and in the fully computational nature of the model. The BU component computes a saliency map from 12 low-level multi-scale visual features. The TD component computes a low-level signature of the entire image, and learns to associate different classes of signatures with the different gaze patterns recorded from human subjects performing a task of interest. We measured the ability of this model to predict the eye movements of people playing contemporary video games. We found that the TD model alone predicts where humans look about twice as well as does the BU model alone; in addition, a combined BU*TD model performs significantly better than either individual component. Qualitatively, the combined model predicts some easy-to-describe but hard-to-compute aspects of attentional selection, such as shifting attention leftward when approaching a left turn along a racing track. Thus, our study demonstrates the advantages of integrating BU factors derived from a saliency map and TD factors learned from image and task contexts in predicting where humans look while performing complex visually-guided behavior.
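To make the BU*TD combination described in the abstract concrete, here is a minimal, illustrative Python sketch (not the authors' code, which is not reproduced here). The names bottom_up_saliency, image_signature, and TopDownModel are hypothetical; the toy local-contrast map and the nearest-neighbor signature lookup merely stand in for the paper's 12-feature multi-scale saliency map and its learned signature-to-gaze association. Only the final pointwise BU*TD multiplication reflects the combination rule named in the abstract.

import numpy as np

def bottom_up_saliency(frame):
    # Toy BU map: local intensity contrast. The paper's BU component
    # instead pools 12 low-level multi-scale features; this placeholder
    # just preserves the input/output shape.
    pad = np.pad(frame, 1, mode="edge")
    h, w = frame.shape
    local_mean = sum(pad[dy:dy + h, dx:dx + w]
                     for dy in range(3) for dx in range(3)) / 9.0
    return np.abs(frame - local_mean)

def image_signature(frame, bins=16):
    # Toy whole-image "signature": a coarse intensity histogram,
    # standing in for the paper's low-level gist-like signature.
    hist, _ = np.histogram(frame, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

class TopDownModel:
    # Associates signature classes with recorded gaze patterns via
    # nearest-neighbor lookup -- an assumption for illustration; the
    # paper learns this association from human eye-movement data.
    def __init__(self):
        self.signatures, self.gaze_maps = [], []

    def train(self, frames, gaze_maps):
        for f, g in zip(frames, gaze_maps):
            self.signatures.append(image_signature(f))
            self.gaze_maps.append(g)

    def predict(self, frame):
        s = image_signature(frame)
        dists = [np.linalg.norm(s - t) for t in self.signatures]
        return self.gaze_maps[int(np.argmin(dists))]

def combined_map(frame, td_model, eps=1e-6):
    bu = bottom_up_saliency(frame)
    td = td_model.predict(frame)
    m = bu * td  # the pointwise BU*TD combination from the abstract
    return m / (m.max() + eps)

# Usage on synthetic data:
rng = np.random.default_rng(0)
train_frames = [rng.random((32, 32)) for _ in range(5)]
train_gaze = [rng.random((32, 32)) for _ in range(5)]
model = TopDownModel()
model.train(train_frames, train_gaze)
prediction = combined_map(rng.random((32, 32)), model)

The nearest-neighbor lookup is simply the most compact way to realize "associate different classes of signatures with the different gaze patterns"; any learned regression from signatures to gaze-density maps could take its place.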
@inproceedings{Peters_Itti07cvpr,
  author = {R. J. Peters and L. Itti},
  title = {Beyond bottom-up: Incorporating task-dependent influences into a
computational model of spatial attention},
  abstract = {A critical function in both machine vision and biological
                  vision systems is attentional selection of scene
                  regions worthy of further analysis by higher-level
                  processes such as object recognition. Here we
                  present the first model of spatial attention that
                  (1) can be applied to arbitrary static and dynamic
                  image sequences with interactive tasks and (2)
                  combines a general computational implementation of
                  both bottom-up (BU) saliency and dynamic top-down
                  (TD) task relevance; the claimed novelty lies in the
                  combination of these elements and in the fully
                  computational nature of the model. The BU component
                  computes a saliency map from 12 low-level
                  multi-scale visual features. The TD component
                  computes a low-level signature of the entire image,
                  and learns to associate different classes of
                  signatures with the different gaze patterns recorded
                  from human subjects performing a task of
                  interest. We measured the ability of this model to
                  predict the eye movements of people playing
                  contemporary video games. We found that the TD model
                  alone predicts where humans look about twice as well
                  as does the BU model alone; in addition, a combined
                  BU*TD model performs significantly better than
                  either individual component. Qualitatively, the
                  combined model predicts some easy-to-describe but
                  hard-to-compute aspects of attentional selection,
                  such as shifting attention leftward when approaching
                  a left turn along a racing track. Thus, our study
                  demonstrates the advantages of integrating BU
                  factors derived from a saliency map and TD factors
                  learned from image and task contexts in predicting
                  where humans look while performing complex
                  visually-guided behavior.},
  booktitle = {Proc. IEEE Conference on Computer Vision and Pattern
Recognition (CVPR)},
  address = {Minneapolis, MN},
  month = {Jun},
  year = {2007},
  type = {bu ; cv ; td ; eye ; mod},
  file = {http://ilab.usc.edu/publications/doc/Peters_Itti07cvpr.pdf},
  if = {2007 acceptance rate: 28%},
  review = {full/conf}
}
