A Computational Model of Active Vision for Visual Search in Human-Computer Interaction
AIR FORCE RESEARCH LAB MESA AZ
Human visual search plays an important role in many human-computer interaction (HCI) tasks. Better models of visual search are needed not just to predict overall performance outcomes, such as whether people will be able to find the information needed to complete an HCI task, but to understand the many human processes that interact in visual search, which will in turn inform the detailed design of better user interfaces. This article describes a detailed instantiation, in the form of a computational cognitive model, of a comprehensive theory of human visual processing known as active vision (Findlay & Gilchrist, 2003). The computational model is built using the EPIC (Executive Process-Interactive Control) cognitive architecture. Eye-tracking data from three experiments inform the development and validation of the model. The modeling asks, and at least partially answers, the four questions of active vision: (1) What can be perceived in a fixation? (2) When do the eyes move? (3) Where do the eyes move? (4) What information is integrated between eye movements? The answers include: (1) items nearer the point of gaze are more likely to be perceived, and the visual features of objects are sometimes misidentified; (2) the eyes move after the fixated visual stimulus has been processed (i.e., has entered working memory); (3) the eyes tend to go to nearby objects; and (4) only the coarse spatial information of what has been fixated is likely to be maintained between fixations. The model developed to answer these questions has both scientific and practical value: it gives HCI researchers and practitioners a better understanding of how people visually interact with computers, and it provides a theoretical foundation for predictive analysis tools that can predict aspects of that interaction.
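The four answers above can be sketched as a toy simulation. This is an illustrative sketch only, not the EPIC model described in the report; the exponential perception function, the `fovea` scale parameter, and the nearest-object saccade rule are all hypothetical simplifications chosen to mirror answers 1, 3, and 4.

```python
import math
import random

def toy_visual_search(objects, target, fovea=2.0, seed=0):
    """Toy active-vision search over named objects at (x, y) positions.

    Hypothetical illustration: each fixation perceives items with a
    probability that falls off with distance from gaze (answer 1),
    the next saccade is launched only after the current fixation is
    processed (answer 2, implicit in the sequential loop), gaze moves
    to the nearest unexamined object (answer 3), and only which
    locations were examined is carried across fixations (answer 4).
    Returns the number of fixations made.
    """
    rng = random.Random(seed)
    gaze = (0.0, 0.0)
    examined = set()  # coarse trans-saccadic memory: identities only
    fixations = 0
    while len(examined) < len(objects):
        fixations += 1
        for name, pos in objects.items():
            dist = math.dist(gaze, pos)
            # Answer 1: nearer items are more likely to be perceived.
            if rng.random() < math.exp(-dist / fovea):
                examined.add(name)
                if name == target:
                    return fixations
        # Answer 3: saccade to the nearest not-yet-examined object.
        remaining = [p for n, p in objects.items() if n not in examined]
        if not remaining:
            break
        gaze = min(remaining, key=lambda p: math.dist(gaze, p))
    return fixations
```

Because the currently fixated object is perceived with probability 1 (distance 0), the loop always makes progress, and the fixation count grows with target eccentricity, qualitatively matching the nearer-is-likelier account.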
- Human Factors Engineering and Man Machine Systems