Human Performance in Time-Shared Verbal and Tracking Tasks.
NAVAL AEROSPACE MEDICAL RESEARCH LAB PENSACOLA FL
Significant progress has been made in the development of automated speech understanding systems for application to naval aviation systems. One anticipated advantage of speech over conventional man-machine interfaces is that speech could function as an independent channel for the control of systems. The experiment reported in this paper represents an initial effort to investigate the assumption that an automatic speech understanding system will provide a parallel channel for performing an information processing task concurrently with a visual-manual control task. The experiment required human subjects to time-share a digital information processing task and a continuous compensatory tracking task. Independent variables in the design were task loading (single- vs. dual-task conditions), stimulus presentation modality for the digital task (auditory vs. visual), and response modality for the digital task (voice vs. keyboard). Data from 16 subjects were analyzed. The results indicated that the combination of visual stimulus modality and voice response provided optimum joint-task performance. No combination of stimulus and response modalities resulted in equivalent single- and dual-task performance. Future experiments should be designed to investigate the joint-task performance space for tasks that are more representative of the information processing performance requirements of specific systems. However, the interpretability of the results of such research will depend upon the solution of methodological problems, such as how to control or account for subjects' speed-accuracy tradeoff strategies and the priorities they place upon the concurrent tasks. (Author)
Descriptors: Voice Communications