Toward Multimodal Human-Robot Cooperation and Collaboration
NAVAL RESEARCH LAB WASHINGTON DC
Our multimodal interface integrates speech recognition, natural language understanding, spatial reasoning, and human cognitive models for completing specific tasks and for perspective-taking in locative tasks. We believe that supporting natural language and gestures facilitates human-robot interaction and communication: instead of concentrating on the various modalities of the interface, users can concentrate on the task at hand. Likewise, by incorporating human cognitive models for handling spatial information, perspective-taking, and specific task completion, the system should better match the expectations humans bring from human-human interaction, further facilitating cooperation and collaboration in human-robot interactions.
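The abstract does not give implementation details, but the core of perspective-taking in a locative task can be illustrated with a minimal, hypothetical sketch: resolving a speaker-relative spatial term such as "to my left" into a shared world frame, given the speaker's heading, so the robot reasons from the speaker's perspective rather than its own. All names and the spatial vocabulary below are assumptions for illustration, not the report's actual system.

```python
import math

def speaker_frame_direction(term):
    """Map a spatial term to a unit vector in the speaker's egocentric
    frame (x = speaker's forward, y = speaker's left).
    Hypothetical vocabulary, for illustration only."""
    return {
        "front": (1.0, 0.0),
        "behind": (-1.0, 0.0),
        "left": (0.0, 1.0),
        "right": (0.0, -1.0),
    }[term]

def to_world_frame(direction, speaker_heading_rad):
    """Rotate a speaker-relative direction into the shared world frame,
    given the speaker's heading in radians (world frame)."""
    dx, dy = direction
    cos_h = math.cos(speaker_heading_rad)
    sin_h = math.sin(speaker_heading_rad)
    return (dx * cos_h - dy * sin_h, dx * sin_h + dy * cos_h)

# A speaker facing heading 0 (world "east") says "to my left":
wx, wy = to_world_frame(speaker_frame_direction("left"), 0.0)
# The robot, regardless of its own pose, can now search along (wx, wy),
# which here is world "north" -- the speaker's left, not the robot's.
```

In a fuller system of the kind the abstract describes, the spatial term would come from the natural language understanding component, the speaker's pose from perception or gesture tracking, and the resulting world-frame direction would feed the spatial reasoner.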
- Human Factors Engineering and Man Machine Systems