Selective Perception for Robot Driving
WRIGHT RESEARCH AND DEVELOPMENT CENTER WRIGHT-PATTERSON AFB OH ELECTRO-OPTICS BRANCH
Robots performing complex tasks in rich environments need very good perception modules in order to understand their situation and choose the best action. Robot planning systems have typically assumed that perception was so good that it could refresh the entire world model whenever the planning system needed it, or whenever anything in the world changed. Unfortunately, this assumption is unrealistic in many real-world domains because perception is far too difficult. Robots in these domains cannot use the traditional planner paradigm, but instead need a new system design that integrates reasoning with perception. In this thesis I describe how reasoning can be integrated with perception, how task knowledge can be used to select perceptual targets, and how this selection dramatically reduces the computational cost of perception.

The domain addressed in this thesis is driving in traffic. I have developed a microscopic traffic simulator called PHAROS that defines the street environment for this research. PHAROS contains detailed representations of streets, markings, signs, signals, and cars. It can simulate perception and implement commands for a vehicle controlled by a separate program. I have also developed a computational model of driving called Ulysses that defines the driving task. The model describes how various traffic objects in the world determine what actions a robot must take.
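The central idea above, using task knowledge to select perceptual targets rather than refreshing the entire world model, can be illustrated with a minimal sketch. All names, tasks, and cost figures below are hypothetical, chosen only to show why selective perception is cheaper than full-world refresh; they are not taken from the thesis or from PHAROS/Ulysses.

```python
from dataclasses import dataclass

@dataclass
class PerceptualTarget:
    """One object the perception module could look for (illustrative)."""
    name: str
    cost: int  # hypothetical per-target sensing cost

# Assumed mapping from driving subtask to the objects relevant to it.
TASK_TARGETS = {
    "approach_intersection": [
        PerceptualTarget("traffic_signal", 5),
        PerceptualTarget("cross_traffic", 8),
    ],
    "lane_following": [
        PerceptualTarget("lane_markings", 3),
        PerceptualTarget("lead_vehicle", 4),
    ],
}

# Every target in the world model, as a full refresh would require.
ALL_TARGETS = [t for ts in TASK_TARGETS.values() for t in ts]

def perception_cost(targets):
    """Total sensing cost of examining the given targets."""
    return sum(t.cost for t in targets)

def select_targets(task):
    """Task-driven selection: look only at what the current task needs."""
    return TASK_TARGETS[task]

if __name__ == "__main__":
    full = perception_cost(ALL_TARGETS)                        # refresh everything
    selective = perception_cost(select_targets("lane_following"))
    print(f"full refresh: {full}, selective: {selective}")
```

Under these assumed costs the selective strategy examines two targets instead of four, and the saving grows with the size of the world model, which is the computational argument the abstract makes.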