Using Virtual Active Vision Tools to Improve Autonomous Driving Tasks.
CARNEGIE-MELLON UNIV PITTSBURGH PA ROBOTICS INST
ALVINN is a simulated neural network for road following. In its most basic form, it is trained to take a subsampled, preprocessed video image as input and produce a steering wheel position as output. ALVINN has demonstrated robust performance in a wide variety of situations, but it is limited by its lack of geometric models. Grafting geometric reasoning onto a non-geometric base would be difficult and would create a system with diluted capabilities. A much better approach is to leave the basic neural network intact, preserving its real-time performance and generalization capabilities, and to apply geometric transformations to the input image and the output steering vector. These transformations form a new set of tools and techniques called Virtual Active Vision. The thesis of this work is that Virtual Active Vision tools will improve the capabilities of neural network based autonomous driving systems.
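The abstract describes transforming the input image and the output steering value around an unmodified network. A minimal sketch of that idea is shown below: a horizontal image shift simulates a virtual lateral camera displacement, and the steering target is adjusted to compensate. The helper names, the 30x32 input resolution, and the linear steering gain are illustrative assumptions, not the dissertation's actual implementation.

```python
import numpy as np

def shift_image(image, pixel_shift):
    """Horizontally shift a road image, simulating a virtual lateral
    camera translation. Columns exposed by the shift are filled by
    replicating the nearest edge column (an illustrative choice)."""
    shifted = np.roll(image, pixel_shift, axis=1)
    if pixel_shift > 0:
        shifted[:, :pixel_shift] = image[:, :1]   # replicate left edge
    elif pixel_shift < 0:
        shifted[:, pixel_shift:] = image[:, -1:]  # replicate right edge
    return shifted

def adjust_steering(steering, pixel_shift, gain=0.01):
    """Adjust the target steering value to compensate for the virtual
    shift; a linear pixel-to-steering gain is assumed here."""
    return steering + gain * pixel_shift

# Tiny grayscale frame at a subsampled resolution like ALVINN's input,
# with a bright vertical stripe standing in for the road center.
frame = np.zeros((30, 32))
frame[:, 16] = 1.0

# Shift the scene 4 pixels right and adjust the steering target so the
# (image, steering) pair remains geometrically consistent.
augmented = shift_image(frame, 4)
target = adjust_steering(0.0, 4)
```

Pairs produced this way can augment training data or test a trained network's response to lateral displacement without moving the physical camera.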
APPROVED FOR PUBLIC RELEASE