Accession Number:



Robotic Navigation Emulating Human Performance

Descriptive Note:

Final rept. Mar 2009-Dec 2011

Corporate Author:


Personal Author(s):

Report Date:


Pagination or Media Count:


Abstract:

We formulated a set of computational models that allow a robot to see a natural 3D scene and to understand it, in the sense that it can recover the 3D shapes, sizes, and locations of the objects in the scene, as well as the free spaces among them. The Figure-Ground Organization (FGO) and 3D shape-recovery tools built into our robot permit it to perform both of these complicated tasks on its own. In other words, all of the major steps required for 3D shape and scene recovery have been accomplished, and they can now be performed autonomously by a robot. The remaining steps are designed to enhance its performance, bringing it in line with the performance of human beings. Once they are accomplished, our robot will not only be able to act autonomously; it will also be able to navigate within natural scenes as well as a human being can under similar conditions. There is even good reason to believe that the FGO and 3D shape-recovery tools that work so well for our robot are rather similar to those used by human beings performing similar tasks. We tested these tools in human psychophysical experiments and showed that the human and model performances were very similar. Even if we set this psychophysical support aside, the mere fact that the robot can see a 3D scene veridically and plan its actions effectively within it provides evidence for our belief that the robot's tools are at least biologically plausible, even if they ultimately prove to be different from those actually used in the human visual system.

Subject Categories:

  • Cybernetics

Distribution Statement: