Air Force Institute of Technology, Wright-Patterson Air Force Base, United States
Robots require high-quality maps (internal representations of their operating workspace) to localize, path plan, and perceive their environment. Until recently, these maps were restricted to sparse, 2D representations due to computational, memory, and sensor limitations. With the widespread adoption of high-quality sensors and graphics processors for parallel processing, these restrictions no longer apply: dense 3D maps are feasible to compute in real time, i.e., at the input sensor's frame rate. This thesis presents the theory and system to create large-scale dense 3D maps, i.e., to reconstruct continuous surface models, using only sensors found on modern autonomous automobiles: 2D laser, 3D laser, and cameras. We demonstrate our system fusing data from both laser and camera sensors to reconstruct 7.3 km of urban environments. We evaluate the quantitative performance of our proposed method on synthetic and real-world datasets. With only stereo camera inputs, our regularizer reduces the 3D reconstruction metric error by 27% to 36%, with a final median accuracy ranging between 4 cm and 8 cm.