Active Sensing Representations for Navigation and Visual Scene Analysis
Technical Report, 01 Jul 2015 to 30 Jun 2018
University of California, Los Angeles, United States
This project, begun in September 2015, aimed to develop analytical and computational tools for inferring optimal representations for decision and control actions based on visual data. Different representations can be designed for different classes of tasks. For localization tasks, EO imaging and inertial sensors can be used to develop a representation, an attributed point cloud, that is minimal and sufficient, and invariant to changes of illumination and partial occlusion. The result is a posterior estimate of the sensor trajectory in SE(3) given all measurements up to the current time, marginalized with respect to all nuisance variability. Semantic understanding of the scene requires more sophisticated representations than a point cloud.
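The posterior over the sensor pose in SE(3) described above can be approximated recursively from point-cloud measurements. As an illustrative sketch only, not the report's actual pipeline, the following shows a minimal particle filter over SE(3) poses, where particles are propagated by a noisy motion model and reweighted by how well known landmarks, transformed into the sensor frame, match an observed point cloud. All function names and noise parameters here are hypothetical.

```python
import numpy as np

def exp_so3(w):
    """Rodrigues' formula: map an axis-angle vector to a rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def predict(particles, motion_noise, rng):
    """Propagate each SE(3) particle (a 4x4 pose matrix) with a small
    random rigid perturbation, a stand-in for an inertial motion model."""
    out = []
    for T in particles:
        dT = np.eye(4)
        dT[:3, :3] = exp_so3(rng.normal(0.0, motion_noise, 3))
        dT[:3, 3] = rng.normal(0.0, motion_noise, 3)
        out.append(T @ dT)
    return out

def update(particles, landmarks, observed, meas_noise, rng):
    """Reweight particles by how well landmarks, mapped into each
    particle's sensor frame, match the observed points; then resample."""
    weights = []
    for T in particles:
        Tinv = np.linalg.inv(T)
        pred = (Tinv[:3, :3] @ landmarks.T).T + Tinv[:3, 3]
        err = np.sum((pred - observed) ** 2)
        weights.append(np.exp(-err / (2.0 * meas_noise ** 2)))
    w = np.array(weights)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return [particles[i] for i in idx], w
```

In this sketch the resampled particle set is a discrete approximation of the pose posterior; nuisance variability (illumination, occlusion) is assumed to have been removed upstream by the attributed point-cloud representation, so the filter only reasons about geometry.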