Visual Navigation Using Heterogeneous Landmarks and Unsupervised Geometric Constraints
Academic Article
Overview
Abstract
© 2015 IEEE. We present a heterogeneous landmark-based visual navigation approach for a monocular mobile robot. We utilize heterogeneous visual features, such as points, line segments, lines, planes, and vanishing points, and their inner geometric constraints, managed by a novel multilayer feature graph (MFG). Our method extends the local bundle adjustment-based visual simultaneous localization and mapping (SLAM) framework by explicitly exploiting the heterogeneous features and their inner geometric relationships in an unsupervised manner. As a result, our heterogeneous landmark-based visual navigation algorithm takes a video stream as input, initializes and iteratively updates the MFG based on extracted key frames, and refines robot localization and MFG landmarks throughout the process. We present pseudocode for the algorithm and analyze its complexity. We have evaluated our method and compared it with state-of-the-art point landmark-based visual SLAM methods on multiple indoor and outdoor datasets. In particular, on the KITTI dataset, our method reduces translational error by 52.5% on urban sequences where rectilinear structures dominate the scene.
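For intuition, the sketch below outlines the keyframe-driven control flow the abstract describes: read a video stream, promote key frames, initialize and update the MFG, and refine poses and landmarks. The class and function names, the OpenCV detectors, the optical-flow key-frame test, and the bundle-adjustment placeholder are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np


class MultilayerFeatureGraph:
    """Hypothetical stand-in for the MFG: stores point and line-segment
    observations (planes, lines, and vanishing points omitted here)."""

    def __init__(self):
        self.landmarks = {"points": [], "segments": []}

    def initialize(self, key_frame_a, key_frame_b):
        # The paper initializes the MFG from the first two key frames;
        # this placeholder only detects features in the second one.
        self.landmarks = self._detect_features(key_frame_b)

    def update(self, key_frame):
        # Placeholder update: re-detect features instead of matching
        # new observations against existing landmarks.
        self.landmarks = self._detect_features(key_frame)

    @staticmethod
    def _detect_features(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        points = cv2.ORB_create().detect(gray, None)          # point features
        segments = cv2.createLineSegmentDetector().detect(gray)[0]  # line segments
        return {"points": points, "segments": segments}


def is_key_frame(prev_gray, gray, min_flow=15.0):
    """Toy key-frame criterion: promote a frame when the mean optical-flow
    magnitude exceeds a threshold (not the paper's actual rule)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return float(np.linalg.norm(flow, axis=2).mean()) > min_flow


def run(video_path):
    cap = cv2.VideoCapture(video_path)
    mfg = MultilayerFeatureGraph()
    key_frames, prev_gray = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is None or is_key_frame(prev_gray, gray):
            key_frames.append(frame)
            if len(key_frames) == 2:
                mfg.initialize(key_frames[0], key_frames[1])
            elif len(key_frames) > 2:
                mfg.update(frame)
                # Local bundle adjustment over a sliding window of key frames
                # would refine camera poses and MFG landmarks at this point.
            prev_gray = gray
    cap.release()
    return mfg
```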