Error Aware Monocular Visual Odometry Using Vertical Line Pairs for Small Robots in Urban Areas
Conference Paper
Overview
abstract
-
We report a new error-aware monocular visual odometry method that uses only vertical lines, such as the vertical edges of buildings and poles in urban areas, as landmarks. Because vertical lines are easy to extract, insensitive to lighting conditions and shadows, and sensitive to robot movements on the ground plane, they are more robust features than regular point or line features. We derive a recursive visual odometry method based on vertical line pairs. We analyze how errors are propagated and introduced in the continuous odometry process by deriving a closed-form representation of the covariance matrix. We formulate the minimum-variance ego-motion estimation problem and present a method that outputs weights for different vertical line pairs. The resulting visual odometry method is tested in physical experiments and compared with two existing methods based on point features and line features, respectively. The experimental results show that our method outperforms both counterparts in robustness, accuracy, and speed, with relative errors of less than 2%. Copyright © 2010, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
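The abstract mentions weighting different vertical line pairs to solve a minimum-variance ego-motion estimation problem. The paper's exact formulation is not given here, but the standard building block for this kind of weighting is inverse-variance fusion: each independent estimate is weighted by the reciprocal of its variance, which minimizes the variance of the combined estimate. A minimal sketch (the function name and scalar setting are illustrative assumptions, not the authors' method):

```python
def fuse_min_variance(estimates, variances):
    """Fuse independent scalar estimates (e.g., per-line-pair motion
    estimates) by inverse-variance weighting.

    This is the generic minimum-variance combination, not the paper's
    specific algorithm: weights w_i = 1/sigma_i^2 (normalized) minimize
    the variance of the fused estimate when estimates are independent.
    """
    weights = [1.0 / v for v in variances]      # w_i proportional to 1/sigma_i^2
    total = sum(weights)
    weights = [w / total for w in weights]      # normalize so weights sum to 1
    fused = sum(w * x for w, x in zip(weights, estimates))
    # Variance of the fused estimate: 1 / sum_i (1/sigma_i^2),
    # never larger than the smallest input variance.
    fused_var = 1.0 / sum(1.0 / v for v in variances)
    return fused, fused_var, weights
```

For example, `fuse_min_variance([1.0, 3.0], [0.5, 1.0])` gives the lower-variance estimate twice the weight of the other, and the fused variance (1/3) is smaller than either input variance.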
name of conference
-
Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2010, Atlanta, Georgia, USA, July 11-15, 2010
published proceedings
-
PROCEEDINGS OF THE TWENTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE (AAAI-10)
author list (cited authors)
-
Zhang, J., & Song, D.
publication date
-
2010