On the Analysis of the Depth Error on the Road Plane for Monocular Vision-Based Robot Navigation (Conference Paper)

abstract

  • A mobile robot equipped with a single camera can take images at different locations to obtain 3D information about the environment for navigation. The depth information perceived by the robot is critical for obstacle avoidance. Given a calibrated camera, the accuracy of depth computation largely depends on the locations where the images are taken. For any given image pair, the depth error in regions close to the camera baseline can be excessively large or even infinite due to the degeneracy introduced by triangulation in depth computation. Unfortunately, this region often overlaps with the robot's moving direction, which could lead to collisions. To address this issue, we analyze the depth computation and propose a predictive depth error model as a function of the motion parameters. We name the region where the depth error is above a given threshold the untrusted area. Since the robot needs to know beforehand how its motion affects the depth error distribution, we propose a closed-form model predicting how the untrusted area is distributed on the road plane for given robot/camera positions. The analytical results have been successfully verified in experiments using a mobile robot. © 2009 Springer-Verlag.
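  The degeneracy the abstract refers to can be illustrated numerically: for a ground point lying near the line through the two camera centers (the baseline, which coincides with the moving direction under forward motion), the two viewing rays are nearly parallel, so a small bearing error produces a very large triangulation error. The following is a minimal top-down 2D sketch of that effect only; the camera spacing, the 0.002 rad bearing noise, and the test points are illustrative assumptions, not the paper's closed-form error model.

    import numpy as np

    def triangulate_2d(c1, c2, d1, d2):
        # Intersect rays c1 + t1*d1 and c2 + t2*d2 on the road plane (top-down 2D view).
        A = np.column_stack((d1, -d2))
        if abs(np.linalg.det(A)) < 1e-9:
            return None  # rays (near-)parallel: triangulation degenerates
        t = np.linalg.solve(A, c2 - c1)
        return c1 + t[0] * d1

    def unit(v):
        return v / np.linalg.norm(v)

    def rotate(v, a):
        R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
        return R @ v

    def depth_error(c1, c2, p, noise=0.002):
        # Perturb each bearing by a small angle (a crude stand-in for pixel noise)
        # and measure how far the triangulated point moves from the true point p.
        d1 = rotate(unit(p - c1), +noise)
        d2 = rotate(unit(p - c2), -noise)
        est = triangulate_2d(c1, c2, d1, d2)
        return np.inf if est is None else np.linalg.norm(est - p)

    # Two camera positions 1 m apart along the robot's moving direction (the baseline).
    c1, c2 = np.array([0.0, 0.0]), np.array([0.0, 1.0])
    for p in [np.array([5.0, 0.5]),    # well off the baseline: error stays small
              np.array([0.2, 5.0]),    # near the baseline extension: error grows to meters
              np.array([0.0, 6.0])]:   # on the baseline itself: the estimate is essentially meaningless
        print(p, depth_error(c1, c2, p))

  Because the baseline is aligned with the moving direction, the large-error (untrusted) region in this toy setup lies directly ahead of the robot, which is the situation the paper's closed-form model is designed to predict.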

name of conference

  • Algorithmic Foundation of Robotics VIII, Selected Contributions of the Eighth International Workshop on the Algorithmic Foundations of Robotics, WAFR 2008, Guanajuato, México, December 7-9, 2008

published proceedings

  • ALGORITHMIC FOUNDATIONS OF ROBOTICS VIII

author list (cited authors)

  • Song, D., Lee, H., & Yi, J.

citation count

  • 4

complete list of authors

  • Song, Dezhen; Lee, Hyunnam; Yi, Jingang

publication date

  • March 2010