Vision-based docking under variable lighting conditions (Conference Paper)

abstract

  • This paper describes our progress in near-range (0 to 2 meters) ego-centric docking using vision under variable lighting conditions (indoors, outdoors, and at dusk). The docking behavior is fully autonomous and reactive: the robot responds directly to the ratio of the pixel counts of two colored fiducials without constructing an explicit model of the landmark. This is similar to visual homing in insects and offers low computational complexity of O(n²) and a fast update rate. To segment the colored fiducials accurately under unconstrained lighting conditions, the spherical coordinate transform (SCT) color space is used, rather than RGB or HSV, in conjunction with an adaptive segmentation algorithm. Experiments were conducted with a 'daughter' robot docking with a 'mother' robot. Results showed that 1) vision-based docking is faster than teleoperation yet equivalent in performance, and 2) adaptive segmentation is more robust under challenging lighting conditions, including outdoors.
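  The abstract describes two ingredients: an SCT color transform for lighting-invariant segmentation, and a reactive steering cue taken from the pixel-count ratio of the two fiducials. The Python sketch below illustrates both, assuming a common textbook formulation of the SCT (intensity as vector magnitude plus two chromatic angles); the function names, the fixed angular thresholds, and the synthetic test image are hypothetical stand-ins, and the paper's actual adaptive segmentation algorithm is not reproduced here.

  ```python
  import numpy as np

  def rgb_to_sct(rgb):
      """Convert an HxWx3 RGB image to spherical coordinates.

      One common formulation of the spherical coordinate transform (SCT):
      intensity is the RGB vector magnitude; the two angles capture
      chromaticity largely independent of brightness, which is why SCT
      is attractive under variable lighting.
      """
      rgb = rgb.astype(np.float64)
      r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
      inten = np.sqrt(r**2 + g**2 + b**2)          # overall intensity
      eps = 1e-9                                   # guard against divide-by-zero
      ang_a = np.arccos(np.clip(b / (inten + eps), -1.0, 1.0))
      ang_b = np.arccos(np.clip(r / (inten * np.sin(ang_a) + eps), -1.0, 1.0))
      return inten, ang_a, ang_b

  def docking_signal(mask_left, mask_right):
      """Reactive steering cue: ratio of pixel counts of the two fiducial masks.

      A ratio near 1 indicates the robot is centered on the landmark;
      deviation indicates which way to turn. Counting pixels is linear in
      the number of pixels, i.e. O(n²) for an n x n image, consistent
      with the abstract's complexity claim.
      """
      n_left = np.count_nonzero(mask_left)
      n_right = np.count_nonzero(mask_right)
      return n_left / max(n_right, 1)

  if __name__ == "__main__":
      # Tiny synthetic frame: left half is a red fiducial, right half green.
      img = np.zeros((4, 4, 3), dtype=np.uint8)
      img[:, :2, 0] = 200
      img[:, 2:, 1] = 200
      inten, ang_a, ang_b = rgb_to_sct(img)
      # Fixed thresholds on the second chromatic angle, standing in for the
      # paper's adaptive segmentation (which would adapt these per frame).
      red_mask = ang_b < 0.3
      green_mask = ang_b > 1.2
      print("steering ratio:", docking_signal(red_mask, green_mask))  # ~1.0, centered
  ```

  In practice low-intensity pixels would be masked out before thresholding the angles, since the chromatic angles of near-black pixels are numerically meaningless.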

name of conference

  • Unmanned Ground Vehicle Technology II

published proceedings

  • Proceedings of SPIE

author list (cited authors)

  • Murphy, R. R., Hyams, J. A., Minten, B. W., & Micire, M.

citation count

  • 0

complete list of authors

  • Murphy, Robin R.; Hyams, Jeffrey A.; Minten, Brian W.; Micire, Mark

editor list (cited editors)

  • Gerhart, G. R., Gunderson, R. W., & Shoemaker, C. M.

publication date

  • July 2000