Vision-guided heterogeneous mobile robot docking (Conference Paper)

abstract

  • Teams of heterogeneous mobile robots are a key aspect of future unmanned systems for operations in complex and dynamic urban environments, such as that envisioned by DARPA's Tactical Mobile Robotics program. One example of an interaction among such team members is the docking of a small robot of limited sensory and processing capability with a larger, more capable robot. Applications for such docking include the transfer of power, data, and material, as well as physically combined maneuver or manipulation. A two-robot system is considered in this paper. The smaller `throwable' robot contains a video camera capable of imaging the larger `packable' robot and transmitting the imagery. The packable robot can both sense the throwable robot through an onboard camera and sense itself through the throwable robot's transmitted video, and is capable of processing imagery from either source. This paper describes recent results in the development of control and sensing strategies for automatic mid-range docking of these two robots. Decisions addressed include the selection of which robot's image sensor to use and which robot to maneuver. Initial experimental results are presented for docking using sensor data from each robot.
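The abstract's two decisions, which robot's camera supplies imagery and which robot maneuvers, can be illustrated with a minimal sketch. This is purely hypothetical code (the paper's actual strategy is not given here); the rule of thumb assumed is that the robot with the partner in view supplies the reference imagery, ties go to the packable robot's greater processing capacity, and the non-sensing robot is the one maneuvered so the visual target stays fixed.

```python
# Hypothetical sketch, not from the paper: a simple decision rule for
# selecting the camera source and the maneuvering robot during docking.
from dataclasses import dataclass


@dataclass
class RobotState:
    name: str
    sees_partner: bool       # is the partner robot visible in this camera?
    processing_power: float  # assumed relative image-processing capacity


def select_docking_strategy(throwable: RobotState, packable: RobotState):
    """Return (camera_source, maneuvering_robot) names.

    Assumed rule: use the camera of whichever robot currently sees its
    partner; if both or neither do, fall back to the side with more
    processing capacity. The other robot is the one maneuvered.
    """
    if packable.sees_partner and not throwable.sees_partner:
        return packable.name, throwable.name
    if throwable.sees_partner and not packable.sees_partner:
        return throwable.name, packable.name
    # Tie (both or neither see the partner): sense on the more capable side.
    if packable.processing_power >= throwable.processing_power:
        return packable.name, throwable.name
    return throwable.name, packable.name
```

For example, if only the throwable robot has the packable robot in view, the sketch selects the throwable robot's transmitted video as the sensor source and maneuvers the packable robot.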

name of conference

  • Sensor Fusion and Decentralized Control in Robotic Systems II

published proceedings

  • Proceedings of SPIE

author list (cited authors)

  • Spofford, J. R., Blitch, J., Klarquist, W. N., & Murphy, R. R.

citation count

  • 7

complete list of authors

  • Spofford, John R.; Blitch, John; Klarquist, William N.; Murphy, Robin R.

editor list (cited editors)

  • McKee, G. T., & Schenker, P. S.

publication date

  • August 1999