Visual Programming for Mobile Robot Navigation Using High-level Landmarks
Conference Paper
Abstract
We propose a visual programming system that allows users to specify navigation tasks for mobile robots using high-level landmarks in a virtual reality (VR) environment constructed from the output of visual simultaneous localization and mapping (vSLAM). The VR environment provides a Google Street View-like interface through which users can familiarize themselves with the robot's working environment, specify high-level landmarks, and assign task-level motion commands to each landmark. Our system builds a roadmap from the pose graph produced by vSLAM. Based on the roadmap, the high-level landmarks, and the task-level motion commands, the system generates an output path for the robot to accomplish the navigation task. We present the data structures, architecture, interface, and algorithms of our system and show that, given n_s search-type motion commands, it generates a path in O(n_s(n_r log n_r + m_r)) time, where n_r and m_r are the number of roadmap nodes and edges, respectively. We have implemented our system and tested it on real-world data.
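The stated bound O(n_s(n_r log n_r + m_r)) is consistent with running one Dijkstra search over the roadmap per search-type motion command. The paper itself gives no code; the sketch below is a minimal illustration of that interpretation, not the authors' implementation. All names here (plan_task, adj, landmark_goals) are hypothetical, and the roadmap is assumed to be a weighted adjacency-list dict mapping each node to (neighbor, edge_cost) pairs.

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths over the roadmap.
    One call costs O(n_r log n_r + m_r) with a binary heap."""
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def extract_path(prev, source, target):
    """Walk predecessor links back from target to source
    (assumes target was reached by the search)."""
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return path[::-1]

def plan_task(adj, start, landmark_goals):
    """Chain one Dijkstra search per search-type motion command,
    giving O(n_s(n_r log n_r + m_r)) total for n_s commands."""
    full_path, current = [start], start
    for goal in landmark_goals:
        _, prev = dijkstra(adj, current)
        full_path += extract_path(prev, current, goal)[1:]
        current = goal
    return full_path
```

Under these assumptions, a navigation task reduces to an ordered list of landmark nodes on the roadmap, and the output path is the concatenation of the per-command shortest paths.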
Conference
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)