Best Viewpoints for External Robots or Sensors Assisting Other Robots
Bomb squads, SWAT teams, nuclear workers, and first responders typically use two robots to complete a single task: the primary robot performs the task while operators use a second robot to get a better view of what the primary robot is doing. Coordinating the two robots is difficult, but the biggest problem is not manual coordination; it is that operators often do not place the secondary robot where it provides the viewpoint that best improves performance. While studies dating to 2001 have consistently shown that operators do not pick optimal viewpoints, there has been no formal theory of what makes a viewpoint good for a given type of task. This project will create such a formal theory using perceptual psychology and apply it to an Endeavor Packbot 510 mobile robot modified to carry a tethered Fotokite unmanned aerial vehicle (UAV). Guided by the theory, the Packbot will perform a remote task while the UAV autonomously moves to maintain the optimal viewpoint given the clutter in the environment. This will reduce the manpower, inefficiency, and time it takes for robots to accomplish tasks in all domains, from public safety to manufacturing. The theory can also be applied to placing external sensors or cameras where they will be most helpful in controlling telecommuting robots, construction robots, or space robots.

This project will provide a fundamental, principled understanding of the perception needed to reduce the cognitive demands on robot operators, thereby increasing productivity while reducing costly errors. The resulting model will allow ground robots, aerial robots, or external cameras to autonomously position themselves to give the best viewpoint for the current task. The project's approach uses the cognitive-science concept of affordances, in which the potential for an action can be perceived directly, without knowing the actor's intent or internal models, and thus is universal to all robots and tasks.
The project will learn from expert robot operators the value of viewpoints for four affordances (passability, reachability, manipulability, traversability) as the operators use a computer simulator developed under previous NSF funding to perform tasks. Machine learning will then cluster the performance data into a set of equivalent manifolds, each representing the relative value of the viewpoints in that region for an affordance. A risk-based planner, also developed under previous NSF funding, will use these rankings to move the external robot to the best view for the operator, given the risk of the path to that view. The resulting theory will be implemented on a Packbot ground robot (primary) carrying a tethered Fotokite aerial assistant (secondary) for two tasks: a door-opening and traversal test, capturing the most common bottleneck in indoor navigation, and a sensor-insertion task duplicating an especially difficult mission at Fukushima. The project will enable robots to be more useful during disasters and public-safety incidents, accelerate the safe decommissioning of the Fukushima Daiichi facility, and aid NASA and NIH missions for assistive robots.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
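The core selection step described above, choosing the best viewpoint given both its learned value for an affordance and the risk of the path to reach it, can be illustrated with a minimal sketch. All names, the scoring rule (value discounted by path risk), and the sample numbers below are illustrative assumptions, not the project's actual planner or learned values.

```python
# Hypothetical sketch: rank candidate viewpoints for an affordance by their
# learned operator-performance value, discounted by the risk of the path the
# secondary robot must fly to reach them. Numbers are made up for illustration.

def best_viewpoint(candidates):
    """candidates: list of (name, value, path_risk), with value and
    path_risk in [0, 1]. Returns the candidate that maximizes the
    risk-discounted value: value * (1 - path_risk)."""
    return max(candidates, key=lambda c: c[1] * (1.0 - c[2]))

# Example: the highest-value view may lose to a safer, slightly worse one.
views = [
    ("overhead", 0.9, 0.6),  # best view of the task, but risky tether path
    ("oblique",  0.7, 0.1),  # somewhat lower value, safe approach
    ("ground",   0.4, 0.0),  # safe but poor viewpoint
]
print(best_viewpoint(views)[0])  # -> oblique (0.63 beats 0.36 and 0.40)
```

The design choice sketched here is that value and path risk are traded off multiplicatively, so a high-value viewpoint with a dangerous approach can be overruled by a safer alternative, which matches the abstract's framing of a risk-based planner choosing among ranked views.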