Agent-Based Gesture Tracking

abstract

  • We describe an agent-based approach to the visual tracking of human hands and head that represents a very useful "middle ground" between simple model-free tracking and highly constrained model-based solutions. It combines the simplicity, speed, and flexibility of tracking without explicit shape models with the ability to utilize domain knowledge and to apply various constraints characteristic of more elaborate model-based tracking approaches. One of the key contributions of our system, called AgenTrac, is that it unifies the power of data-fusion (cue-integration) methodologies with a well-organized extended path-coherence-resolution approach designed to handle crossing trajectories of multiple objects. Both approaches are combined in an easily configurable framework. We are not aware of any path-coherence or data-fusion solution in the computer vision literature that matches the breadth, generality, and flexibility of our approach. The AgenTrac system is not limited to tracking human motion; in fact, one of its main strengths is that it can be easily reconfigured to track many types of objects in video sequences. The multiagent paradigm simplifies the application of basic domain-specific constraints and makes the entire system flexible. The knowledge necessary for effective tracking can be easily encoded in agent hierarchies and agent interactions. © 2005 IEEE.
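The abstract's central technical idea, path-coherence resolution of crossing trajectories, can be illustrated with a small sketch. This is not AgenTrac's actual implementation (the paper's agent hierarchy and cue-integration machinery are far richer); it is a generic Sethi-Jain-style path-coherence cost, with illustrative weights and a hypothetical `resolve_crossing` helper, showing how a smoothness penalty disambiguates two objects whose paths cross:

```python
import math

def path_coherence(p_prev, p_curr, p_next, w_dir=0.5, w_speed=0.5):
    """Path-coherence cost in the Sethi-Jain style: penalizes abrupt
    changes in direction and speed between consecutive trajectory
    segments. Lower is smoother. Weights are illustrative, not from
    the paper."""
    v1 = (p_curr[0] - p_prev[0], p_curr[1] - p_prev[1])
    v2 = (p_next[0] - p_curr[0], p_next[1] - p_curr[1])
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return w_dir + w_speed  # maximal penalty for a degenerate segment
    cos_theta = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
    dir_cost = w_dir * (1 - cos_theta) / 2                  # 0 = straight line
    speed_cost = w_speed * (1 - 2 * math.sqrt(n1 * n2) / (n1 + n2))  # 0 = equal speeds
    return dir_cost + speed_cost

def resolve_crossing(tracks, detections):
    """Hypothetical two-object crossing resolver: tracks is a list of
    two trajectories (each a list of (x, y) points), detections holds
    two candidate points for the next frame. Returns the assignment
    (one detection per track) with the lower total coherence cost."""
    a, b = detections
    cost_keep = sum(path_coherence(t[-2], t[-1], d)
                    for t, d in zip(tracks, (a, b)))
    cost_swap = sum(path_coherence(t[-2], t[-1], d)
                    for t, d in zip(tracks, (b, a)))
    return (a, b) if cost_keep <= cost_swap else (b, a)

# Two objects moving toward each other; the smooth continuation wins.
tracks = [[(0, 0), (1, 0)], [(3, 1), (2, 1)]]
print(resolve_crossing(tracks, [(2, 0), (1, 1)]))  # ((2, 0), (1, 1))
```

A full multi-object system would generalize this pairwise choice to an assignment problem over all tracks and detections, which is where the paper's agent-based organization and cue integration come in.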

author list (cited authors)

  • Bryll, R., Rose, R. T., & Quek, F.

citation count

  • 8

publication date

  • November 2005