Upper Body Tracking and 3D Gesture Reconstruction Using Agent-Based Architecture
Authored by Chao Peng, Bing Fang, Francis Quek, Yong Cao, Seung In Park, Liguang Xie
Date Published: 2015
DOI: 10.1142/S0219467815500163
Sponsors:
United States National Science Foundation (NSF)
Platforms:
No platforms listed
Model Documentation:
Other Narrative
Model Code URLs:
Model code not found
Abstract
In this paper, we present an upper human body tracking system with an
agent-based architecture. Our agent-based approach departs from the
process-centric model, in which agents are bound to specific processes,
and introduces a novel model in which agents are bound to the objects or
sub-objects being recognized or tracked. To demonstrate the
effectiveness of our system, we use stereo video streams captured by
calibrated stereo cameras as inputs and synthesize human animations
represented by 3D skeletal motion data. Unlike our previous research,
the new system does not require a restricted capture environment with
special lighting conditions and projected patterns, and subjects can
wear everyday clothes (we do NOT use any markers). Building on the
success of that previous research, our pre-designed agents are
autonomous, self-aware entities capable of communicating with other
agents to perform tracking within agent coalitions. Each agent, equipped
with high-level abstracted knowledge, seeks `evidence' for its existence
both from low-level features (e.g. motion vector fields, color blobs)
and from its peers (other agents representing body parts with which it
is compatible). The power of the agent-based approach is the flexibility
with which domain information may be encoded within each agent to
produce an overall tracking solution.
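Since no model code is available, the object-bound agent idea in the abstract can be illustrated with a minimal sketch. All class names, weights, and scoring rules below are hypothetical assumptions for illustration, not taken from the paper: each agent is bound to a body part and combines low-level feature evidence with support from compatible peer agents.

```python
from dataclasses import dataclass, field

@dataclass
class BodyPartAgent:
    """Agent bound to one tracked body part (hypothetical sketch)."""
    name: str
    feature_score: float = 0.0                  # evidence from low-level features
    peers: list = field(default_factory=list)   # compatible body-part agents

    def peer_support(self) -> float:
        """Average evidence contributed by compatible peer agents."""
        if not self.peers:
            return 0.0
        return sum(p.feature_score for p in self.peers) / len(self.peers)

    def existence_confidence(self, w_feature=0.7, w_peer=0.3) -> float:
        """Combine feature evidence and peer support (weights are assumptions)."""
        return w_feature * self.feature_score + w_peer * self.peer_support()

# Usage: a 'forearm' agent backed by motion-vector evidence
# and one compatible 'hand' peer agent.
hand = BodyPartAgent("hand", feature_score=0.8)
forearm = BodyPartAgent("forearm", feature_score=0.6, peers=[hand])
print(round(forearm.existence_confidence(), 2))  # 0.7*0.6 + 0.3*0.8 = 0.66
```

In the paper's actual system the agents additionally communicate within coalitions; the linear combination here only stands in for that richer evidence-gathering process.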
Tags
multiagent systems
Model
Human motion