Learning behavior patterns from video for agent-based crowd modeling and simulation
Authored by Wentong Cai, Jinghui Zhong, Linbo Luo, Mingbi Zhao
Date Published: 2016
DOI: 10.1007/s10458-016-9334-8
Sponsors:
National Natural Science Foundation of China
Platforms:
Java
Model Documentation:
Other Narrative
Pseudocode
Mathematical description
Model Code URLs:
https://www.dropbox.com/sh/bwsuzjt4u0h306w/AADmfWV68Qwkz1GufXrrBdEBa?dl=0
Abstract
This paper proposes a novel data-driven modeling framework to construct an agent-based crowd model from real-world video data. The constructed crowd model can generate crowd behaviors that match those observed in the video and can be used to predict the trajectories of pedestrians in the same scenario. The framework uses a dual-layer architecture to model crowd behaviors: the bottom layer models microscopic collision-avoidance behaviors, while the top layer models macroscopic crowd behaviors such as goal selection and path navigation patterns. An automatic learning algorithm is proposed to learn behavior patterns from video data, and the learned patterns are then integrated into the dual-layer architecture to generate realistic crowd behaviors. To validate its effectiveness, the proposed framework is applied to two different real-world scenarios. The simulation results demonstrate that the framework can generate crowd behaviors similar to those observed in the videos in terms of crowd density distribution. In addition, the framework offers promising performance in predicting the trajectories of pedestrians.
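The abstract's dual-layer architecture can be pictured with a minimal Java sketch: a top layer that samples a goal and follows learned navigation waypoints, and a bottom layer that steers each agent while avoiding neighbors. This is an illustrative assumption of one possible structure, not the authors' implementation (the actual Java code is linked above); the class names, the repulsion-based avoidance rule, the 1 m interaction radius, and the 1.3 m/s preferred speed are all hypothetical choices made here for clarity.

```java
import java.util.List;
import java.util.Map;
import java.util.Random;

/** Hypothetical sketch of the dual-layer agent described in the abstract. */
public class DualLayerAgent {
    // Top layer: macroscopic patterns learned from video (assumed representation).
    private final Map<String, Double> goalSelectionProbabilities; // goal id -> probability
    private final List<double[]> navigationWaypoints;             // learned path toward the goal
    // Bottom layer state.
    private final double[] position;
    private final double[] velocity;
    private final Random rng = new Random();

    public DualLayerAgent(double[] start,
                          Map<String, Double> goalProbs,
                          List<double[]> waypoints) {
        this.position = start.clone();
        this.velocity = new double[]{0.0, 0.0};
        this.goalSelectionProbabilities = goalProbs; // assumed non-empty
        this.navigationWaypoints = waypoints;        // assumed mutable (e.g., ArrayList)
    }

    /** Top layer: sample a goal according to the learned goal-selection pattern. */
    public String selectGoal() {
        double r = rng.nextDouble(), cumulative = 0.0;
        for (Map.Entry<String, Double> e : goalSelectionProbabilities.entrySet()) {
            cumulative += e.getValue();
            if (r <= cumulative) {
                return e.getKey();
            }
        }
        return goalSelectionProbabilities.keySet().iterator().next(); // numerical fallback
    }

    /**
     * Bottom layer: one simulation step steering toward the next waypoint with a
     * simple repulsion-based collision avoidance (a placeholder for whichever
     * microscopic model the framework actually uses).
     */
    public void step(List<double[]> neighbourPositions, double dt) {
        double[] target = navigationWaypoints.isEmpty() ? position : navigationWaypoints.get(0);
        double[] desired = {target[0] - position[0], target[1] - position[1]};
        normalise(desired);
        // Repulsive contribution from nearby agents (illustrative only).
        for (double[] other : neighbourPositions) {
            double dx = position[0] - other[0], dy = position[1] - other[1];
            double dist = Math.max(Math.hypot(dx, dy), 1e-6);
            if (dist < 1.0) {                          // assumed interaction radius of 1 m
                desired[0] += (dx / dist) * (1.0 - dist);
                desired[1] += (dy / dist) * (1.0 - dist);
            }
        }
        normalise(desired);
        double preferredSpeed = 1.3;                   // assumed walking speed in m/s
        velocity[0] = desired[0] * preferredSpeed;
        velocity[1] = desired[1] * preferredSpeed;
        position[0] += velocity[0] * dt;
        position[1] += velocity[1] * dt;
        // Advance to the next waypoint once the current one is reached.
        if (!navigationWaypoints.isEmpty()
                && Math.hypot(target[0] - position[0], target[1] - position[1]) < 0.5) {
            navigationWaypoints.remove(0);
        }
    }

    private static void normalise(double[] v) {
        double n = Math.hypot(v[0], v[1]);
        if (n > 1e-9) { v[0] /= n; v[1] /= n; }
    }
}
```

In the paper's framework the top-layer goal-selection and path-navigation patterns are learned automatically from the video data; in this sketch they are simply supplied by the caller as a probability map and a waypoint list.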
Tags
calibration