Building Agent-Based Walking Models by Machine-Learning on Diverse Databases of Space-Time Trajectory Samples

Authored by Paul Torrens, Xun Li, William A. Griffin

Date Published: 2011-07

DOI: 10.1111/j.1467-9671.2011.01261.x

Sponsors: United States National Science Foundation (NSF)

Platforms: No platforms listed

Model Documentation: Other; Narrative; Flow charts; Mathematical description

Model Code URLs: Model code not found

Abstract

We introduce a novel scheme for automatically deriving synthetic walking (locomotion) and movement (steering and avoidance) behavior in simulation from simple trajectory samples. We use a combination of observed, recorded real-world movement trajectory samples and synthetic, agent-generated movement as inputs to a machine-learning scheme. This scheme produces movement behavior for non-sampled scenarios in simulation, for applications that can differ widely from the original collection settings. It does this by benchmarking a simulated pedestrian's relative behavioral geography, local physical environment, and neighboring agent-pedestrians, using spatial analysis, spatial data access, classification, and clustering. The scheme then weights, trains, and tunes likely synthetic movement behavior per agent, per location, per time-step, and per scenario. To demonstrate its usefulness, we apply the scheme to the task of generating synthetic, non-sampled, agent-based pedestrian movement in simulated urban environments, where it serves as a useful substitute for traditional transition-driven methods of determining agent behavior. The potential broader applications of the scheme are numerous and include the design and delivery of location-based services, evaluation of architectures for mobile communications technologies, what-if experimentation in agent-based models with hypotheses that are informed or translated from data, and the construction of algorithms for extracting and annotating space-time paths in massive data sets.
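The following is a minimal, illustrative sketch (not the authors' implementation) of the kind of pipeline the abstract describes: per-agent, per-time-step features summarizing a pedestrian's local environment and neighbors are extracted from trajectory samples, a classifier is trained on the actions observed at those steps, and simulated agents then query the trained model for steering choices in non-sampled scenarios. The feature set, the discrete action labels, and the use of scikit-learn's KNeighborsClassifier are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Discretized steering actions an agent may take at each time step (assumed labels).
ACTIONS = ["keep_heading", "veer_left", "veer_right", "slow_down", "stop"]

def local_features(position, heading, neighbor_positions, obstacle_distance):
    """Benchmark an agent's situation: nearest-neighbor geometry plus local environment."""
    if len(neighbor_positions) > 0:
        offsets = np.asarray(neighbor_positions, dtype=float) - np.asarray(position, dtype=float)
        dists = np.linalg.norm(offsets, axis=1)
        nearest = offsets[np.argmin(dists)]
        nearest_dist = dists.min()
    else:
        nearest = np.zeros(2)
        nearest_dist = np.inf
    return np.array([heading, nearest[0], nearest[1],
                     min(nearest_dist, 50.0), min(obstacle_distance, 50.0)])

# Training set: features extracted from observed or agent-generated trajectory
# samples, paired with the steering action taken at that step (toy random data here).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))
y_train = rng.choice(ACTIONS, size=200)

model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

# In simulation, each agent queries the trained model at every time step.
features = local_features(position=(2.0, 3.0), heading=0.5,
                          neighbor_positions=[(2.5, 3.5), (4.0, 1.0)],
                          obstacle_distance=6.0)
print(model.predict(features.reshape(1, -1))[0])
```

In this sketch the nearest-neighbor classifier stands in for the paper's combination of classification and clustering; any per-agent learner trained on trajectory-derived features could be substituted at the same point in the loop.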
Tags