Co-evolution in predator prey through reinforcement learning

Authored by Megan M. Olsen, Rachel Fraczkowski

Date Published: 2015

DOI: 10.1016/j.jocs.2015.04.011

Sponsors: Clare Boothe Luce Program

Platforms: Java (MASON)

Model Documentation: Other, Narrative, Flow charts, Mathematical description

Model Code URLs: Model code not found

Abstract

In general, species such as mammals must learn from their environment to survive. Biologists theorize that species evolved over time as ancestors learned the best traits, which allowed them to propagate more than their less effective counterparts. In many instances learning occurs in a competitive environment, where a species evolves alongside its food source and/or its predator. We propose an agent-based model of predators and prey with co-evolution through linear value-function Q-learning, allowing predators and prey to learn during their lifetimes and pass that information to their offspring. Each agent learns the importance of world features via the rewards it receives after each action. We are unaware of prior work that studies co-evolution of predator and prey through simulation such that each entity learns to survive within its world and passes that information on to its progeny, without requiring multiple training runs. We show that this learning results in a more successful species for both predator and prey, and that variations on the reward function do not have a significant impact when both species are learning. However, when only a single species is learning, the reward function may affect the results, although overall improvements to the system are still found. We believe that our approach will allow computational scientists to simulate these environments more accurately. (C) 2015 Elsevier B.V. All rights reserved.
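The abstract describes agents that learn the importance of world features through linear value-function Q-learning, updating after each reward. The sketch below illustrates that general technique in Java (the model's stated platform); it is a minimal illustration of the standard linear Q-learning update, not the paper's actual code, and the class, feature, and parameter names are hypothetical.

```java
import java.util.Arrays;

// Minimal sketch of linear value-function Q-learning: Q(s,a) is approximated
// as a weighted sum of world features, and the weights are adjusted from the
// reward received after each action.
public class LinearQAgent {
    private final double[] weights; // one weight per world feature
    private final double alpha;     // learning rate (hypothetical value)
    private final double gamma;     // discount factor (hypothetical value)

    public LinearQAgent(int numFeatures, double alpha, double gamma) {
        this.weights = new double[numFeatures];
        this.alpha = alpha;
        this.gamma = gamma;
    }

    // Q(s,a) approximated as the dot product of weights and features.
    public double qValue(double[] features) {
        double q = 0.0;
        for (int i = 0; i < weights.length; i++) {
            q += weights[i] * features[i];
        }
        return q;
    }

    // Standard linear Q-learning update:
    //   w_i <- w_i + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)) * f_i
    public void update(double[] features, double reward, double maxNextQ) {
        double tdError = reward + gamma * maxNextQ - qValue(features);
        for (int i = 0; i < weights.length; i++) {
            weights[i] += alpha * tdError * features[i];
        }
    }

    public double[] getWeights() {
        return Arrays.copyOf(weights, weights.length);
    }

    public static void main(String[] args) {
        // Hypothetical prey agent with two features,
        // e.g. predator proximity and food proximity.
        LinearQAgent prey = new LinearQAgent(2, 0.1, 0.9);
        double[] features = {1.0, 0.5};
        prey.update(features, -1.0, 0.0); // negative reward: predator nearby
        System.out.println(Arrays.toString(prey.getWeights()));
    }
}
```

Because the weights are the only learned state, passing them to offspring at reproduction (as the abstract describes) amounts to copying this weight vector into the child agent.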
Tags
Simulation