Strategies for simulating pedestrian navigation with multiple reinforcement learning agents. Martinez-Gil, F., Lozano, M., & Fernández, F. Autonomous Agents and Multi-Agent Systems, 29(1):98–130, January 2015.
© 2014, The Author(s). In this paper, a new multi-agent reinforcement learning approach is introduced for the simulation of pedestrian groups. Unlike other solutions, where the behaviors of the pedestrians are coded in the system, in our approach the agents learn by interacting with the environment. The embodied agents must learn to control their velocity, avoiding obstacles and the other pedestrians, to reach a goal inside the scenario. The main contribution of this paper is to propose this new methodology that uses different iterative learning strategies, combining vector quantization (state-space generalization) with the Q-learning algorithm (VQQL). Two algorithmic schemas, Iterative VQQL and Incremental, which differ in the way they address the problems, have been designed and used with and without transfer of knowledge. These algorithms are tested and compared with the VQQL algorithm as a baseline in two scenarios where agents need to solve well-known problems in pedestrian modeling. In the first, agents in a closed room need to reach the single exit, producing and solving a bottleneck. In the second, two groups of agents inside a corridor need to reach their goals, which are placed at opposite ends (they need to solve the crossing). In the first scenario, we focus on scalability, use metrics from the pedestrian modeling field, and compare with Helbing's social force model. The emergence of collective behaviors, namely the shell-shaped clogging in front of the exit in the first scenario and the lane formation that solves the crossing in the second, has been obtained and analyzed. The results demonstrate that the proposed schemas find policies that carry out the tasks, suggesting that they are applicable and generalizable to the simulation of pedestrian groups.
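
Note: the sketch below is only a minimal illustration of the VQQL idea the abstract describes, using k-means as the vector quantizer and tabular Q-learning over the resulting codebook indices. The class name (VQQLAgent), state features, and hyperparameters are assumptions for illustration, not the authors' implementation or the paper's Iterative/Incremental schemas.

# Minimal VQQL sketch (assumed setup): continuous states are generalized with a
# vector quantizer (k-means) and Q-learning is run over the prototype indices.
import numpy as np
from sklearn.cluster import KMeans

class VQQLAgent:
    def __init__(self, state_samples, n_actions, n_prototypes=256,
                 alpha=0.1, gamma=0.95, epsilon=0.1):
        # Vector quantization: learn a codebook from previously sampled states.
        self.quantizer = KMeans(n_clusters=n_prototypes, n_init=10).fit(state_samples)
        self.q = np.zeros((n_prototypes, n_actions))  # tabular Q over prototypes
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.n_actions = n_actions

    def discretize(self, state):
        # Map a continuous state vector to the index of its nearest prototype.
        return int(self.quantizer.predict(np.asarray(state).reshape(1, -1))[0])

    def act(self, state):
        # Epsilon-greedy action selection over the quantized state.
        s = self.discretize(state)
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.n_actions)
        return int(np.argmax(self.q[s]))

    def update(self, state, action, reward, next_state, done):
        # Standard Q-learning backup applied to the quantized state indices.
        s, s2 = self.discretize(state), self.discretize(next_state)
        target = reward if done else reward + self.gamma * np.max(self.q[s2])
        self.q[s, action] += self.alpha * (target - self.q[s, action])

An iterative variant in the spirit of the paper's schemas could re-fit the quantizer on states collected under an improved policy and relearn Q between iterations; that outer loop is omitted here.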
@article{Martinez-Gil2015a,
abstract = {{\textcopyright} 2014, The Author(s). In this paper, a new multi-agent reinforcement learning approach is introduced for the simulation of pedestrian groups. Unlike other solutions, where the behaviors of the pedestrians are coded in the system, in our approach the agents learn by interacting with the environment. The embodied agents must learn to control their velocity, avoiding obstacles and the other pedestrians, to reach a goal inside the scenario. The main contribution of this paper is to propose this new methodology that uses different iterative learning strategies, combining vector quantization (state-space generalization) with the Q-learning algorithm (VQQL). Two algorithmic schemas, Iterative VQQL and Incremental, which differ in the way they address the problems, have been designed and used with and without transfer of knowledge. These algorithms are tested and compared with the VQQL algorithm as a baseline in two scenarios where agents need to solve well-known problems in pedestrian modeling. In the first, agents in a closed room need to reach the single exit, producing and solving a bottleneck. In the second, two groups of agents inside a corridor need to reach their goals, which are placed at opposite ends (they need to solve the crossing). In the first scenario, we focus on scalability, use metrics from the pedestrian modeling field, and compare with Helbing's social force model. The emergence of collective behaviors, namely the shell-shaped clogging in front of the exit in the first scenario and the lane formation that solves the crossing in the second, has been obtained and analyzed. The results demonstrate that the proposed schemas find policies that carry out the tasks, suggesting that they are applicable and generalizable to the simulation of pedestrian groups.},
author = {Martinez-Gil, Francisco and Lozano, Miguel and Fern{\'{a}}ndez, Fernando},
doi = {10.1007/s10458-014-9252-6},
issn = {1387-2532},
journal = {Autonomous Agents and Multi-Agent Systems},
keywords = {Collective behaviors,MARL,Pedestrian simulation,VQQL},
month = {jan},
number = {1},
pages = {98--130},
title = {{Strategies for simulating pedestrian navigation with multiple reinforcement learning agents}},
url = {http://link.springer.com/10.1007/s10458-014-9252-6},
volume = {29},
year = {2015}
}
