Tuesday 16:35–17:05 in Main Track

Hitting the gym: controlling traffic with Reinforcement Learning

Steven Nooijen

Audience level:
Intermediate

Description

Finally, a good real-life use case for Reinforcement Learning (RL): traffic control! In this talk I will show you how we hooked up traffic simulation software to Python and how we built our own custom gym environment to run RL experiments with keras-rl for a simple 4-way intersection.

Abstract

Traffic congestion causes unnecessary delay, pollution and increased fuel consumption. Learning-based traffic control algorithms have recently been explored as an alternative to existing traffic control systems, whose logic is often manually configured and therefore suboptimal. In this talk, I will demo how we trained a Reinforcement Learning (RL) algorithm for traffic control and share some of the lessons and best practices we picked up along the way.

The session will start with a conceptual understanding of Reinforcement Learning and how algorithms like (Deep) Q-learning work. I will then explain why this is relevant for traffic control, after which I will zoom in on OpenAI Gym and how to build your own custom gym environment. With such an environment you can easily tap into existing keras-rl algorithms, which will speed up your RL project significantly.
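To give a feel for what such a setup might look like (this is a minimal illustrative sketch, not the code from the talk), the snippet below defines a toy custom gym environment for a 4-way intersection and trains it with keras-rl's DQNAgent. The environment name, the toy arrival/discharge dynamics, and the reward (negative total queue length) are assumptions for illustration; it also assumes the original keras-rl with standalone Keras and the classic gym reset()/step() API.

```python
import numpy as np
import gym
from gym import spaces

from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.memory import SequentialMemory
from rl.policy import EpsGreedyQPolicy


class FourWayIntersectionEnv(gym.Env):
    """Toy stand-in for a 4-way intersection.

    Observation: queue length on each of the four approaches.
    Action: which signal phase gets green (0 = N-S, 1 = E-W).
    A real environment would query the traffic simulator instead of
    the toy dynamics below.
    """

    def __init__(self, max_queue=20):
        self.max_queue = max_queue
        self.action_space = spaces.Discrete(2)
        self.observation_space = spaces.Box(
            low=0, high=max_queue, shape=(4,), dtype=np.float32)
        self.queues = np.zeros(4, dtype=np.float32)
        self.steps = 0

    def reset(self):
        self.queues = np.zeros(4, dtype=np.float32)
        self.steps = 0
        return self.queues.copy()

    def step(self, action):
        # Toy dynamics: random arrivals, the green approaches discharge cars.
        arrivals = np.random.poisson(1.0, size=4).astype(np.float32)
        self.queues = np.clip(self.queues + arrivals, 0, self.max_queue)
        green = [0, 1] if action == 0 else [2, 3]
        self.queues[green] = np.maximum(self.queues[green] - 3.0, 0.0)
        reward = -float(self.queues.sum())   # penalise total waiting traffic
        self.steps += 1
        done = self.steps >= 200
        return self.queues.copy(), reward, done, {}


env = FourWayIntersectionEnv()
nb_actions = env.action_space.n

# Simple Q-network: keras-rl feeds observations with shape (window_length,) + obs shape.
model = Sequential([
    Flatten(input_shape=(1,) + env.observation_space.shape),
    Dense(32, activation="relu"),
    Dense(32, activation="relu"),
    Dense(nb_actions, activation="linear"),
])

dqn = DQNAgent(model=model, nb_actions=nb_actions,
               memory=SequentialMemory(limit=50000, window_length=1),
               policy=EpsGreedyQPolicy(eps=0.1),
               nb_steps_warmup=500, target_model_update=1e-2)
dqn.compile(Adam(lr=1e-3), metrics=["mae"])
dqn.fit(env, nb_steps=20000, visualize=False, verbose=1)
```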

In our case, connecting the gym environment to the traffic simulation software wasn't trivial. The talk therefore also includes a short note on using multiprocessing and blocking queues to give the reinforcement learning agent control over the simulation software.
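One way such a pattern can be sketched (again illustrative, not the talk's exact code): the simulator runs in its own process, and the environment's reset() and step() exchange messages with it through two blocking multiprocessing queues, so the agent only proceeds once the simulator has advanced. The functions start_simulation, observe and advance_simulation are hypothetical stubs standing in for the real simulator API.

```python
import multiprocessing as mp

import gym


def start_simulation():
    """Stub: launching the real traffic simulator would go here."""
    return {"queues": [0, 0, 0, 0], "t": 0}


def observe(sim):
    """Stub: reading the current traffic state from the simulator."""
    return list(sim["queues"])


def advance_simulation(sim, action):
    """Stub: running the simulator for one control interval."""
    sim["t"] += 1
    obs = observe(sim)
    reward = -sum(obs)
    done = sim["t"] >= 200
    return obs, reward, done


def simulation_worker(commands, results):
    """Drives the simulator in a separate process, one command at a time."""
    sim = None
    while True:
        command, payload = commands.get()        # blocks until the agent speaks
        if command == "reset":
            sim = start_simulation()
            results.put(observe(sim))
        elif command == "step":
            results.put(advance_simulation(sim, payload))
        elif command == "close":
            break


class SimulationBackedEnv(gym.Env):
    """Gym env whose reset()/step() are answered by the simulator process.
    (Action/observation spaces omitted for brevity.)"""

    def __init__(self):
        self.commands = mp.Queue()
        self.results = mp.Queue()
        self.worker = mp.Process(target=simulation_worker,
                                 args=(self.commands, self.results),
                                 daemon=True)
        self.worker.start()

    def reset(self):
        self.commands.put(("reset", None))
        return self.results.get()                # blocks until the sim is ready

    def step(self, action):
        self.commands.put(("step", action))
        obs, reward, done = self.results.get()   # blocks until the sim advanced
        return obs, reward, done, {}

    def close(self):
        self.commands.put(("close", None))
        self.worker.join()


if __name__ == "__main__":
    env = SimulationBackedEnv()
    obs = env.reset()
    obs, reward, done, _ = env.step(0)
    env.close()
```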
