A simulation of a set of traffic lights that uses reinforcement learning to improve the flow of traffic.


Traffic-Light-Simulation

  • Cameron Raymond - Computer Science, Queen's University
  • Hugh Corley - Applied Mathematics, Queen's University
  • Leonard Zhao - Biomedical Computing, Queen's University
  • Nicolas Wlodek - Cognitive Computing, Queen's University
  • Ross Hill - Software Design, Queen's University
  • Zane Little - Cognitive Computing, Queen's University

Q-Learning for Traffic Signal Control

[Demo GIF of the running simulation]
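
The controller learns a tabular Q-function over intersection states and signal actions, choosing actions with either an egreedy or a softmax policy (see the config options below) and updating the table from observed rewards. The following is a minimal sketch of that loop under assumed state, reward, and hyperparameter choices; it is an illustration, not this repo's implementation.

```python
# Minimal tabular Q-learning sketch with egreedy/softmax action selection.
# The state encoding, reward signal, and constants below are assumptions
# made for illustration, not values taken from this repository.
import math
import random
from collections import defaultdict

ALPHA = 0.1    # learning rate (assumed)
GAMMA = 0.9    # discount factor (assumed)
EPSILON = 0.1  # exploration rate for the egreedy policy (assumed)
TAU = 1.0      # temperature for the softmax policy (assumed)

# Q[state][action] -> estimated return; a state might encode queue lengths
# per approach plus the current light phase, and actions the next phase.
Q = defaultdict(lambda: defaultdict(float))

def choose_action(state, actions, policy="softmax"):
    """Pick a signal action with an egreedy or softmax policy."""
    if policy == "egreedy":
        if random.random() < EPSILON:
            return random.choice(actions)               # explore
        return max(actions, key=lambda a: Q[state][a])  # exploit
    # softmax: sample actions proportionally to exp(Q / TAU)
    weights = [math.exp(Q[state][a] / TAU) for a in actions]
    return random.choices(actions, weights=weights)[0]

def update(state, action, reward, next_state, next_actions):
    """Standard Q-learning update: Q += alpha * (r + gamma * max Q' - Q)."""
    best_next = max((Q[next_state][a] for a in next_actions), default=0.0)
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
```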

Performance Over Time

Loop Route

[Plot: Loop Softmax Over Time]

Normal Route

[Plot: Normal Route Over Time]

Dependencies

  • pip3 install -r requirements.txt

Config Values That You Can Customize

  1. Set the file that you'd like to save the Q-Table to in the config.
  2. Set the Q-Table file to load from (if you want to train from scratch, you'll have the option to do so from the command line).
  3. Set the number of years and days per year (the default is 1 year of 10 days).
  4. Set the policy that you'd like to use (egreedy or softmax).
  5. Set the environment dynamics in the config with ENV_CONSTANTS["ROUTE"] (either normal, loopy, or simpleLoopy); see the example config sketch after this list.
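
For orientation, a customized config might look roughly like the sketch below. Apart from ENV_CONSTANTS["ROUTE"], every key, default, and file name here is an assumed placeholder; the real names live in the repo's config module.

```python
# Illustrative config sketch; only ENV_CONSTANTS["ROUTE"] is named in this
# README, the remaining keys and values are assumptions for illustration.
ENV_CONSTANTS = {
    "ROUTE": "normal",      # "normal", "loopy", or "simpleLoopy"
    "NUM_YEARS": 1,         # how many years to simulate (default 1)...
    "DAYS_PER_YEAR": 10,    # ...with this many days per year (default 10)
}

Q_TABLE_SAVE_FILE = "q_table.json"  # where the learned Q-Table is written
Q_TABLE_LOAD_FILE = "q_table.json"  # table to resume from; training from
                                    # scratch can be chosen at the command line
POLICY = "softmax"                  # or "egreedy"
```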

To Run

  • Train/Plot - python3 Main.py
  • Visualize - python3 Visualizer.py (this will also train, but much more slowly, since it has to render each new state)
