

Flow

Flow is a computational framework for deep reinforcement learning (RL) and control experiments in traffic microsimulation.

See our website for more information on applying Flow to several mixed-autonomy traffic scenarios, along with additional results and videos.
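For orientation, the sketch below shows what a minimal Flow experiment might look like: a single-lane ring of human-driven vehicles simulated in SUMO. The module paths, class names, and argument names (`SumoParams`, `VehicleParams`, `LoopScenario`, `AccelEnv`, `Experiment`, etc.) are assumptions based on the style of Flow's introductory tutorials and may differ between Flow versions; treat this as illustrative rather than canonical.

```python
# Illustrative sketch only -- module paths and argument names follow Flow's
# introductory tutorials and may differ across Flow versions.
from flow.core.params import (SumoParams, EnvParams, NetParams,
                              InitialConfig, VehicleParams)
from flow.core.experiment import Experiment
from flow.controllers import IDMController, ContinuousRouter
from flow.scenarios.loop import LoopScenario, ADDITIONAL_NET_PARAMS
from flow.envs.loop.loop_accel import AccelEnv, ADDITIONAL_ENV_PARAMS

# Simulation backend (SUMO) settings: step size and whether to open the GUI.
sim_params = SumoParams(sim_step=0.1, render=True)

# 22 human-driven vehicles following the IDM car-following model on a ring.
vehicles = VehicleParams()
vehicles.add(
    veh_id="human",
    acceleration_controller=(IDMController, {}),
    routing_controller=(ContinuousRouter, {}),
    num_vehicles=22,
)

# Network, initial vehicle placement, and environment settings.
net_params = NetParams(additional_params=ADDITIONAL_NET_PARAMS.copy())
initial_config = InitialConfig(spacing="uniform")
env_params = EnvParams(additional_params=ADDITIONAL_ENV_PARAMS.copy())

# Assemble the scenario and environment, then run one rollout of 1500 steps.
scenario = LoopScenario(name="ring_example",
                        vehicles=vehicles,
                        net_params=net_params,
                        initial_config=initial_config)
env = AccelEnv(env_params, sim_params, scenario)
exp = Experiment(env)
exp.run(1, 1500)
```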

More information

Getting involved

We welcome your contributions.

Citing Flow

If you use Flow for academic research, you are highly encouraged to cite our paper:

C. Wu, A. Kreidieh, K. Parvate, E. Vinitsky, A. Bayen, "Flow: Architecture and Benchmarking for Reinforcement Learning in Traffic Control," CoRR, vol. abs/1710.05465, 2017. [Online]. Available: https://arxiv.org/abs/1710.05465
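For convenience, a BibTeX entry assembled from the reference above is given below; the entry key is arbitrary.

```bibtex
@article{wu2017flow,
  author  = {Wu, C. and Kreidieh, A. and Parvate, K. and Vinitsky, E. and Bayen, A.},
  title   = {Flow: Architecture and Benchmarking for Reinforcement Learning in Traffic Control},
  journal = {CoRR},
  volume  = {abs/1710.05465},
  year    = {2017},
  url     = {https://arxiv.org/abs/1710.05465}
}
```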

If you use the benchmarks, you are highly encouraged to cite our paper:

Vinitsky, E., Kreidieh, A., Le Flem, L., Kheterpal, N., Jang, K., Wu, F., ... & Bayen, A. M. (2018, October). Benchmarks for reinforcement learning in mixed-autonomy traffic. In Conference on Robot Learning (pp. 399-409).
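A corresponding BibTeX entry, assembled from the reference above, is sketched below; the entry key is arbitrary, and the co-authors elided in the citation ("...") are not reproduced here.

```bibtex
@inproceedings{vinitsky2018benchmarks,
  % Middle authors elided in the citation above are omitted here as well.
  author    = {Vinitsky, E. and Kreidieh, A. and Le Flem, L. and Kheterpal, N. and Jang, K. and Wu, F. and Bayen, A. M.},
  title     = {Benchmarks for Reinforcement Learning in Mixed-Autonomy Traffic},
  booktitle = {Conference on Robot Learning},
  pages     = {399--409},
  year      = {2018}
}
```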

Contributors

Cathy Wu, Eugene Vinitsky, Aboudy Kreidieh, Kanaad Parvate, Nishant Kheterpal, Kathy Jang, Fangyu Wu, Mahesh Murag, Kevin Chien, and Jonathan Lin. Alumni contributors include Leah Dickstein, Ananth Kuchibhotla, and Nathan Mandi. Flow is supported by the Mobile Sensing Lab at UC Berkeley and Amazon AWS Machine Learning research grants.