gym-collision-avoidance

Comparison of PA-CADRL and GA3C-CADRL in a 50-agent circular scenario

This is the code associated with the following publications:

PA-CADRL: Mohammad Bahrami Karkevandi, Samaneh Hosseini Semnani, "Multi-Agent Collision Avoidance with Provident Agents using Deep-Reinforcement Learning"[preprint]

In this paper/repo we make the agents provident: using the agents' relative velocities, we detect which agents are moving aggressively toward each other and penalize them. This repo adds that capability on top of the original code; nothing is removed. Additionally, the reward mechanism has been revised so that collision penalties take precedence over goal-achievement rewards.
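The two ideas above can be sketched roughly as follows. This is an illustrative, minimal sketch only, not the repo's actual implementation: the function names, the closing-speed threshold, and the reward magnitudes are all assumptions made for the example.

```python
import numpy as np

def aggression_penalty(pos_i, vel_i, pos_j, vel_j,
                       closing_speed_thresh=0.5, penalty=-0.1):
    """Hypothetical 'providence' penalty: penalize a pair of agents whose
    relative velocity shows them closing in on each other aggressively."""
    rel_pos = np.asarray(pos_j, dtype=float) - np.asarray(pos_i, dtype=float)
    rel_vel = np.asarray(vel_j, dtype=float) - np.asarray(vel_i, dtype=float)
    dist = np.linalg.norm(rel_pos)
    if dist < 1e-6:
        return penalty  # agents already overlapping
    # Closing speed: the rate at which the inter-agent distance shrinks.
    # Positive when the agents are moving toward each other.
    closing_speed = -np.dot(rel_pos, rel_vel) / dist
    return penalty if closing_speed > closing_speed_thresh else 0.0

def step_reward(collided, at_goal, aggr_pen):
    """Hypothetical reward ordering: the collision check comes first, so a
    colliding agent is penalized even if it also reaches its goal."""
    if collided:
        return -0.25
    if at_goal:
        return 1.0
    return aggr_pen
```

For example, two agents approaching head-on (`vel_i=(1,0)`, `vel_j=(-1,0)` with `j` ahead of `i`) have a positive closing speed and receive the penalty, while agents moving apart receive 0.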

Please note that this repo and the RL training repo are both forks of previous work by a different author. Please make sure to cite that work as well when required.

The repo contains the trained PA-CADRL policy introduced in the paper above.

Our work builds on the publications below:

Journal Version: M. Everett, Y. Chen, and J. P. How, "Collision Avoidance in Pedestrian-Rich Environments with Deep Reinforcement Learning", IEEE Access Vol. 9, 2021, pp. 10357-10377. 10.1109/ACCESS.2021.3050338, Arxiv PDF

Conference Version: M. Everett, Y. Chen, and J. P. How, "Motion Planning Among Dynamic, Decision-Making Agents with Deep Reinforcement Learning", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018. Arxiv PDF, Link to Video

This repo also contains the trained policy for the SA-CADRL paper (referred to as CADRL here) from the preceding paper: Y. Chen, M. Everett, M. Liu, and J. P. How. "Socially Aware Motion Planning with Deep Reinforcement Learning." IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Vancouver, BC, Canada, Sept. 2017. Arxiv PDF

If you're looking to train our PA-CADRL policy or the previous GA3C-CADRL policy, please see this repo instead.


About the Code

Please see the documentation!

If you find this code useful, please consider citing:

@inproceedings{Everett18_IROS,
  address = {Madrid, Spain},
  author = {Everett, Michael and Chen, Yu Fan and How, Jonathan P.},
  booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  date-modified = {2018-10-03 06:18:08 -0400},
  month = sep,
  title = {Motion Planning Among Dynamic, Decision-Making Agents with Deep Reinforcement Learning},
  year = {2018},
  url = {https://arxiv.org/pdf/1805.01956.pdf},
  bdsk-url-1 = {https://arxiv.org/pdf/1805.01956.pdf}
}

or

@article{everett2021collision,
  title={Collision avoidance in pedestrian-rich environments with deep reinforcement learning},
  author={Everett, Michael and Chen, Yu Fan and How, Jonathan P},
  journal={IEEE Access},
  volume={9},
  pages={10357--10377},
  year={2021},
  publisher={IEEE}
}
