Safe-RL-Team/topics-in-RL

A compilation of recent machine learning papers focused on safe reinforcement learning

🌟 This is a curated list of safe RL papers from 2017 to 2022, also available at https://safe-rl-team.github.io/topics-in-RL/. If you would like to contribute additional papers or update the list, please feel free to do so.

Our Journey of Reimplementing Safe RL Algorithms

Reimplementing state-of-the-art RL algorithms gives us a deeper understanding of their inner workings and a foundation for exploring novel approaches. Safe Reinforcement Learning is a rapidly developing field with immense potential for real-world applications.

During the course "Advanced Topics in Reinforcement Learning", we took on the challenge of reimplementing ideas from several recent safe RL papers.

Our findings and discussions are published as blog posts, and the accompanying code re-implementations are available on GitHub (https://github.com/Safe-RL-Team).

Join us on an exciting journey of advancing the field of Safe RL!

  1. Safe Reinforcement Learning via Curriculum Induction, Matteo Turchetta, Andrey Kolobov, Shital Shah, Andreas Krause, and Alekh Agarwal, NeurIPS 2020

    📚 Blog Marvin Sextro, Jonas Loos

  2. Safe Reinforcement Learning with Natural Language Constraints, Tsung-Yen Yang, Michael Hu, Yinlam Chow, Peter J. Ramadge, and Karthik Narasimhan, NeurIPS 2021

    📚 Blog Hongyou Zhou

  3. Adversarial Policies: Attacking Deep Reinforcement Learning, Adam Gleave, Michael Dennis, Cody Wild, Neel Kant, Sergey Levine, and Stuart Russell, ICLR 2020

    📚 Blog Lorenz Hufe, Jarek Liesen

  4. Reward Constrained Policy Optimization, Chen Tessler, Daniel J. Mankowitz, and Shie Mannor, ICLR 2019 (see the Lagrangian sketch after this list)

    📚 Blog Boris Meinardus, Tuan Anh Le

  5. Constrained Policy Optimization via Bayesian World Models, Yarden As, Ilnura Usmanova, Sebastian Curi, and Andreas Krause, ICLR 2022

    📚 Blog Vincent Meilinger

  6. Constrained Policy Optimization, Joshua Achiam, David Held, Aviv Tamar, and Pieter Abbeel, ICML 2017

    📚 Blog Thanh Cuong Le, Paul Hasenbusch

  7. Responsive Safety in Reinforcement Learning by PID Lagrangian Methods, Adam Stooke, Joshua Achiam, and Pieter Abbeel, ICML 2020

    📚 Blog Wenxi Huang

  8. There Is No Turning Back: A Self-Supervised Approach for Reversibility-Aware Reinforcement Learning, Nathan Grinsztajn, Johan Ferret, Olivier Pietquin, Philippe Preux, and Matthieu Geist, NeurIPS 2021

    📚 Blog Malik-Manel Hashim

  9. Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble, Gaon An, Seungyong Moon, Jang-Hyun Kim, and Hyun Oh Song, NeurIPS 2021

    📚 Blog Jonas Loos, Julian Dralle

  10. Learning Barrier Certificates: Towards Safe Reinforcement Learning with Zero Training-time Violations, Yuping Luo and Tengyu Ma, NeurIPS 2021

    📚 Blog Lars Chen, Jeremiah Flannery

  11. Teachable Reinforcement Learning via Advice Distillation, Olivia Watkins, Trevor Darrell, Pieter Abbeel, Jacob Andreas, and Abhishek Gupta, NeurIPS 2021

    📚 Blog Mihai Dumitrescu, Claire Sturgill

  12. Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings, Jesse Zhang, Brian Cheung, Chelsea Finn, Sergey Levine, and Dinesh Jayaraman, ICML 2020

    📚 Blog Maren Eberle

  13. Verifiable Reinforcement Learning via Policy Extraction, Osbert Bastani, Yewen Pu, and Armando Solar-Lezama, NeurIPS 2018

    📚 Blog Christoph Pröschel

  14. Time Discretization-Invariant Safe Action Repetition for Policy Gradient Methods, Seohong Park, Jaekyeom Kim, and Gunhee Kim, NeurIPS 2021

    📚 Blog Hristo Boyadzhiev

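Several of the papers above, most directly Reward Constrained Policy Optimization (4) and the PID Lagrangian methods (7), build on the same Lagrangian relaxation of a constrained MDP: maximize reward while an adaptive multiplier penalizes expected cost above a chosen limit. The toy sketch below is only meant to illustrate that shared primal-dual idea; the environment, hyperparameters, and REINFORCE-style update are our own illustrative assumptions and do not reproduce any of the listed algorithms. The multiplier grows whenever the rolled-out cost exceeds the limit and decays toward zero otherwise, which is the integral-style feedback loop that the PID Lagrangian paper augments with proportional and derivative terms.

```python
# Minimal sketch of a primal-dual (Lagrangian) update for constrained RL.
# Everything here (toy MDP, step sizes, cost limit) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 4, 2
cost_limit = 0.2            # allowed expected per-step cost (assumed)
lr_policy, lr_lambda = 0.1, 0.05

theta = np.zeros((n_states, n_actions))  # tabular softmax policy parameters
lam = 0.0                                # Lagrange multiplier for the cost constraint

def sample_episode(theta, length=50):
    """Roll out a toy random-transition MDP; action 1 earns more reward but is sometimes costly."""
    states, actions, rewards, costs = [], [], [], []
    s = 0
    for _ in range(length):
        probs = np.exp(theta[s]) / np.exp(theta[s]).sum()
        a = rng.choice(n_actions, p=probs)
        r = 1.0 if a == 1 else 0.5                              # risky action pays more...
        c = 1.0 if (a == 1 and rng.random() < 0.5) else 0.0     # ...but incurs cost half the time
        states.append(s); actions.append(a); rewards.append(r); costs.append(c)
        s = rng.integers(n_states)
    return states, actions, rewards, costs

for _ in range(500):
    states, actions, rewards, costs = sample_episode(theta)

    # Primal step: REINFORCE on the Lagrangian reward r - lambda * c (reward-to-go, mean baseline).
    lag = np.array(rewards) - lam * np.array(costs)
    returns = np.cumsum(lag[::-1])[::-1]
    adv = returns - returns.mean()
    for s, a, g in zip(states, actions, adv):
        probs = np.exp(theta[s]) / np.exp(theta[s]).sum()
        grad = -probs
        grad[a] += 1.0                      # grad of log softmax w.r.t. theta[s]
        theta[s] += lr_policy * g * grad

    # Dual step: increase lambda when the cost constraint is violated, keep it non-negative.
    lam = max(0.0, lam + lr_lambda * (np.mean(costs) - cost_limit))

print(f"lambda = {lam:.2f}, mean per-step cost = {np.mean(costs):.2f}")
```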
Pushing Boundaries and Prioritizing Safety in RL

By implementing and exploring ideas from state-of-the-art papers, we can push the boundaries of what is possible and pave the way for even more effective and robust safe RL algorithms.

So, let's dive in and make the world a safer place, one policy at a time!

References

  1. García, J. and Fernández, F., A Comprehensive Survey on Safe Reinforcement Learning, Journal of Machine Learning Research, 2015
  2. Ray, A., Achiam, J. and Amodei, D., Benchmarking Safe Exploration in Deep Reinforcement Learning, OpenAI, 2019
  3. Kumar, A. and Levine, S., Offline Reinforcement Learning: From Algorithms to Practical Challenges, NeurIPS Tutorial, 2020