Reinforcement Learning Course Project - IIT Bombay Fall 2018
Training (hopefully) safe agents in gridworlds
The Verifiably Safe Reinforcement Learning Framework
Safe Policy Optimization with Local Features
Code for the paper "Stability-guaranteed reinforcement learning for contact-rich manipulation", IEEE RA-L, 2020.
Code for the paper "Learning Deep Energy Shaping Policies for Stability-Guaranteed Manipulation", IEEE RA-L, 2021.
Code for the paper "Safe Model-Based Reinforcement Learning using Robust Control Barrier Functions": an implementation of SAC with Robust Control Barrier Functions (RCBFs) for safe reinforcement learning in two custom environments.
Safe Pontryagin Differentiable Programming (Safe PDP), a theoretical and algorithmic differentiable framework for solving a broad class of safety-critical learning and control tasks.
Code for the paper "Learning Stable Normalizing-Flow Control for Robotic Manipulation", IEEE ICRA, 2021.
Implementation of PPO-Lagrangian in PyTorch (a minimal sketch of the Lagrangian update appears after this list).
[Humanoids 2022] Learning Collision-free and Torque-limited Robot Trajectories based on Alternative Safe Behaviors
LAMBDA is a model-based reinforcement learning agent that uses Bayesian world models for safe policy optimization.
[IROS 2022] Model-free Neural Lyapunov Control (a sketch of the Lyapunov-decrease training objective appears after this list).
Safe Multi-Agent Isaac Gym benchmark for safe multi-agent reinforcement learning research.
Code for "Constrained Variational Policy Optimization for Safe Reinforcement Learning" (ICML 2022)
Source code for the paper "Optimal Energy System Scheduling Combining Mixed-Integer Programming and Deep Reinforcement Learning" (safe reinforcement learning for energy management).
Code for the paper "Model-based Safe Deep Reinforcement Learning via a Constrained Proximal Policy Optimization Algorithm", accepted at NeurIPS 2022.
Code for the L4DC 2022 paper "Joint Synthesis of Safety Certificate and Safe Control Policy Using Constrained Reinforcement Learning".
A Multiplicative Value Function for Safe and Efficient Reinforcement Learning. IROS 2023.
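For the PPO-Lagrangian entry above, the following is a minimal sketch of the Lagrangian-relaxation update commonly used in constrained policy optimization. It is not taken from the linked repository; all names (log_lambda, adv_r, adv_c, cost_limit) and hyperparameters are illustrative assumptions.

```python
import torch

# Hedged sketch of a PPO-Lagrangian update: the policy maximizes a clipped
# surrogate on a combined reward/cost advantage, while a learnable multiplier
# lambda is driven up whenever the average episode cost exceeds the limit.

log_lambda = torch.zeros(1, requires_grad=True)      # lambda = exp(log_lambda) >= 0
lambda_opt = torch.optim.Adam([log_lambda], lr=5e-2)

def ppo_lagrangian_policy_loss(ratio, adv_r, adv_c, clip_eps=0.2):
    """Clipped PPO objective on the Lagrangian-combined advantage."""
    lam = log_lambda.exp().detach()
    adv = (adv_r - lam * adv_c) / (1.0 + lam)        # one common normalization
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * adv
    return -torch.min(unclipped, clipped).mean()

def update_multiplier(mean_episode_cost, cost_limit):
    """Dual ascent: increase lambda when the cost constraint is violated."""
    lambda_opt.zero_grad()
    (-log_lambda.exp() * (mean_episode_cost - cost_limit)).backward()
    lambda_opt.step()
```

The point of the Lagrangian formulation is that the penalty weight adapts by gradient ascent on the constraint violation instead of being hand-tuned.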
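Similarly, for the Model-free Neural Lyapunov Control entry, here is a hedged, generic sketch of how a neural Lyapunov candidate and its decrease condition can be trained from transition data; the network structure and loss below are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class LyapunovNet(nn.Module):
    """Neural Lyapunov candidate with V(x) >= 0 and V(0) = 0 by construction."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        # V(x) = (phi(x) - phi(0))^2 guarantees non-negativity and V(0) = 0.
        return (self.phi(x) - self.phi(torch.zeros_like(x))).pow(2).sum(-1)

def lyapunov_decrease_loss(V, x, x_next, margin=1e-3):
    """Penalize transitions where V fails to decrease by at least `margin`."""
    return torch.relu(V(x_next) - V(x) + margin).mean()
```

Minimizing this loss over sampled transitions, jointly with policy training, encourages V to decrease along closed-loop trajectories, which serves as an empirical surrogate for the Lyapunov stability condition.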