This repo contains a novel implementation for learning the viability kernel of dynamical systems.
Reinforcement Learning Course Project - IIT Bombay Fall 2018
Code for the paper Learning Stable Normalizing-Flow Control for Robotic Manipulation, IEEE ICRA, 2021
Code for the paper Stability-guaranteed reinforcement learning for contact-rich manipulation, IEEE RA-L, 2020.
Code for the paper Learning Deep Energy Shaping Policies for Stability-Guaranteed Manipulation, IEEE RA-L, 2021
Feasibility Consistent Representation Learning for Safe Reinforcement Learning (ICML 2024)
Safe Policy Optimization with Local Features
Towards Safe Reinforcement Learning via Constraining Conditional Value at Risk (IJCAI 2022)
Code for L4DC 2022 paper: Joint Synthesis of Safety Certificate and Safe Control Policy Using Constrained Reinforcement Learning.
Safe Multi-Agent Robosuite benchmark for safe multi-agent reinforcement learning research.
[IROS '22] Model-free Neural Lyapunov Control
A Multiplicative Value Function for Safe and Efficient Reinforcement Learning. IROS 2023.
[Humanoids 2022] Learning Collision-free and Torque-limited Robot Trajectories based on Alternative Safe Behaviors
This repository has code for the paper "Model-based Safe Deep Reinforcement Learning via a Constrained Proximal Policy Optimization Algorithm" accepted at NeurIPS 2022.
Training (hopefully) safe agents in gridworlds
Repository containing the code for the paper "Safe Model-Based Reinforcement Learning using Robust Control Barrier Functions". Specifically, an implementation of SAC + Robust Control Barrier Functions (RCBFs) for safe reinforcement learning in two custom environments
Implementation of PPO Lagrangian in PyTorch
LAMBDA is a model-based reinforcement learning agent that uses Bayesian world models for safe policy optimization