[ICLR 2024] The official implementation of "Safe Offline Reinforcement Learning with Feasibility-Guided Diffusion Model"
OmniSafe is an infrastructural framework for accelerating SafeRL research (a rough quickstart sketch appears after this list).
Safe Multi-Agent Robosuite benchmark for safe multi-agent reinforcement learning research.
Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
Feasibility Consistent Representation Learning for Safe Reinforcement Learning (ICML 2024)
NeurIPS 2023: Safety-Gymnasium: A Unified Safe Reinforcement Learning Benchmark (a minimal interface sketch appears after this list).
Multi-Agent Constrained Policy Optimisation (MACPO; MAPPO-L).
ICLR 2024: SafeDreamer: Safe Reinforcement Learning with World Models
NeurIPS 2023: Safe Policy Optimization: A benchmark repository for safe reinforcement learning algorithms
This repo contains a novel implementation for learning the viability kernel of dynamical systems.
Open-source reinforcement learning environment for autonomous racing, featured as a conference paper at ICCV 2021 and as the official challenge tracks at both SL4AD@ICML2022 and AI4AD@IJCAI2022. These are the Learn-to-Race (L2R) core libraries.
A Multiplicative Value Function for Safe and Efficient Reinforcement Learning. IROS 2023.
Code for L4DC 2022 paper: Joint Synthesis of Safety Certificate and Safe Control Policy Using Constrained Reinforcement Learning.
This repository contains the code for the paper "Model-based Safe Deep Reinforcement Learning via a Constrained Proximal Policy Optimization Algorithm", accepted at NeurIPS 2022.
Source code for the paper "Optimal Energy System Scheduling Combining Mixed-Integer Programming and Deep Reinforcement Learning". Topics: safe reinforcement learning, energy management.
Code for "Constrained Variational Policy Optimization for Safe Reinforcement Learning" (ICML 2022)
Towards Safe Reinforcement Learning via Constraining Conditional Value at Risk (IJCAI 2022)
Safe Multi-Agent Isaac Gym benchmark for safe multi-agent reinforcement learning research.
[IROS '22] Model-free Neural Lyapunov Control
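Most of the benchmark repositories listed above (Safety-Gymnasium, Safe Multi-Agent Isaac Gym, Safe Policy Optimization) expose a per-step safety cost alongside the usual reward. The sketch below shows a minimal cost-tracking rollout assuming Safety-Gymnasium's Gymnasium-style API, where `step()` returns a separate `cost` term; the environment id and the exact return signature are assumptions to verify against the repo's README.

```python
# Minimal cost-tracking rollout; assumes Safety-Gymnasium's Gymnasium-style API
# in which env.step() returns (obs, reward, cost, terminated, truncated, info).
# The environment id below is illustrative -- check the repo's README.
import safety_gymnasium

env = safety_gymnasium.make("SafetyPointGoal1-v0")
obs, info = env.reset(seed=0)

episode_reward, episode_cost = 0.0, 0.0
for _ in range(1000):
    action = env.action_space.sample()  # stand-in for a learned safe policy
    obs, reward, cost, terminated, truncated, info = env.step(action)
    episode_reward += reward
    episode_cost += cost  # constraint signal a safe RL algorithm must keep bounded
    if terminated or truncated:
        obs, info = env.reset()

env.close()
print(f"return={episode_reward:.2f}  cumulative cost={episode_cost:.2f}")
```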
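On the training side, OmniSafe (listed above) wraps many of these algorithms behind a single agent interface. The following rough quickstart assumes OmniSafe's `omnisafe.Agent` entry point with a Lagrangian PPO baseline; the algorithm name and environment id are assumptions to confirm against OmniSafe's documentation.

```python
# Rough quickstart sketch for OmniSafe; assumes the omnisafe.Agent entry point.
# Algorithm name and environment id are assumptions -- confirm against
# OmniSafe's documentation before relying on them.
import omnisafe

env_id = "SafetyPointGoal1-v0"            # illustrative Safety-Gymnasium task
agent = omnisafe.Agent("PPOLag", env_id)  # PPO with a Lagrangian cost constraint
agent.learn()                             # trains and logs reward/cost curves
```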