Code to reproduce experiments from "User-Interactive Offline Reinforcement Learning" (ICLR 2023)
Implementation of CORL for Fetch and Unitree A1 tasks
Code for "Efficient Offline Policy Optimization with a Learned Model" (ICLR 2023)
Offline to Online RL: AWAC & IQL PyTorch Implementation
Code for Undergrad Final Year Project “Offline Risk-Averse Actor-Critic with Curriculum Learning”
Official code for paper: Conservative objective models are a special kind of contrastive divergence-based energy model
Need 4 Speed, FYP 2023-24 @ Monash.
Framework for offline reinforcement learning, with implementations of SCQL and SCQL+D
Package for recording Transitions in OpenAI Gym Environments.
Code accompanying the paper "On the Role of Discount Factor in Offline Reinforcement Learning" (ICML 2022)
Author's repository for GSM8K-AI-SubQ reasoning dataset
🧠 Learning World Value Functions without Exploration
Summarising research on offline RL in the federated setting.
Python library for solving reinforcement learning (RL) problems using generative models (e.g. Diffusion Models).
PyTorch Implementation of Offline Reinforcement Learning algorithms
Direct port of TD3_BC to JAX using Haiku and optax.
Code for NeurIPS 2023 paper Accountability in Offline Reinforcement Learning: Explaining Decisions with a Corpus of Examples
D2C(Data-driven Control Library) is a library for data-driven control based on reinforcement learning.
The Medkit-Learn(ing) Environment: Medical Decision Modelling through Simulation (NeurIPS 2021) by Alex J. Chan, Ioana Bica, Alihan Huyuk, Daniel Jarrett, and Mihaela van der Schaar.
Clean single-file implementation of offline RL algorithms in JAX