PyTorch implementation of BEAR in "Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction"
Implementation of CQL in "Conservative Q-Learning for Offline Reinforcement Learning", built on the BRAC family of algorithms.
Implementation of Fisher_BRC in "Offline Reinforcement Learning with Fisher Divergence Critic Regularization", built on the BRAC family of algorithms.
Code for the ICLR 2022 paper "Rethinking Goal-Conditioned Supervised Learning and Its Connection to Offline RL".
Implementation of Offline Reinforcement Learning in Gym Mini-Grid Environment 🔑
Minimal PyTorch implementation of "Decision Transformer: Reinforcement Learning via Sequence Modeling" for MuJoCo control tasks in OpenAI Gym.
[AAAI 2022] The official implementation of "DeepThermal: Combustion Optimization for Thermal Power Generating Units Using Offline Reinforcement Learning"
Official PyTorch implementation of "Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble" (NeurIPS'21)
Python interface for accessing the near real-world offline reinforcement learning (NeoRL) benchmark datasets
[ICML 2022] The official implementation of DWBC in "Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations"
Implementation of Robust Reinforcement Learning using Offline Data [NeurIPS'22]
[AAAI 2022] The official implementation of CPQ in "Constraints Penalized Q-learning for Safe Offline Reinforcement Learning"
Official implementation for "Let Offline RL Flow: Training Conservative Agents in the Latent Space of Normalizing Flows", NeurIPS 2022, Offline RL Workshop
Offline Reinforcement Learning Framework in JAX
Official implementation for "Anti-Exploration by Random Network Distillation", ICML 2023
Code for the NeurIPS 2022 paper "Robust Offline Reinforcement Learning via Conservative Smoothing"
Official code repo for the paper "Hybrid RL: Using Both Offline and Online Data Can Make RL Efficient"
Official implementation for "Q-Ensemble for Offline RL: Don't Scale the Ensemble, Scale the Batch Size", NeurIPS 2022, Offline RL Workshop
"S2P: State-conditioned Image Synthesis for Data Augmentation in Offline Reinforcement Learning" (NeurIPS 2022)
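Several repositories above implement conservative value regularizers such as CQL. As a rough sketch of the core idea only (not code from any listed repository, and with all names and shapes chosen for illustration), CQL's penalty pushes Q-values down on out-of-distribution actions via a logsumexp term while pushing them up on dataset actions:

```python
import torch

def cql_penalty(q_net, states, actions, num_random=10):
    """Sketch of CQL's conservative penalty for continuous actions:
    logsumexp of Q over sampled random actions, minus Q on dataset actions.
    `q_net(states, actions)` is assumed to return a (batch,) tensor of Q-values."""
    batch, act_dim = actions.shape
    # Sample uniform random actions in [-1, 1] for the logsumexp term.
    rand_actions = torch.rand(batch, num_random, act_dim) * 2 - 1
    s_rep = states.unsqueeze(1).expand(-1, num_random, -1)
    q_rand = q_net(
        s_rep.reshape(-1, states.shape[-1]),
        rand_actions.reshape(-1, act_dim),
    ).reshape(batch, num_random)
    # Penalize high Q on sampled (likely out-of-distribution) actions,
    # reward high Q on actions actually present in the offline dataset.
    q_data = q_net(states, actions)
    return (torch.logsumexp(q_rand, dim=1) - q_data).mean()
```

In the full algorithm this penalty is added, with a tradeoff weight, to the standard Bellman error of the critic; the sketch omits that loop and any importance-sampling refinements used in practice.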