StarCraft II Learning Environment
Updated Jul 23, 2024 - Python
JAX reimplementation of the DeepMind paper "Genie: Generative Interactive Environments"
Modular reinforcement learning library (on PyTorch and JAX) with support for NVIDIA Isaac Gym, Omniverse Isaac Gym and Isaac Lab
OpenAI Gym wrapper for the DeepMind Control Suite
📖 Paper: Deep Reinforcement Learning with Double Q-learning 🕹️
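The core idea of the Double Q-learning paper is to decouple action selection from action evaluation: the online network picks the greedy next action, while the target network scores it, which reduces the overestimation bias of vanilla DQN. A minimal sketch of that target computation (plain Python lists standing in for network outputs; function name and shapes are illustrative, not from any listed repo):

```python
def double_q_target(reward, discount, q_online_next, q_target_next, terminal=False):
    """Double DQN target: the online net selects the action,
    the target net evaluates it (illustrative sketch)."""
    if terminal:
        return reward
    # Action selection with the online network's Q-values...
    best_action = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    # ...evaluation with the target network's Q-values.
    return reward + discount * q_target_next[best_action]

# Online net prefers action 1; target net scores that action 0.5.
target = double_q_target(1.0, 0.9, [0.2, 0.7, 0.1], [0.4, 0.5, 0.3])  # 1.45
```

In vanilla DQN the same (target) network would both select and evaluate, so a single overestimated Q-value inflates the target; splitting the two roles dampens that effect.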
📖 Paper: Human-level control through deep reinforcement learning 🕹️
An implementation of DeepMind's Fast Reinforcement Learning paper, with additional features that generalize the algorithms
Gymnasium integration for the DeepMind Control (DMC) suite
Farama Gymnasium API Wrapper for the DeepMind Control Suite and DeepMind Robot Manipulation Tasks
I’ll be testing different Gemma models and sharing the results here and on my Hugging Face space. Stay tuned for updates!
NFNets and Adaptive Gradient Clipping for SGD implemented in PyTorch. Find explanation at tourdeml.github.io/blog/
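Adaptive Gradient Clipping (AGC) scales a gradient whenever its norm grows too large relative to the norm of the parameter it updates, rather than clipping against a fixed threshold. A pure-Python sketch for a single parameter vector (the repo applies this per-unit inside PyTorch optimizers; `clipping` and `eps` defaults here are illustrative assumptions):

```python
import math

def agc_clip(grad, param, clipping=0.01, eps=1e-3):
    """Rescale grad so ||grad|| / ||param|| never exceeds `clipping`
    (single-vector sketch of Adaptive Gradient Clipping)."""
    g_norm = math.sqrt(sum(g * g for g in grad))
    # eps guards against zero-norm parameters (e.g. freshly zero-initialized).
    p_norm = max(math.sqrt(sum(p * p for p in param)), eps)
    max_norm = clipping * p_norm
    if g_norm > max_norm:
        scale = max_norm / g_norm
        return [g * scale for g in grad]
    return grad

clipped = agc_clip([3.0, 4.0], [1.0, 0.0])  # ||g||=5 vs max 0.01 -> rescaled
```

Because the threshold tracks the parameter norm, large layers tolerate proportionally larger gradients, which is what lets NFNets train stably without batch normalization.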
MLP-Mixer: An all-MLP Architecture for Vision
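MLP-Mixer alternates two kinds of mixing on a patches-by-channels matrix: a token-mixing step that shares information across patches, and a channel-mixing step applied independently to each patch. A toy single-layer sketch with random weights (LayerNorm, GELU, and the two-layer MLPs of the paper are omitted; shapes and the 0.02 init scale are illustrative assumptions):

```python
import numpy as np

def mixer_layer(x, rng):
    """One simplified Mixer layer on a (patches, channels) array:
    token mixing across rows, channel mixing across columns,
    each with a residual connection."""
    patches, channels = x.shape
    w_tok = rng.standard_normal((patches, patches)) * 0.02
    w_ch = rng.standard_normal((channels, channels)) * 0.02
    x = x + (w_tok @ x)   # token mixing: combines values across patches
    x = x + (x @ w_ch)    # channel mixing: combines values within each patch
    return x

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))   # 16 patches, 8 channels
y = mixer_layer(x, rng)            # same shape, information now mixed both ways
```

The point of the architecture is that these two transposed matrix multiplications, stacked with nonlinearities, replace both self-attention and convolutions.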
This project implements an AI that learns the Snake game through Deep Q-Learning. It uses Fast Forward and CNN-based training to learn the optimal game strategy and visualises the learning process.
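Underneath the CNN in a project like this sits the standard Q-learning update, with the network approximating the Q-table. A tabular sketch of that update (hashable placeholder states instead of real Snake frames; all names and the learning-rate/discount defaults are illustrative):

```python
from collections import defaultdict

def q_update(q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """One Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[(next_state, a)] for a in actions)
    td_target = reward + gamma * best_next
    q[(state, action)] += alpha * (td_target - q[(state, action)])

q = defaultdict(float)                              # Q-values start at 0
q_update(q, "s0", "up", 1.0, "s1", ["up", "down"])  # reward for eating food
```

In the deep variant, the same temporal-difference target drives a gradient step on the CNN's weights instead of an in-place table update.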
Construction of controllers for the Shadow Hand in the MuJoCo environment using deep learning. Two different methods were used to create the controllers: (a) behavioral cloning, (b) deep reinforcement learning
Scalable distributed reinforcement learning agents on Kubernetes
The code for the famous DQN paper applied to Atari Breakout.
Applying DeepMind's MuZero algorithm to the cart-pole environment in Gym
Lernd is a ∂ILP (dILP) framework implementation based on DeepMind's paper "Learning Explanatory Rules from Noisy Data".