Examples for Applied Reinforcement Learning: Playing Doom with TF-Agents and PPO

In this repository, we provide the code for our tutorial on applied reinforcement learning. We use TensorFlow's TF-Agents library to build a neural-network agent that learns to play the video game Doom directly from pixels. The agent is trained with Proximal Policy Optimization (PPO).
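
The training scripts listed under Repository Contents follow the usual TF-Agents workflow: wrap the Doom environment, build convolutional actor and value networks over the pixel observations, and hand both to a PPO agent. As a rough orientation, a minimal sketch of that setup could look like the following; the `DoomEnvironment` constructor call, layer sizes, and hyperparameters are illustrative assumptions, and the scripts in this repository remain the authoritative versions.

```python
import tensorflow as tf
from tf_agents.agents.ppo import ppo_clip_agent
from tf_agents.environments import tf_py_environment
from tf_agents.networks import actor_distribution_network, value_network

# DoomEnvironment is the wrapper shipped in this repository (see "Repository Contents");
# we assume here that it can be constructed without arguments.
from doom.DoomEnvironment import DoomEnvironment

# Wrap the Python environment so it can be driven from TensorFlow.
train_env = tf_py_environment.TFPyEnvironment(DoomEnvironment())

# Convolutional actor and value networks operating on raw pixel observations.
# Depending on the observation dtype, a preprocessing layer that casts and
# normalizes the pixels may additionally be required.
conv_params = [(32, 8, 4), (64, 4, 2)]
actor_net = actor_distribution_network.ActorDistributionNetwork(
    train_env.observation_spec(),
    train_env.action_spec(),
    conv_layer_params=conv_params,
    fc_layer_params=(512,),
)
value_net = value_network.ValueNetwork(
    train_env.observation_spec(),
    conv_layer_params=conv_params,
    fc_layer_params=(512,),
)

# PPO agent with the clipped surrogate objective; hyperparameters are placeholders.
agent = ppo_clip_agent.PPOClipAgent(
    train_env.time_step_spec(),
    train_env.action_spec(),
    optimizer=tf.keras.optimizers.Adam(learning_rate=2.5e-4),
    actor_net=actor_net,
    value_net=value_net,
    num_epochs=10,
)
agent.initialize()
```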

For more details, have a look at our article.

Repository Contents

This repository contains the following main components:

  • ppo_train_eval_doom_simple.py: A minimal example of how to train an agent to play Doom with TF-Agents and PPO.
  • ppo_train_eval_doom_extended.py: A complete example of how to train an agent to play Doom with TF-Agents and PPO, including logging training metrics to TensorBoard and saving checkpoints.
  • doom/DoomEnvironment.py: An implementation of TF-Agents' PyEnvironment that runs a ViZDoom instance and maps actions and observations between the game and our agent (see the sketch after this list).
  • basic.cfg: Configuration for the basic Doom scenario, adapted to work with our setup.
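
To give an idea of what such an environment wrapper looks like, below is a minimal sketch of a TF-Agents PyEnvironment around ViZDoom. The class name, observation shape, and action encoding are illustrative assumptions made for this sketch; the actual implementation in doom/DoomEnvironment.py may differ in its details.

```python
import numpy as np
import vizdoom
from tf_agents.environments import py_environment
from tf_agents.specs import array_spec
from tf_agents.trajectories import time_step as ts


class MinimalDoomEnvironment(py_environment.PyEnvironment):
    """Barebones ViZDoom wrapper; doom/DoomEnvironment.py in this repo is more complete."""

    def __init__(self, config_path="basic.cfg"):
        super().__init__()
        self._game = vizdoom.DoomGame()
        self._game.load_config(config_path)
        self._game.init()

        num_actions = self._game.get_available_buttons_size()
        self._action_spec = array_spec.BoundedArraySpec(
            shape=(), dtype=np.int32, minimum=0, maximum=num_actions - 1, name="action")
        # The observation shape and dtype must match the screen format and resolution
        # configured in basic.cfg; the values below are placeholders.
        self._observation_spec = array_spec.BoundedArraySpec(
            shape=(240, 320, 3), dtype=np.uint8, minimum=0, maximum=255, name="observation")

    def action_spec(self):
        return self._action_spec

    def observation_spec(self):
        return self._observation_spec

    def _current_frame(self):
        state = self._game.get_state()
        if state is None:  # no frame is available once the episode has ended
            return np.zeros(self._observation_spec.shape, dtype=np.uint8)
        return np.asarray(state.screen_buffer, dtype=np.uint8)

    def _reset(self):
        self._game.new_episode()
        return ts.restart(self._current_frame())

    def _step(self, action):
        if self._game.is_episode_finished():
            return self.reset()
        # ViZDoom expects a list with one entry per available button.
        buttons = [int(i == action) for i in range(self._action_spec.maximum + 1)]
        reward = self._game.make_action(buttons)
        if self._game.is_episode_finished():
            return ts.termination(self._current_frame(), reward)
        return ts.transition(self._current_frame(), reward)
```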

Installation

Please refer to our instructions in the article.