
FOR.ai Reinforcement Learning Codebase

Modular codebase for training, testing, and visualizing reinforcement learning models.

Contributors: Bryan M. Li, David Tao, Alexander Cowen-Rivers, Siddhartha Rao Kamalakara, Nitarshan Rajkumar, Sourav Singh, Aidan N. Gomez

Features

Requirements

  • TensorFlow
  • OpenAI Gym
    • Atari: pip install 'gym[atari]'
  • FFmpeg (apt install ffmpeg on Linux or brew install ffmpeg on macOS)

Quick Start

# start training
python train.py --sys ... --hparams ... --output_dir ...
# run tensorboard
tensorboard --logdir ...
# test agent
python train.py --sys ... --hparams ... --output_dir ... --training False --render True
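
For example, a single run might look like the lines below; the flag values here (the --sys value, the "dqn" hparams name, and the output path) are hypothetical placeholders, so substitute the hparams sets actually defined under rl/hparams and your own paths.

# train a DQN agent (hypothetical flag values)
python train.py --sys local --hparams dqn --output_dir /tmp/rl/dqn_run
# monitor training
tensorboard --logdir /tmp/rl/dqn_run
# test the trained agent with rendering
python train.py --sys local --hparams dqn --output_dir /tmp/rl/dqn_run --training False --render True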

Hyper-parameters

Check init_flags() and defaults.py for the default hyper-parameters, and see hparams/dqn.py for examples of agent-specific hyper-parameters (a minimal sketch of this defaults/overrides pattern follows the flag list below).

  • hparams: Which hparams set to use, defined under rl/hparams.
  • sys: Which system environment to use.
  • env: Which RL environment to use.
  • output_dir: The directory for model checkpoints and TensorBoard summaries.
  • train_steps: Number of steps to train the agent.
  • test_episodes: Number of episodes to test the agent.
  • eval_episodes: Number of episodes to evaluate the agent.
  • training: Whether to train (True) or test (False) the agent.
  • copies: Number of independent training/testing runs to do.
  • render: Render game play.
  • record_video: Record game play.
  • num_workers: Number of workers.
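
As a rough illustration of how the defaults and agent-specific values relate, here is a minimal sketch in plain Python; the dict-based layout, function names, and values below are hypothetical and are not the repo's actual HParams API, which lives in defaults.py and rl/hparams.

# hypothetical sketch of shared defaults plus agent-specific overrides
def default_hparams():
    # settings shared by every agent
    return {
        "env": "CartPole-v1",
        "train_steps": 1_000_000,
        "eval_episodes": 10,
        "num_workers": 1,
    }

def dqn_hparams():
    # start from the defaults, then override what the agent needs
    hparams = default_hparams()
    hparams.update({
        "agent": "DQN",
        "learning_rate": 1e-3,
        "gamma": 0.99,
    })
    return hparams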

Contributing

We'd love to accept your contributions to this project. Please feel free to open an issue or submit a pull request as needed. Contact us at team@for.ai about potential collaborations or joining FOR.ai.
