
BERL.jl


Benchmarking Evolutionary Reinforcement Learning (pronounced "barrel"... sort of)

A collaborative project for aggregating benchmarks of evolutionary algorithms on common reinforcement learning benchmarks, based on Cambrian.jl.
Contribution guidelines are available here.

Features

Implemented

Algorithms:

  • NEAT
  • CGP

Environments:

  • Iris classification
  • XOR
  • Gym classic control
  • Atari on RAM

Future additions

Algorithms:

  • HyperNEAT (feedforward & recurrent ANNs)
  • CMA-ES (feedforward & recurrent ANNs)
  • population-based REINFORCE
  • AGRN
  • TPG
  • grammatical evolution

Environments:

  • MuJoCo
  • PyBullet
  • Mario

Other ideas to integrate

Customizable fitness:

  • sum of reward over an episode
  • novelty search?
  • MAP-Elites

CLI interaction

  • Parseable arguments

Non-Cambrian algorithms

  • Interaction through a simplified interface with the BERL environments

Run instructions

To run a selection of algorithms on BERL benchmarks:

  1. Toggle the algorithms and environments you want in the YAML config files.
  2. Run `run_berl()`
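As an illustration of step 1, a config file might look like the sketch below. This is a hypothetical layout; the actual key names and file structure used by BERL.jl's YAML files may differ.

```yaml
# Hypothetical BERL config sketch -- key names are assumptions, not the
# package's actual schema. Toggle entries on or off to select what runs.
algorithms:
  NEAT: true
  CGP: true
environments:
  xor: true
  iris: false
  gym: true
  atari: false
```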

To run a single algorithm and environment pair, you can instead use:

start_berl(algo_name::String, env_name::String; env_params...)

env_params holds environment-specific options; in particular, it carries the specific game name (such as "CartPole-v1") when env_name is "atari" or "gym".
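For example, a call pairing CGP with a Gym classic-control task might look like the sketch below. The keyword name `game` is an assumption for illustration; check the package source for the actual env_params keys.

```julia
# Hypothetical usage sketch, assuming BERL.jl is installed.
using BERL

# Run CGP on a Gym classic-control environment; the specific game name
# (here assumed to be passed as `game`) goes through env_params.
start_berl("CGP", "gym"; game="CartPole-v1")
```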
