Kernel Reinforcement Learning

Dependencies

  • Python 2 or 3
  • OpenAI Gym, version 0.11.0
  • SciPy
  • Matplotlib
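
The Python packages can typically be installed with pip, for example:

pip install gym==0.11.0 scipy matplotlib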

Available algorithms

  • Kernel Q-Learning with:

    • Continuous states / discrete actions
    • Continuous states and actions from ACC 2018
  • Kernel Normalized Advantage Functions in continuous action spaces from IROS 2018
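
Both algorithm families represent the value function non-parametrically as a kernel expansion over a dictionary of visited state-action points. The snippet below is only a minimal, illustrative sketch of that idea (a Gaussian kernel and a functional-gradient TD update); it is not the API or the exact update used by rlcore.py.

import numpy as np

class KernelQ(object):
    """Toy kernelized Q-function: Q(x) = sum_i alpha_i * k(x_i, x), where x
    is a concatenated (state, action) vector. Hypothetical class, not the
    implementation in this repository."""

    def __init__(self, bandwidth=1.0, lr=0.5, gamma=0.99):
        self.centers = []   # dictionary of stored (state, action) points
        self.alphas = []    # expansion coefficients
        self.bandwidth = bandwidth
        self.lr = lr
        self.gamma = gamma

    def _kernel(self, x, y):
        # Gaussian (RBF) kernel between two (state, action) points
        d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
        return np.exp(-np.dot(d, d) / (2.0 * self.bandwidth ** 2))

    def q(self, state, action):
        # state and action are 1-D arrays (wrap scalar actions in a list)
        x = np.concatenate([state, action])
        return sum(a * self._kernel(c, x) for c, a in zip(self.centers, self.alphas))

    def update(self, state, action, reward, next_state, next_actions):
        # TD target with a max over a finite set of candidate next actions
        best_next = max(self.q(next_state, a) for a in next_actions)
        td_error = reward + self.gamma * best_next - self.q(state, action)
        # Functional-gradient step: add a new kernel center whose weight is
        # the step size times the TD error
        self.centers.append(np.concatenate([state, action]))
        self.alphas.append(self.lr * td_error)

In practice the kernel dictionary is kept sparse (for example with a budget or a linear-independence test), otherwise it grows with every transition; the exact sparsification used in this repository may differ from this sketch.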

To run

Kernel Q-Learning on Pendulum with prioritized experience replay

python rlcore.py cfg/kq_pendulum_per.cfg

Kernel NAF on Continuous Mountain Car

python rlcore.py cfg/knaf_mcar.cfg

Other available configuration files:

  • Kernel Q-Learning for Continuous Mountain Car: cfg/kq_cont_mcar.cfg
  • Kernel Q-Learning for Pendulum: cfg/kq_pendulum.cfg
  • Kernel Q-Learning for discrete-action Cartpole: cfg/kq_cartpole.cfg
  • Kernel NAF for Pendulum: cfg/knaf_pendulum.cfg

Composing policies

The compose folder contains the code for composing two or more trained policies as described in the IROS 2018 paper.
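
For intuition, one simple way to combine independently trained value-based policies is to let each one propose an action and then execute whichever proposal its own Q-function scores highest. The sketch below illustrates only that generic idea; it is not the composition rule from the paper, which is implemented in the compose folder.

def compose_greedy(policies, q_functions, state):
    """Generic illustration of value-based composition: each trained policy
    proposes an action, and the proposal with the highest estimated Q-value
    is executed. Not necessarily the rule implemented in compose/."""
    proposals = [pi(state) for pi in policies]
    scored = [(q(state, a), a) for q, a in zip(q_functions, proposals)]
    return max(scored, key=lambda pair: pair[0])[1]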

Tuning parameters

To tune learning rates and other hyperparameters, adjust the corresponding values in the .cfg file.
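
Assuming the .cfg files use the INI-style layout read by Python's standard configparser (check the files under cfg/ to confirm), the parameters can also be listed or overridden programmatically; the section and option names below are placeholders, not the repository's actual keys.

import configparser

# Load one of the provided configuration files
config = configparser.ConfigParser()
config.read("cfg/kq_pendulum_per.cfg")

# List every section and option to see which parameters can be tuned
for section in config.sections():
    for key, value in config[section].items():
        print("{}.{} = {}".format(section, key, value))

# Hypothetical override -- 'agent' and 'learning_rate' are placeholder names;
# use the section/option names that actually appear in the file.
# config.set("agent", "learning_rate", "0.01")
# with open("cfg/my_experiment.cfg", "w") as f:
#     config.write(f)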

Contributors

This software was created by Ekaterina Tolstaya, Ethan Stump, and Garrett Warnell.