

PILCO Software Package V0.9 (2013-07-04)

I. Introduction

This software package implements the PILCO RL policy search framework in MATLAB. The learning framework applies to MDPs with continuous states and controls/actions and is based on probabilistic modeling of the dynamics and approximate Bayesian inference for policy evaluation and improvement.
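In pseudocode, the learning loop sketched above looks roughly as follows (MATLAB-style; the function names here are illustrative placeholders, not this package's actual API):

data = rollout(randomPolicy);                  % collect initial data with a random policy
for episode = 1:numEpisodes
  dynmodel = trainGP(data);                    % learn a probabilistic (GP) dynamics model
  policy = optimizePolicy(policy, dynmodel);   % evaluate and improve the policy by propagating
                                               % state distributions through the learned model
  data = [data; rollout(policy)];              % apply the improved policy and record new data
end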

II. Quick Start

We have already implemented some scenarios, which can be found in /scenarios.

If you want to get started immediately, go to /scenarios/cartPole

and execute

cartPole_learn
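
For example, in MATLAB (assuming the current working directory is the repository root):

cd scenarios/cartPole     % switch to the cart-pole scenario
cartPole_learn            % run the cart-pole learning script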

III. Documentation

Detailed documentation can be found in

/doc/doc.pdf

which also includes a description of how to set up your own scenario (there are only a few files that are scenario specific).

IV. Contact

If you find bugs, have questions, or want to give us feedback, please send an email to m.deisenroth@imperial.ac.uk

V. References

M.P. Deisenroth and C.E. Rasmussen: PILCO: A Model-Based and Data-Efficient Approach to Policy Search (ICML 2011)

M.P. Deisenroth: Efficient Reinforcement Learning Using Gaussian Processes (KIT Scientific Publishing, 2010)

Marc Deisenroth, Andrew McHutchon, Joe Hall, Carl Edward Rasmussen
