Working directory for my work on model-based reinforcement learning for novel robots. Best suited to robots with high test cost and difficult-to-model dynamics. Contact: firstname.lastname@example.org Project website: https://sites.google.com/berkeley.edu/mbrl-quadrotor/
This directory works towards implementing many simulated model-based approaches on real robots. For the current state of the art in simulation, see this work from Prof. Sergey Levine's group: Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models.
Future implementations work towards controlled flight of the ionocraft (with a recent publication in Robotics and Automation Letters) and, further out, transfer learning of dynamics on the Crazyflie 2.0 platform.
Some potentially notable implementations include:
- a probabilistic neural network in PyTorch
- a Gaussian log-likelihood loss function for said PyTorch probabilistic neural network
- a random-shooting MPC implementation with a customizable cost / reward function
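The first two bullets can be sketched roughly as follows. This is a minimal illustration, not the repo's actual code: the network outputs a mean and log-variance per state dimension, and is trained with the Gaussian negative log-likelihood of the observed state change. All layer sizes and the clamping bounds are illustrative assumptions.

```python
# Sketch of a probabilistic dynamics model in PyTorch with a Gaussian
# negative log-likelihood loss (illustrative, not the repo's code).
import torch
import torch.nn as nn

class ProbabilisticDynamicsModel(nn.Module):
    """Predicts a Gaussian distribution over the next-state delta."""

    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
        )
        self.mean_head = nn.Linear(hidden, state_dim)
        self.logvar_head = nn.Linear(hidden, state_dim)

    def forward(self, state, action):
        h = self.net(torch.cat([state, action], dim=-1))
        mean = self.mean_head(h)
        # Clamp log-variance for numerical stability.
        logvar = self.logvar_head(h).clamp(-10.0, 2.0)
        return mean, logvar

def gaussian_nll(mean, logvar, target):
    """Negative log-likelihood of `target` under N(mean, exp(logvar))."""
    inv_var = torch.exp(-logvar)
    return (0.5 * (inv_var * (target - mean) ** 2 + logvar)).mean()

# Usage on a batch of random transitions (state_dim=4, action_dim=2).
model = ProbabilisticDynamicsModel(state_dim=4, action_dim=2)
s, a, s_next = torch.randn(32, 4), torch.randn(32, 2), torch.randn(32, 4)
mean, logvar = model(s, a)
loss = gaussian_nll(mean, logvar, s_next - s)
loss.backward()
```

Predicting the state delta rather than the next state directly is a common choice in this literature, since deltas tend to be smaller and easier to normalize.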
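The random-shooting MPC bullet amounts to: sample many random action sequences, roll each through the dynamics model, score them with the user-supplied cost function, and execute the first action of the cheapest sequence. A hedged sketch, with a toy double-integrator dynamics and quadratic cost standing in for the learned model and the repo's cost functions:

```python
# Sketch of random-shooting MPC with a pluggable cost function.
# The dynamics and cost below are illustrative placeholders.
import numpy as np

def random_shooting_mpc(state, dynamics_fn, cost_fn, action_dim,
                        horizon=10, n_samples=500,
                        action_low=-1.0, action_high=1.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    # Sample candidate action sequences uniformly at random.
    actions = rng.uniform(action_low, action_high,
                          size=(n_samples, horizon, action_dim))
    costs = np.zeros(n_samples)
    states = np.tile(state, (n_samples, 1))
    # Roll every candidate sequence forward, accumulating cost.
    for t in range(horizon):
        states = dynamics_fn(states, actions[:, t])
        costs += cost_fn(states, actions[:, t])
    best = np.argmin(costs)
    return actions[best, 0]  # first action of the cheapest sequence

# Toy double-integrator dynamics (dt = 0.1) and quadratic cost.
def dynamics_fn(s, a):
    pos, vel = s[:, :1], s[:, 1:]
    return np.concatenate([pos + 0.1 * vel, vel + 0.1 * a], axis=1)

def cost_fn(s, a):
    return (s ** 2).sum(axis=1) + 0.01 * (a ** 2).sum(axis=1)

a0 = random_shooting_mpc(np.array([1.0, 0.0]), dynamics_fn, cost_fn,
                         action_dim=1)
```

In practice `dynamics_fn` would be the learned probabilistic network, and batching all candidate rollouts as one array keeps the planner fast enough for real-time control.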
For the current state of the art, see K. Chua et al. This paper covers the design choices between deterministic and probabilistic neural networks for learning, along with a discussion of ensemble learning. It then covers a new MPC technique, coined Trajectory Sampling, needed for systems with higher-dimensional state. Especially for our goal of implementing this on real robots, other recent papers that cover their own implementations can prove more useful, such as Bansal et al. learning trajectories on the Crazyflie or Nagabandi et al. with millirobots. A more theoretical framework for model-based learning is the PILCO algorithm, and .... will update with what I feel is relevant.
For some general reinforcement learning references, see the lecture series by DeepMind's David Silver, the Deep RL Bootcamp 2017 lectures from various researchers, the Deep Learning Book from Goodfellow & MIT, or Berkeley's own Deep RL Course.