
Working directory for my work on model-based reinforcement learning for novel robots. It is best suited to robots with a high cost per test and dynamics that are difficult to model. There is ongoing work using this library, such as attempting to control the Ionocraft with model-based RL.

Contact:
First paper website:

Note that this repo is under very active development; please reach out if you have any questions about the accuracy of this readme.

This directory works towards implementations of many simulated model-based approaches on real robots. For the current state of the art in simulation, see this work from Prof. Sergey Levine's group: Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models.

Ongoing work targets controlled flight of the Ionocraft, with a recent publication in Robotics and Automation Letters, and, in the future, transfer learning of dynamics on the Crazyflie 2.0 platform.

Some potentially notable implementations include:

  • a probabilistic neural network in PyTorch
  • a Gaussian loss function for said PyTorch probabilistic neural network
  • a random shooting MPC implementation with a customizable cost / reward function (see cousin repo:
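The Gaussian loss mentioned above trains a network that outputs both a mean and a log-variance per state dimension. A minimal sketch of the underlying math in NumPy (the repo's actual loss is implemented in PyTorch; the function name here is illustrative):

```python
import numpy as np

def gaussian_nll(mean, log_var, target):
    """Negative log-likelihood of target under a diagonal Gaussian.

    mean, log_var, target: arrays of shape (batch, state_dim).
    Predicting log-variance keeps the variance positive without
    constraints and is numerically stable.
    """
    inv_var = np.exp(-log_var)
    # Per-element NLL up to a constant: 0.5 * [(x - mu)^2 / var + log var]
    nll = 0.5 * ((target - mean) ** 2 * inv_var + log_var)
    return nll.sum(axis=-1).mean()

# A perfect mean prediction with unit variance gives zero loss
print(gaussian_nll(np.zeros((4, 3)), np.zeros((4, 3)), np.zeros((4, 3))))  # 0.0
```

Minimizing this instead of MSE lets the network report its own uncertainty, which the probabilistic (P/PE) model variants rely on.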

Usage is generally of the following form, with Hydra enabling more options:

$ python learn/ robot=iono

Main Scripts:

  • learn/ is for training dynamics models (P, PE, D, DE) on experimental data. The training process uses Hydra to allow easy configuration of which states are used and how the predictions are formatted.
  • learn/ is a script that runs MBRL with an MPC on a simulated environment.
  • learn/ is for generating PID parameters using a dynamics model as a simulation environment. This will eventually extend beyond PID control; see the controllers directory learn/control. I am working to integrate opto.
  • learn/ is a script for viewing different types of predictions (under improvement).
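The random shooting MPC listed above samples many candidate action sequences, rolls each through the learned dynamics model, and executes the first action of the lowest-cost sequence. A minimal NumPy sketch under assumed interfaces (`dynamics` and `cost` are placeholders for the learned model and the customizable cost function):

```python
import numpy as np

def random_shooting_mpc(state, dynamics, cost, action_dim, horizon=10,
                        n_samples=1000, low=-1.0, high=1.0, rng=None):
    """Return the first action of the best sampled action sequence."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Sample candidate sequences uniformly: (n_samples, horizon, action_dim)
    actions = rng.uniform(low, high, size=(n_samples, horizon, action_dim))
    total_cost = np.zeros(n_samples)
    states = np.repeat(state[None, :], n_samples, axis=0)
    for t in range(horizon):
        states = dynamics(states, actions[:, t])   # batched one-step prediction
        total_cost += cost(states, actions[:, t])  # accumulated trajectory cost
    return actions[np.argmin(total_cost), 0]

# Toy example: 1-D integrator dynamics, cost = squared distance to origin
def dynamics(s, a):
    return s + 0.1 * a

def cost(s, a):
    return (s ** 2).sum(axis=-1)

action = random_shooting_mpc(np.array([1.0]), dynamics, cost, action_dim=1)
```

In the toy example the chosen first action pushes the state toward the origin. In practice the per-step cost is the piece configured through Hydra, and the dynamics call is a batched forward pass of the trained model.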

In Development:

Related Code for Experiments:

CF Firmware:

ROS code:
