Our MAV Stack

Zachary Taylor edited this page Oct 11, 2018 · 7 revisions

Prerequisites

To get up and running with our pipeline we assume you have:

  • Hardware similar to that outlined in the Basic MAV Hardware page
  • All your sensors' intrinsics and extrinsics well calibrated. See our calibration page
  • Excellent timesync (mentioned on every page of the wiki, because nothing works without it)
  • Solid knowledge of C++, ROS, PX4, Sensor calibration, SLAM, MPC, and general MAV systems
  • A few hundred hours for debugging, as nothing will be plug-and-play

State Estimation

To do anything useful with an MAV it generally needs to know its 6-DoF pose. We make use of visual-inertial sensing, and our recommended solution for pose estimation is Rovio.
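Concretely, a 6-DoF pose is a translation plus a rotation, and chaining body, sensor, and world frames means composing such poses. A minimal sketch in plain Python (function names are illustrative, not from any of the packages below), using (w, x, y, z) quaternions:

```python
import math

def quat_mul(q1, q2):
    # Hamilton product of two (w, x, y, z) quaternions.
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def quat_rotate(q, v):
    # Rotate vector v by unit quaternion q: q * (0, v) * q^-1.
    w, x, y, z = q
    return quat_mul(quat_mul(q, (0.0, *v)), (w, -x, -y, -z))[1:]

def compose(pose_a, pose_b):
    # Each pose is (position, orientation); returns pose_a followed by pose_b.
    (pa, qa), (pb, qb) = pose_a, pose_b
    p = tuple(pa[i] + v for i, v in enumerate(quat_rotate(qa, pb)))
    return (p, quat_mul(qa, qb))
```

For example, composing a 90° yaw with a unit step along x moves the body one unit along world y, which is exactly the kind of frame bookkeeping TF and the estimators below do for you.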

Pros

  • Very lightweight, leaving CPU free for other tasks
  • Robust to motion blur, dynamic scenes and poor illumination/texture
  • Simple fusion of an additional pose sensor

Cons

  • Very sensitive to poor timesync
  • No loop-closure
  • Can be difficult to tune

Other solutions:

  • ROS VRPN Client: A ROS interface for a motion capture system. Onboard odometry estimation is hard; don't do it if you don't have to.
  • Rovioli: Rovio front-end with loop closure via Maplab. More CPU heavy, but also provides some additional features such as automatic Rovio health monitoring.
  • MSCKF-VIO: EKF approach that uses stereo data and a decoupled feature tracker.
  • VINS-Mono: Pose-graph-based visual-inertial estimator that offers loop closure, time-offset estimation, and a few other nice features. Requires some movement to initialize and uses more CPU than the above approaches, but is generally considered the most accurate of the listed VI approaches.
  • ORB-SLAM2: Very accurate camera-only pose-graph approach. A bit of a CPU hog that gets lost when turning fast. Great for offline processing, but I don't ever want to fly with it in the loop.

Lag Compensation

State estimation isn't free and delays add up. The poses given by the above estimators can be over 100 ms out of date, which is more than enough to cause constructive oscillations and put your system into a wall. Luckily, doubly integrating the IMU can be trusted over this scale of time. In these cases Odom Predict can be used to give low-latency estimates.
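As a sketch of what such a predictor does (not the actual Odom Predict implementation): take the last estimator state and doubly integrate the IMU samples received since. This toy version assumes the accelerations are already gravity-compensated and rotated into the world frame, which the real node handles via the estimated attitude:

```python
def predict_forward(pos, vel, accels, dt):
    """Propagate the last estimator state through buffered IMU samples.

    pos, vel: last estimator output (world frame, m and m/s).
    accels:   world-frame, gravity-compensated accelerations (m/s^2)
              received since that estimate, one per IMU tick of length dt.
    """
    for a in accels:
        vel = [v + ai * dt for v, ai in zip(vel, a)]
        pos = [p + v * dt for p, v in zip(pos, vel)]  # semi-implicit Euler
    return pos, vel
```

With 100 ms of 1 kHz IMU data the loop runs 100 cheap steps, so the prediction costs essentially nothing compared to the estimator itself.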

However, sometimes the estimator doesn't provide odometry estimates, the calibration is unknown, or you need to integrate a new IMU (the Skybotix VI-Sensor bundles IMU messages into batches of 10, making it unsuited for low-latency estimates). Here you need to go beyond prediction and do sensor fusion. The go-to tool for this is Multi-Sensor Fusion.
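The core of such a fusion filter, reduced to one scalar dimension (a toy sketch, not the MSF API): predict with the high-rate sensor, then correct with each (possibly delayed) pose measurement, weighted by its uncertainty:

```python
def kf_predict(x, P, u, dt, Q):
    # Propagate a scalar state with input u; uncertainty grows by Q.
    return x + u * dt, P + Q

def kf_update(x, P, z, R):
    # Fuse the prediction with a measurement z of variance R.
    K = P / (P + R)                      # Kalman gain
    return x + K * (z - x), (1.0 - K) * P
```

When prediction and measurement are equally trusted (P == R) the update splits the difference, and the variance halves; MSF does the same thing over the full state, with buffering to apply delayed measurements at the right point in time.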

Control

While many autopilots provide their own internal position controllers, we abstract this level of control to the onboard PC. This allows the use of a more powerful model predictive control (MPC) approach, given in MAV Control RW, which includes disturbance rejection to allow for offset-free tracking in the presence of external disturbances such as wind. The approach takes in the system's odometry and a desired trajectory, and outputs the desired attitude and thrust. For more information on how to obtain the parameters needed for control, see the Controller Tuning page.
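The final step of any such controller, mapping a desired world-frame acceleration to an attitude and collective thrust command, can be sketched as follows. This is a simplified geometric mapping, not the MPC in MAV Control RW, which instead solves a constrained optimization over a whole trajectory:

```python
import math

def accel_to_attitude_thrust(a_des, yaw=0.0, g=9.81, mass=1.0):
    # Map a desired world-frame acceleration (m/s^2) to roll, pitch (rad)
    # and collective thrust (N). The thrust axis must point along the
    # desired acceleration plus gravity compensation.
    ax, ay, az = a_des
    thrust = mass * math.sqrt(ax**2 + ay**2 + (az + g)**2)
    # Rotate horizontal accelerations into the body-yaw frame.
    axb = ax * math.cos(yaw) + ay * math.sin(yaw)
    ayb = -ax * math.sin(yaw) + ay * math.cos(yaw)
    pitch = math.atan2(axb, az + g)
    roll = math.atan2(-ayb, math.hypot(axb, az + g))
    return roll, pitch, thrust
```

In hover (zero desired acceleration) this reduces to level attitude and thrust equal to the vehicle's weight, which is a useful sanity check when tuning.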

Mapping

To build accurate 3D maps of the surrounding area we use Voxblox.

Pros

  • Designed specifically to work for onboard MAV applications
  • No GPU requirements
  • Explicit freespace mapping and automatic ESDF generation for easy integration with planners
  • Tight ROS integration only needing a TF tree and pointcloud inputs
  • Realtime on MAV hardware, with CPU left for other tasks
  • Incremental mesh transmission with visualization in RVIZ

Cons

  • Big blocky voxels
  • Cubic scaling with voxel size
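To make these points concrete, the heart of a TSDF mapper like Voxblox is a per-ray voxel update: march from the sensor to the measured point, and in each voxel fold the truncated signed distance to the surface into a running weighted average. A toy dict-based sketch with made-up defaults, not Voxblox's actual block-hashed implementation:

```python
import math

def tsdf_update(grid, origin, point, voxel_size=0.2, trunc=0.6):
    # Integrate one ray (sensor origin -> measured point) into a
    # dict-based TSDF grid. Each voxel stores (tsdf, weight); the
    # signed distance to the surface is truncated to +/- trunc.
    d = [p - o for p, o in zip(point, origin)]
    length = math.sqrt(sum(c * c for c in d))
    step = voxel_size / 2.0
    for i in range(int(round((length + trunc) / step)) + 1):
        t = i * step
        x = [o + c / length * t for o, c in zip(origin, d)]
        key = tuple(int(math.floor(c / voxel_size)) for c in x)
        sdf = max(-trunc, min(trunc, length - t))
        tsdf, w = grid.get(key, (0.0, 0.0))
        grid[key] = ((tsdf * w + sdf) / (w + 1.0), w + 1.0)  # running mean
    return grid
```

Voxels in front of the surface end up positive (explicit free space, which is what planners need), voxels at the surface near zero, and voxels just behind it negative. The cubic scaling in the cons list falls straight out of this loop: halving the voxel size multiplies the number of updated voxels per ray roughly eightfold.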

Other solutions:

  • Volumetric Mapping: A ROS node wrapping OctoMap, a multi-resolution 3D probabilistic map.
  • ElasticFusion: Needs a powerful desktop to hit real-time, but a damn pretty output. Uses surfels and so gives no direct measure of a volume's occupancy likelihood. It also generates its own state estimate.
  • InfiniTAM: A TSDF mapping framework that supports small voxels. Again, no notion of free space is maintained.

Simulation

To verify approaches and test without fear of damaging the system, we make use of the RotorS simulator. Note that Voxblox also has a ground-truth TSDF/ESDF simulator for environments composed of simple geometric shapes.
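The idea behind such a ground-truth simulator is that primitives like spheres have closed-form signed distance functions, so the TSDF of a scene can be evaluated exactly at any point and compared against the mapper's output. A sketch of that idea (not Voxblox's actual simulation API):

```python
import math

def sphere_sdf(p, center, radius):
    # Exact signed distance to a sphere: negative inside, positive outside.
    return math.dist(p, center) - radius

def ground_truth_tsdf(p, spheres, trunc=0.6):
    # TSDF value at p for a scene of spheres: distance to the nearest
    # surface, truncated to the band used by the mapper under test.
    sdf = min(sphere_sdf(p, c, r) for c, r in spheres)
    return max(-trunc, min(trunc, sdf))
```

Querying this on the same voxel grid as the mapper gives an error metric per voxel, with no need for a full physics simulation.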