
RobotVision

The goal of RobotVision is to simulate control, path planning, localization, and mapping of a non-holonomic robot in a 2D world. The topics explored here are well known in the robotics industry, but I figured this was a good opportunity to explore them myself in more detail. Here, I attempt to provide the mathematical framework behind RobotVision.

The project was built from scratch using Python's Tkinter library. I found it the most versatile Python library for creating animations, and it very much exceeded expectations.

As a side note, I adopt Prof. Gabriele D'Eleuterio's math symbol scheme for consistency and stylistic elegance.

What am I?

I am a robot made of six rectangles: one body, one head, and four wheels. This is what I look like.

*Figure: Bot*

I can move forward and rotate about a point that is not my centroid, but I cannot reverse. With these maneuvers, I can traverse the entire 2D world. But I can't do it on my own; I only understand how to follow dark-coloured markers on the ground.

I am equipped with a LIDAR sensor that can measure my distance and relative angle to objects, as well as a colour sensor that tells me how close I am to the dark markers.

Let's help the robot achieve control, path planning, localization and mapping.

How do I move?

Defining the Robot's Model

The vehicle has three maneuvers: rotate clockwise, rotate counter-clockwise, and drive straight. Thus, the most logical control input is a vector with velocity, $v$, and angular velocity, $\omega$.

The trick to achieving angular control when the vehicle's world is measured in pixels lies in the rotation matrix. The Tkinter grid is oriented such that positive x is to the right and positive y is down, whereas the conventional frame has positive y up. Thus, the conversion to the pixel reference frame comes down to a vertical flip. The CW and CCW 2D rotation matrices can be deduced as:

$$\mathbf{C}_{CW} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}, \qquad \mathbf{C}_{CCW} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}$$

These rotation matrices can be applied to each of the "six rectangles" that the robot is composed of to achieve rotation! Note that although the math from here on will ignore the discrepancy from the vertical flip, we must be mindful to use the negative y-direction in all of our equations when actually programming this.
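As a concrete illustration, here is a minimal sketch of how one rectangle's corners might be rotated about a pivot in Tkinter's pixel frame (the function and variable names are hypothetical, not taken from the RobotVision source):

```python
import math

def rotate_point(x, y, cx, cy, theta):
    """Rotate (x, y) about the pivot (cx, cy) by theta radians.

    The signs of the sin terms are flipped relative to the textbook CCW
    rotation matrix, because Tkinter's y-axis points down.
    """
    dx, dy = x - cx, y - cy
    return (cx + dx * math.cos(theta) + dy * math.sin(theta),
            cy - dx * math.sin(theta) + dy * math.cos(theta))

# Rotate one of the six rectangles (its four corners) about a pivot
# that is not the centroid, e.g. a point on the rear axle.
corners = [(10, 10), (40, 10), (40, 20), (10, 20)]
pivot = (25, 25)
rotated = [rotate_point(x, y, *pivot, math.radians(5)) for x, y in corners]
```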

PID Control

If we want to set up a PID controller, we need two things: a variable to control and a defined path for the robot to follow. Since the velocity of our robot is constant, the only variable that needs control is our angular velocity. We can define the path for the bot to follow by drawing on the Tkinter canvas itself, and the path can be stored as an array of points.

There is one problem that we need to overcome: how do we know when to command the robot to turn clockwise or counterclockwise? In reality, the robot has a colour sensor, so it knows whether to rotate right or left depending on where it detects the dark colour in its field of view. Doing this in simulation is very different, since we don't actually use sensor readings. Here we make use of the cross product!

*Figure: PID cross product*

At each iteration of our PID control loop, we find the point on the path that is the shortest distance away from the robot, and determine the vector that goes through this point and is tangent to the path. This is our $\mathbf{b}$ vector. Then we find the vector that describes the heading of the robot, and this is our $\mathbf{a}$ vector. To enable the use of the cross product, we add a third dimension to our vectors. If you can visualize the right-hand rule, notice that computing $(\mathbf{a} \times \mathbf{b})_z$ will always be negative if the robot is to the left of the path, positive if it is to the right of the path, and zero if it is perfectly aligned with the path.
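A minimal sketch of this sign test (names are hypothetical; the sign convention follows the description above):

```python
import numpy as np

def turn_sign(heading, tangent):
    """Z-component sign of a x b, with the 2D vectors lifted into 3D.

    heading -- the robot's heading vector 'a'
    tangent -- the path tangent vector 'b'
    Returns -1.0 if the robot is left of the path, +1.0 if it is right
    of the path, and 0.0 if it is perfectly aligned.
    """
    a = np.array([heading[0], heading[1], 0.0])
    b = np.array([tangent[0], tangent[1], 0.0])
    return np.sign(np.cross(a, b)[2])
```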

The state model with state, $\mathbf{x} = [x,\; y,\; \theta]^T$, for our system given our control input can be defined as the following:

$$\dot{\mathbf{x}} = \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} v\cos\theta \\ v\sin\theta \\ \eta\,\omega \end{bmatrix}$$

We define $\eta = \operatorname{sign}\!\left((\mathbf{a} \times \mathbf{b})_z\right)$, which is the correction factor (either $+1$ or $-1$) that determines whether the robot rotates clockwise or counterclockwise.
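For concreteness, a sketch of stepping this state model forward with simple Euler integration (assuming the symbols above; not the project's actual code):

```python
import math

def step_state(x, y, theta, v, omega, eta, dt):
    """Propagate the state model one time step (Euler integration)."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt   # use the negative y-direction in Tkinter's frame
    theta += eta * omega * dt       # eta = +/-1 selects CW vs CCW rotation
    return x, y, theta
```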

The implementation in code is well known:

```python
# The PID control loop
error = distance_from_path()               # signed cross-track error
integral += error * dt                     # accumulate error (Riemann sum)
derivative = (error - last_error) / dt     # finite-difference derivative
control = kp * error + ki * integral + kd * derivative
last_error = error
```

We can easily modify the code block to integrate using the trapezoidal rule or the midpoint rule instead of the standard Riemann sum, to get a better approximation of the integral (and likewise with the derivative, using forward Euler, Runge-Kutta, etc.).
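For instance, a trapezoidal version of the integral update is a one-line change (a sketch; variable names as in the block above):

```python
# Trapezoidal rule: average the current and previous error instead of
# taking the left-endpoint (Riemann) sum.
integral += 0.5 * (error + last_error) * dt
```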

From here, after sufficiently tuning our PID gains of course, we can draw any path on the screen and the robot should be able to follow it!

*Demo: PID*

Lead-Lag Control

I remember being taught the lead-lag controller in my Control Systems class: it can approximate a PI or PD controller but is much more stable, since it avoids many of the problems that arise with the integral and derivative terms. But I never really understood how to actually implement it in code. So far, I haven't found any source code online that implements a lead-lag controller in a way similar to the PID controller; this section aims to fix that.

The PID controller can be described in the Laplace domain as:

$$C(s) = K_p + \frac{K_i}{s} + K_d\, s$$

The $\frac{K_i}{s}$ term means that we are integrating over the error, and this is made clear once we convert it back to the time domain using our Laplace transform tables. But let's do this in a slightly different way, using the convolution theorem. It tells us:

$$\mathcal{L}^{-1}\{F(s)\, G(s)\} = \int_0^t f(\tau)\, g(t - \tau)\, d\tau$$

In the case of our PID integral term, let's set $F(s) = E(s)$ and $G(s) = \frac{1}{s}$. We also know that:

$$g(t) = \mathcal{L}^{-1}\!\left\{\frac{1}{s}\right\} = 1.$$

This leads to the following equation for our output:

$$u(t) = K_i \int_0^t e(\tau)\, d\tau \approx K_i \sum_{k=0}^{i} e_k\, \Delta t$$

Thus, all we need to do for our integral term is sum up the error over all time. As you may have noticed from the PID code block earlier, this is precisely why we keep adding the current error onto our running integral term.

Let's try to apply the same idea to our lead-lag controller, which can be described by the following in the Laplace domain:

$$C(s) = K\,\frac{s + a}{s + b}$$

This time, the $K$ term is our proportional controller, and $\frac{K(a - b)}{s + b}$ becomes our integral controller, since

$$K\,\frac{s + a}{s + b} = K + \frac{K(a - b)}{s + b}.$$

Now,

$$F(s) = E(s)$$

and,

$$G(s) = \frac{1}{s + b},$$

where $g(t) = \mathcal{L}^{-1}\!\left\{\frac{1}{s + b}\right\} = e^{-bt}$. This leads to the following equation for our output:

$$u(t) = K\, e(t) + K(a - b) \int_0^t e(\tau)\, e^{-b(t - \tau)}\, d\tau$$

But, as you may have noticed, there's a small problem. Because we are now multiplying two functions, and we are also time-shifting $g(t)$, the past multiplication terms will change at every step! This wasn't a problem earlier, because in the PID case we had $g(t) = g(t - \tau) = 1$ and didn't have to consider this. To avoid keeping a running history of our error terms, we can make use of a very convenient property of exponentials (credit to @gunvirranu for pointing this out)!

Expanding the error terms at time $i$,

$$I_i = \sum_{k=0}^{i} e_k\, e^{-b(t_i - t_k)},$$

as well as at time $i + 1$,

$$I_{i+1} = \sum_{k=0}^{i+1} e_k\, e^{-b(t_{i+1} - t_k)},$$

it very nicely follows that:

$$I_{i+1} = e^{-b\,\Delta t}\, I_i + e_{i+1}$$

And so, we are well equipped to implement this in code:

```python
import numpy as np

# The lead-lag control loop
error = distance_from_path()                    # signed cross-track error
integral = integral * np.exp(-b * dt) + error   # I_{i+1} = e^(-b*dt) * I_i + e_{i+1}
control = K * (error + (a - b) * integral)
```

And that's it! The simplicity of the lead-lag controller is hidden in the math.

*Demo: Lead-Lag*

More investigation into how to properly tune the gains for the lead-lag controller needs to be done.

Discretizing the Transfer Function

In reality, digital systems are discrete and not continuous. It is no different here. We can use the bilinear transform to discretize our system, which has the added benefit of an accuracy boost since it is derived from the trapezoidal rule. As an example, let's use it to discretize our lead-lag controller.

Our continuous transfer function is:

$$C(s) = \frac{U(s)}{E(s)} = K\,\frac{s + a}{s + b}$$

Applying the bilinear transform, $s \leftarrow \frac{2}{\Delta t}\,\frac{1 - z^{-1}}{1 + z^{-1}}$:

$$C(z) = K\,\frac{(2 + a\,\Delta t) + (a\,\Delta t - 2)\, z^{-1}}{(2 + b\,\Delta t) + (b\,\Delta t - 2)\, z^{-1}}$$

Finally, determining the difference equation:

$$(2 + b\,\Delta t)\, u_k + (b\,\Delta t - 2)\, u_{k-1} = K\left[(2 + a\,\Delta t)\, e_k + (a\,\Delta t - 2)\, e_{k-1}\right]$$

leading to:

$$u_k = \frac{K(2 + a\,\Delta t)\, e_k + K(a\,\Delta t - 2)\, e_{k-1} - (b\,\Delta t - 2)\, u_{k-1}}{2 + b\,\Delta t}$$

Implementing this is simple, as we can use the same trick as with our PID controller: we just keep track of the previous control output and error terms. I'll leave coding this up as an exercise for the reader.

How do I get from here to there?

Dubins Path Calculator

The idea behind a Dubins path is very simple: what is the shortest path between any two points on a 2D grid, given that your vehicle has a minimum turning radius? Since, according to Dubins, there are only six types of curves to consider, the problem really comes down to circle geometry. The curves fall into two families: CSC (Curve, Straight, Curve) and CCC (Curve, Curve, Curve). I set out to handle this graphically in Desmos before attempting to program it. I don't really understand why, but the time I spent on these Desmos visualizations probably amounts to about half of the time I spent on this entire project.

*Figures: CSC and CCC*

I implemented this in three steps:

  1. Draw all "pseudo-legal" Dubins paths (ones that do not consider the vehicle's initial orientation) between the start and end points.
  2. Eliminate illegal paths.
  3. Compute the distance of every remaining path and identify the minimum (sketched below).
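A high-level sketch of that three-step pipeline (the helper functions are hypothetical stubs, not the project's actual API):

```python
def shortest_dubins_path(start, goal, min_turn_radius):
    """Return the minimum-length Dubins path between two poses."""
    # Step 1: all pseudo-legal CSC/CCC candidates, ignoring initial heading.
    candidates = enumerate_candidate_paths(start, goal, min_turn_radius)
    # Step 2: discard paths that violate the vehicle's initial orientation.
    legal = [p for p in candidates if is_legal(p, start)]
    # Step 3: pick the shortest of what remains.
    return min(legal, key=path_length)
```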

Rapidly Exploring Random Trees

TBD

Where am I?

Extended Kalman Filter (EKF)

As defined in the PID control section, our state update equation can be given as the following, this time with a Gaussian state noise term $\mathbf{w}_k \sim \mathcal{N}(\mathbf{0}, \mathbf{Q})$:

$$\mathbf{x}_{k+1} = f(\mathbf{x}_k, \mathbf{u}_k) + \mathbf{w}_k = \begin{bmatrix} x_k + v\cos\theta_k\,\Delta t \\ y_k + v\sin\theta_k\,\Delta t \\ \theta_k + \eta\,\omega\,\Delta t \end{bmatrix} + \mathbf{w}_k$$

Since we are working in 2D, we require a 2D measurement model. Luckily, our LIDAR sensor can measure relative angle and distance, leading to the following measurement model:

$$\mathbf{z}_k = h(\mathbf{x}_k) + \mathbf{v}_k = \begin{bmatrix} \sqrt{(x_\ell - x)^2 + (y_\ell - y)^2} \\ \operatorname{atan2}(y_\ell - y,\; x_\ell - x) - \theta \end{bmatrix} + \mathbf{v}_k$$

where $(x_\ell,\, y_\ell)$ is the position of a LIDAR-detectable object. Note that in implementation we use the atan2 function instead of atan to account for directionality. Given this, the Jacobian matrices with respect to state space and measurement space can be derived (with $q = (x_\ell - x)^2 + (y_\ell - y)^2$):

$$\mathbf{F} = \frac{\partial f}{\partial \mathbf{x}} = \begin{bmatrix} 1 & 0 & -v\sin\theta\,\Delta t \\ 0 & 1 & v\cos\theta\,\Delta t \\ 0 & 0 & 1 \end{bmatrix}, \qquad \mathbf{H} = \frac{\partial h}{\partial \mathbf{x}} = \begin{bmatrix} -\frac{x_\ell - x}{\sqrt{q}} & -\frac{y_\ell - y}{\sqrt{q}} & 0 \\ \frac{y_\ell - y}{q} & -\frac{x_\ell - x}{q} & -1 \end{bmatrix}$$
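A minimal sketch of this measurement model in code (assuming the state and landmark layout above):

```python
import numpy as np

def measurement_model(state, landmark):
    """Expected LIDAR reading: range and bearing to a landmark."""
    x, y, theta = state
    lx, ly = landmark
    r = np.hypot(lx - x, ly - y)
    bearing = np.arctan2(ly - y, lx - x) - theta  # atan2 keeps the quadrant
    return np.array([r, bearing])
```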

Finally, given an initial state covariance, $\mathbf{P}_0$, process noise, $\mathbf{Q}$, and measurement noise, $\mathbf{R}$, we can apply the EKF algorithm:

*Figure: EKF algorithm*
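For reference, here is a minimal sketch of one textbook EKF iteration (generic, not the project's exact implementation; `f` and `h` are the motion and measurement models above, and `F_jac`/`H_jac` their Jacobians):

```python
import numpy as np

def ekf_step(mu, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One EKF predict/update cycle. Pass z=None when no landmark is in range."""
    # Predict with the motion model.
    mu_bar = f(mu, u)
    F = F_jac(mu, u)
    P_bar = F @ P @ F.T + Q
    if z is None:                       # outside the detectable range:
        return mu_bar, P_bar            # dead-reckon, no correction
    # Correct with the measurement.
    H = H_jac(mu_bar)
    S = H @ P_bar @ H.T + R             # innovation covariance
    K = P_bar @ H.T @ np.linalg.inv(S)  # Kalman gain
    mu_new = mu_bar + K @ (z - h(mu_bar))
    P_new = (np.eye(len(mu_new)) - K @ H) @ P_bar
    return mu_new, P_new
```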

*Demo: EKF*

In the demo, the red circle represents a LIDAR-detectable station, the black circular outline represents the detectable range of the station, and the cyan trace is the predicted state. When the robot is outside the detectable range, it cannot update the state with the Kalman gain, since it doesn't receive any sensor readings; it simply predicts with the motion model, without accounting for error. Notice how, when the robot enters the detectable range, it very visibly starts correcting the state and eventually converges quite impressively at the end, despite large deviations during traversal outside the detectable range.

Simultaneous Localization and Mapping (SLAM)

The conversion from EKF to SLAM isn't a difficult one, so long as we continue to filter rather than smooth. Pose-graph-optimization-based SLAM, which falls under the smoothing category, is more widely used in practice because it is more robust and less susceptible to the limitations of the Kalman filter. Regardless, I decided to implement EKF-based SLAM, because understanding it will help me grasp the fundamentals and, more importantly, better appreciate the need for something better.

The biggest difference in implementation from the EKF is that we additionally need to track the positions of the objects we observe. For a robot exploring an unknown environment, we wouldn't know the number of objects to track beforehand, so we would need to constantly resize our matrices. For simplicity, let's assume that we know the number of objects that require tracking in our map. We then need to keep track of $2N$ additional quantities: the x and y coordinates of each of our $N$ objects. Our conversion matrix from our EKF state space to this higher-order space can be defined as the following:

$$\mathbf{F}_x = \begin{bmatrix} \mathbf{I}_{3\times3} & \mathbf{0}_{3\times 2N} \end{bmatrix}$$

Next, we need some way of defining $\mathbf{H}$, but only to allow updates for objects in our current field of view. As such, we define the following matrices for an observed landmark $j$:

$$\mathbf{F}_{x,j} = \begin{bmatrix} \mathbf{I}_{3\times3} & \mathbf{0}_{3\times 2j} & \mathbf{0}_{3\times 2} & \mathbf{0}_{3\times 2(N-j-1)} \\ \mathbf{0}_{2\times3} & \mathbf{0}_{2\times 2j} & \mathbf{I}_{2\times2} & \mathbf{0}_{2\times 2(N-j-1)} \end{bmatrix}, \qquad \mathbf{H}_j = \mathbf{h}_j\, \mathbf{F}_{x,j}$$

where $\mathbf{h}_j$ is the low-dimensional measurement Jacobian from the EKF section, evaluated for landmark $j$.
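A sketch of how these selection matrices can be built with NumPy (assuming landmark $j$ is 0-indexed; names are illustrative):

```python
import numpy as np

def F_x(N):
    """Lift the 3-dim robot state into the (3 + 2N)-dim SLAM state."""
    return np.hstack([np.eye(3), np.zeros((3, 2 * N))])

def F_xj(j, N):
    """Select the robot pose plus landmark j, so the measurement
    Jacobian only touches the landmark currently in view."""
    F = np.zeros((5, 3 + 2 * N))
    F[:3, :3] = np.eye(3)
    F[3:, 3 + 2 * j: 5 + 2 * j] = np.eye(2)
    return F
```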

This is enough to implement EKF-SLAM:

*Figure: EKF-SLAM algorithm*

*Demo: SLAM*

In the demo, the cyan trace represents the predicted state and the blue dots represent the predicted landmark positions. Despite large errors towards the end, caused by the absence of measurement updates, the robot is able to quickly localize and converge on the landmark locations with reasonable accuracy.

Unscented Kalman Filter (UKF)

Since we have a highly non-linear model, the EKF tends to linearize in unwanted places and, in general, the Jacobian tends to under-approximate the true state of the robot. As such, it only makes sense to explore the UKF, since it eliminates the need for Jacobians and almost always beats the EKF.

Here is the algorithm:

*Figure: UKF algorithm*
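As a sketch of the UKF's core idea, here is one common way to generate sigma points and their weights (the Van der Merwe scaled scheme; the parameter values are typical defaults, not necessarily the ones used in this project):

```python
import numpy as np

def sigma_points(mu, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Return 2n+1 sigma points and their mean/covariance weights."""
    n = len(mu)
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * P)   # matrix square root of scaled P
    pts = [mu] + [mu + L[:, i] for i in range(n)] \
               + [mu - L[:, i] for i in range(n)]
    Wm = np.full(2 * n + 1, 0.5 / (n + lam))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + (1.0 - alpha**2 + beta)
    return np.array(pts), Wm, Wc
```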

There are still one or two bugs that need to be fixed with my current implementation, but I hope to have a demo out soon!

Particle Filter

TBD

What's Next?

I've put the remaining items on the list on hold for now, and I may come back to complete them in the future.

Progress

  • Dubins Path Calculator
  • Rapidly Exploring Random Trees Algorithm
  • PID Control
  • Lead-Lag Control
  • Kalman Filter (Extended, Unscented)
  • Particle Filter
  • SLAM

Acknowledgements

  • Professor Gabriele D'Eleuterio: ROB301 lectures and primers
  • Professor Cyrill Stachniss: YouTube series on Kalman Filtering and SLAM
  • Professor Angela Schoellig: ROB310 lectures and class demos
  • Brian Douglas' Control System Lectures on YouTube
