The cozmo_tools package includes a particle filter for localization. ArUco markers serve as landmarks; support for cube landmarks is coming soon.
The file cozmo_fsm/examples/PF_Aruco.py provides a runnable demo of the particle filter using four landmarks arranged as shown in the figure below:
The particles start out with random locations and headings:
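This random initialization might look like the following minimal sketch. It is not the actual cozmo_fsm code; the function name, arena bounds, and units (mm, radians) are assumptions for illustration:

```python
import numpy as np

def make_particles(n, x_range=(-500.0, 500.0), y_range=(-500.0, 500.0)):
    """Create n particles with uniformly random poses (x, y, theta).
    Bounds and units (mm, radians) are hypothetical."""
    rng = np.random.default_rng(0)
    x = rng.uniform(*x_range, n)
    y = rng.uniform(*y_range, n)
    theta = rng.uniform(-np.pi, np.pi, n)
    weights = np.full(n, 1.0 / n)   # all particles start equally likely
    return np.stack([x, y, theta], axis=1), weights

particles, weights = make_particles(500)
```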
There are three sensor models available: a distance-only model, a bearing-only model, and a combined distance and bearing model (the default). Let's consider the distance-only model first. When only landmark 2 is visible, repeated resampling arranges the particles on a ring at the perceived distance, with no information about heading (since we're assuming the robot's perception is omnidirectional). The robot's estimated location (blue triangle) is the weighted average of all the particles, so it does not lie on the ring! Its heading is also random, since we don't have enough information to determine heading.
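The ring arises because a distance-only likelihood rewards every particle whose predicted range to the landmark matches the measurement, regardless of direction. A minimal sketch of such a weighting (hypothetical names; the noise value sigma is an assumption, not the package's actual parameter):

```python
import numpy as np

def distance_weights(particles, landmark, measured_dist, sigma=25.0):
    """Distance-only sensor model: weight each particle by how well its
    predicted distance to the landmark matches the measurement.
    sigma is an assumed sensor-noise standard deviation (mm)."""
    dx = particles[:, 0] - landmark[0]
    dy = particles[:, 1] - landmark[1]
    predicted = np.hypot(dx, dy)            # range from each particle
    w = np.exp(-0.5 * ((predicted - measured_dist) / sigma) ** 2)
    return w / w.sum()                      # normalize to sum to 1
```

Note that the particle's heading never enters this formula, which is exactly why the distance-only model leaves heading undetermined.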
If only landmark 3 is visible, as in the image below, we get a similar ring centered around that landmark. Again, there is no heading information:
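The repeated resampling that concentrates particles onto these rings is commonly done with low-variance (systematic) resampling. This is an illustrative sketch of that standard technique, not necessarily the package's implementation:

```python
import numpy as np

def systematic_resample(particles, weights):
    """Low-variance (systematic) resampling: draw n equally spaced
    pointers into the cumulative weight distribution, so high-weight
    particles are duplicated and low-weight ones die out.
    Weights would then be reset to uniform for the next step."""
    rng = np.random.default_rng(2)
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n   # one random offset
    cumulative = np.cumsum(weights)
    indices = np.searchsorted(cumulative, positions)
    return particles[indices].copy()
```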
However, once the robot moves, each particle's heading affects its predictions about how the landmarks move. Particles with correct headings will make better predictions and thus be weighted more highly. After resampling, the particles will cluster more tightly around the correct location and correct heading value.
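The motion update that makes this work can be sketched as follows: each particle advances along its own heading, plus a little noise. The function name and noise parameters here are assumptions for illustration, not the package's actual values:

```python
import numpy as np

def motion_update(particles, forward, sigma_xy=2.0, sigma_theta=0.02):
    """Advance every particle forward along its own heading, with noise.
    A particle whose heading is wrong translates in the wrong direction,
    so its landmark predictions degrade and it gets down-weighted."""
    rng = np.random.default_rng(1)
    n = len(particles)
    theta = particles[:, 2]
    particles[:, 0] += forward * np.cos(theta) + rng.normal(0, sigma_xy, n)
    particles[:, 1] += forward * np.sin(theta) + rng.normal(0, sigma_xy, n)
    particles[:, 2] += rng.normal(0, sigma_theta, n)
    return particles
```

Two particles at the same location but with different headings end up in different places after this update, which is what lets the sensor model start discriminating between headings.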
In the images below you can see this effect as the robot moves forward while looking at just landmark 3.
Extra landmarks are helpful. When both landmarks 2 and 3 are visible it is possible to narrow down the robot's position more effectively. But as long as we assume that perception is omnidirectional we still have no heading information:
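Combining landmarks amounts to multiplying the per-landmark likelihoods, treating the measurements as independent; a particle must be consistent with every visible landmark to keep a high weight. A minimal sketch under that assumption (hypothetical names and sigma):

```python
import numpy as np

def multi_landmark_weights(particles, landmarks, measured_dists, sigma=25.0):
    """Distance-only weights with several visible landmarks: each landmark
    contributes an independent likelihood, and the particle's weight is
    the product over landmarks. sigma (mm) is an assumed noise value."""
    w = np.ones(len(particles))
    for lm, d in zip(landmarks, measured_dists):
        predicted = np.hypot(particles[:, 0] - lm[0],
                             particles[:, 1] - lm[1])
        w *= np.exp(-0.5 * ((predicted - d) / sigma) ** 2)
    return w / w.sum()
```

With two landmarks the surviving particles sit near the intersection of the two rings, which is why position narrows down even though heading does not.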
Again, motion can provide additional constraints that lead to a good localization with two landmarks.
Similar demonstrations can be made for the bearing-only sensor model. When distance information isn't available we cannot narrow down the robot's position using a single landmark, but with the additional information provided by motion we can force each particle's heading to be consistent with its location.
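A bearing-only likelihood can be sketched like this: each particle predicts the landmark's bearing relative to its own heading, so the weight couples heading and location. Again, names and the noise value are assumptions, not the package's actual code:

```python
import numpy as np

def bearing_weights(particles, landmark, measured_bearing, sigma=0.1):
    """Bearing-only sensor model: weight particles by how well the
    landmark's bearing relative to the particle's heading matches the
    measured bearing (radians). sigma is an assumed noise value."""
    dx = landmark[0] - particles[:, 0]
    dy = landmark[1] - particles[:, 1]
    predicted = np.arctan2(dy, dx) - particles[:, 2]
    diff = predicted - measured_bearing
    err = np.arctan2(np.sin(diff), np.cos(diff))   # wrap to [-pi, pi]
    w = np.exp(-0.5 * (err / sigma) ** 2)
    return w / w.sum()
```

Because the predicted bearing subtracts the particle's heading, a wrong heading can be compensated by a wrong location and vice versa; only motion breaks that ambiguity, as described above.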
With the combined distance and bearing model we can get tight position and heading estimates using landmarks 2 and 3 with just a little bit of motion:
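The combined model can be sketched as the product of the two likelihoods above, treating distance and bearing as independent measurements (hypothetical names; sigma values are assumptions):

```python
import numpy as np

def combined_weights(particles, landmark, meas_dist, meas_bearing,
                     sigma_d=25.0, sigma_b=0.1):
    """Combined sensor model: product of the distance and bearing
    likelihoods for one landmark. The distance term pins the ring and
    the bearing term pins heading, so together they localize quickly."""
    dx = landmark[0] - particles[:, 0]
    dy = landmark[1] - particles[:, 1]
    pred_d = np.hypot(dx, dy)
    diff = np.arctan2(dy, dx) - particles[:, 2] - meas_bearing
    err_b = np.arctan2(np.sin(diff), np.cos(diff))   # wrap to [-pi, pi]
    w = (np.exp(-0.5 * ((pred_d - meas_dist) / sigma_d) ** 2)
         * np.exp(-0.5 * (err_b / sigma_b) ** 2))
    return w / w.sum()
```

In this sketch, two particles at the same spot but facing opposite directions get identical distance likelihoods, yet very different bearing likelihoods, which is why the combined model converges with so little motion.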
You can try these experiments yourself using this marker file.