Landmark Detection & Robot Tracking (SLAM)

Computer Vision Nanodegree, Udacity, Project_3

Project Overview

This project implements SLAM (Simultaneous Localization and Mapping) for a 2-dimensional world. The goal is to combine what we know about robot sensor measurements and movement to create a map of an environment from only the sensor and motion data gathered by a robot over time. SLAM gives us a way to track the location of a robot in the world in real time and to identify the locations of landmarks such as buildings, trees, rocks, and other world features. This is an active area of research in the fields of robotics and autonomous systems.

Below is an example of a 2D robot world with landmarks (purple x's) and the robot (a red 'o'), localized using only sensor and motion data collected by that robot. This is just one example for a 50x50 grid world; in your work you will likely generate a variety of these maps.

The project is broken up into three Python notebooks; the first two are for exploration of the provided code and a review of SLAM architectures. Only Notebook 3 and the robot_class.py file will be graded:

Notebook 1 : Robot Moving and Sensing

Notebook 2 : Omega and Xi, Constraints

Notebook 3 : Landmark Detection and Tracking

robot_class.py : Robot class with its world parameters (a rough sketch of its measurement model follows this list)

helpers.py : helper functions for making data, world display
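
To give a flavor of what robot_class.py implements, here is a minimal sketch of a 2D robot with a noisy sense method. The class layout, parameter names, and noise model below are assumptions for illustration, not the graded file itself:

```python
import random

# Sketch of a 2D robot that senses nearby landmarks with noise.
# Names and details are assumptions, not the graded robot_class.py.
class Robot:
    def __init__(self, world_size=100.0, measurement_range=30.0,
                 motion_noise=1.0, measurement_noise=1.0):
        self.world_size = world_size
        self.measurement_range = measurement_range
        self.motion_noise = motion_noise
        self.measurement_noise = measurement_noise
        self.x = world_size / 2.0   # start in the middle of the world
        self.y = world_size / 2.0
        self.landmarks = []         # list of (x, y) landmark positions

    def sense(self):
        """Return [index, dx, dy] for each landmark within sensor range."""
        measurements = []
        for i, (lx, ly) in enumerate(self.landmarks):
            # noisy relative displacement from robot to landmark
            dx = lx - self.x + (random.random() * 2.0 - 1.0) * self.measurement_noise
            dy = ly - self.y + (random.random() * 2.0 - 1.0) * self.measurement_noise
            if abs(dx) <= self.measurement_range and abs(dy) <= self.measurement_range:
                measurements.append([i, dx, dy])
        return measurements
```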

Local Environment Instructions

  1. Clone the repository, and navigate to the downloaded folder.
     git clone https://github.com/Antanskas/SLAM.git
  2. Create (and activate) a new environment, named cv-nd, with Python 3.6. If prompted to proceed with the install (Proceed [y]/n), type y.

    • Linux or Mac:
    conda create -n cv-nd python=3.6
    source activate cv-nd
    
    • Windows:
    conda create --name cv-nd python=3.6
    activate cv-nd
    
  3. Install a few required pip packages, which are specified in the requirements text file (including OpenCV).

pip install -r requirements.txt
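
Optionally, you can sanity-check the install from the activated environment. This assumes requirements.txt pulls in OpenCV and NumPy (the latter is an assumption, since the file's contents aren't listed here):

python -c "import cv2, numpy; print(cv2.__version__, numpy.__version__)"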

To implement Graph SLAM, a matrix and a vector (omega and xi, respectively) are introduced. The matrix is square and labelled with all the robot poses (x_i) and all the landmarks (L_i).

Graph SLAM

Every time we make an observation, for example when we move between two poses by some distance dx and can relate those two positions, we can represent this as a numerical relationship in these matrices.

We refer to robot poses as Px, Py and landmark positions as Lx, Ly; one way to approach this challenge is to add both the x and y locations to the constraint matrices.
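
As a minimal sketch of that update rule (1-D, x coordinates only, with hypothetical sizes and values; not the notebook's graded implementation), each motion or measurement constraint adds to omega and xi, and solving mu = inverse(omega) * xi recovers the best estimates of all positions at once:

```python
import numpy as np

# Minimal 1-D omega/xi sketch: N poses followed by M landmarks,
# one row/column per variable. Sizes and values are hypothetical.
N, M = 3, 1
size = N + M
omega = np.zeros((size, size))
xi = np.zeros(size)

omega[0][0] = 1      # anchor the initial pose
xi[0] = 50.0         # assumed starting x position

def add_constraint(i, j, d, strength=1.0):
    """Record the relation 'variable j lies d to the right of variable i'."""
    omega[i][i] += strength
    omega[j][j] += strength
    omega[i][j] -= strength
    omega[j][i] -= strength
    xi[i] -= strength * d
    xi[j] += strength * d

add_constraint(0, 1, 5.0)       # motion: pose 0 -> pose 1, dx = 5
add_constraint(1, 2, 5.0)       # motion: pose 1 -> pose 2, dx = 5
add_constraint(0, N + 0, 10.0)  # sense: landmark 0 seen 10 right of pose 0

mu = np.linalg.inv(omega) @ xi  # -> [50. 55. 60. 60.]
print(mu)
```

In the full 2-D problem, each pose and landmark contributes two such variables (one for x, one for y), which is the "add both x and y locations" idea above.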

Grid world

This is the map our robot moves in and the landmarks it estimates. Given the ground-truth robot poses (coordinates after each move) and the ground-truth coordinates of the landmarks on the map, we can compare how well our robot estimates its own poses and the landmark coordinates using SLAM.

Final estimated pose: (4.415016684320619, 72.87959118795443)
True final pose: (3.35286, 71.62254)

Estimated landmarks:
(8.019, 35.446)
(19.882, 41.192)
(45.504, 0.673)
(17.149, 90.208)
(87.297, 32.854)

True landmarks:
(9, 36)
(21, 41)
(47, 1)
(18, 91)
(88, 33)
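
For reference, a quick way to turn these printed values into per-landmark errors (a small helper sketch, not part of the project code):

```python
import math

# Estimates and ground truth copied from the output above.
estimated = [(8.019, 35.446), (19.882, 41.192), (45.504, 0.673),
             (17.149, 90.208), (87.297, 32.854)]
true_landmarks = [(9, 36), (21, 41), (47, 1), (18, 91), (88, 33)]

for k, ((ex, ey), (tx, ty)) in enumerate(zip(estimated, true_landmarks)):
    # Euclidean distance between estimated and true landmark position
    print(f"landmark {k}: error = {math.hypot(ex - tx, ey - ty):.3f}")
```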

Errors over N steps taken

From the test run in Notebook 3 we can also see that the errors between the true and estimated poses, as well as the landmark positions, stay roughly constant as the number of steps grows.

LICENSE: This project is licensed under the terms of the MIT license.
