Visual-Odometry

PROJECT DESCRIPTION

The aim of this project is to implement the steps needed to estimate the 3D motion of the camera and to output a plot of the trajectory of a car driving around the city. As the car moves around the city, we track the change in position of the camera with respect to the initial point.

Please refer to the Project Report for a more detailed description.

Preparing the Input

[Image: pre]

The dataset used is the Oxford Dataset, courtesy of Oxford's Robotics Institute. If downloaded directly, it requires further pre-processing:

  • The input images are in Bayer format and have to be converted to RGB
  • The images have to be undistorted

However, to speed up processing, I have already performed these steps and saved the results in the FRAMES folder, which can be taken directly from the Datasets folder. I have also converted the images to grayscale, as a single channel is easier to process. A sketch of this pipeline is shown below.
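A minimal sketch of the pre-processing, assuming the helpers in ReadCameraModel.py and UndistortImage.py follow the Oxford SDK conventions (ReadCameraModel(models_dir) returning fx, fy, cx, cy, G_camera_image and an undistortion LUT; UndistortImage(image, LUT) returning the corrected image). The folder paths are illustrative, not the exact ones in the repo:

    import cv2
    import glob

    from ReadCameraModel import ReadCameraModel   # assumed Oxford SDK-style helpers
    from UndistortImage import UndistortImage

    fx, fy, cx, cy, G_camera_image, LUT = ReadCameraModel('Datasets/model')

    for path in sorted(glob.glob('Datasets/raw/*.png')):        # illustrative paths
        bayer = cv2.imread(path, -1)                            # raw Bayer frame
        rgb = cv2.cvtColor(bayer, cv2.COLOR_BayerGR2BGR)        # demosaic to colour
        undistorted = UndistortImage(rgb, LUT)                  # remove lens distortion
        gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)    # single-channel frame
        cv2.imwrite('Datasets/Frames/' + path.split('/')[-1], gray)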

[Image: post]

Fundamental Matrix Estimation

[Image: SIFT]

  • The SIFT algorithm is used to detect keypoints and find point correspondences between successive frames
  • The Fundamental matrix is estimated from these correspondences using the 8-point algorithm
  • All points are first normalized: they are translated so their centroid is at the origin and scaled so their mean distance from the new origin is √2
  • The best Fundamental matrix is selected using the RANSAC algorithm (a sketch of the normalized 8-point solver follows this list)
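A minimal sketch of the normalized 8-point solver that would sit inside the RANSAC loop (NumPy only; pts1 and pts2 are assumed to be N×2 arrays of matched pixel coordinates):

    import numpy as np

    def normalize_points(pts):
        # Shift points to their centroid and scale so the mean distance
        # from the new origin is sqrt(2); return points and the 3x3 transform.
        centroid = pts.mean(axis=0)
        mean_dist = np.sqrt(((pts - centroid) ** 2).sum(axis=1)).mean()
        s = np.sqrt(2) / mean_dist
        T = np.array([[s, 0, -s * centroid[0]],
                      [0, s, -s * centroid[1]],
                      [0, 0, 1]])
        pts_h = np.column_stack([pts, np.ones(len(pts))])
        return (T @ pts_h.T).T, T

    def eight_point(pts1, pts2):
        # Estimate the Fundamental matrix from >= 8 correspondences.
        x1, T1 = normalize_points(pts1)
        x2, T2 = normalize_points(pts2)
        A = np.column_stack([x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
                             x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
                             x1[:, 0], x1[:, 1], np.ones(len(x1))])
        F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # null vector of A
        U, S, Vt = np.linalg.svd(F)                 # enforce rank 2
        F = U @ np.diag([S[0], S[1], 0]) @ Vt
        F = T2.T @ F @ T1                           # undo the normalization
        return F / F[2, 2]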

Camera Pose Estimation

  • The Essential matrix is calculated from the Fundamental matrix, accounting for the camera calibration parameters
  • The Essential matrix is decomposed into 4 possible Translation and Rotation pairs (see the sketch below)
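A minimal sketch of this step under the usual pinhole model, with K the 3×3 calibration matrix built from the intrinsics returned by ReadCameraModel:

    import numpy as np

    def essential_from_fundamental(F, K):
        # E = K^T F K, then force the two equal non-zero singular values.
        E = K.T @ F @ K
        U, _, Vt = np.linalg.svd(E)
        return U @ np.diag([1, 1, 0]) @ Vt

    def decompose_essential(E):
        # Decompose E into the 4 candidate (R, t) pairs.
        U, _, Vt = np.linalg.svd(E)
        W = np.array([[0, -1, 0],
                      [1,  0, 0],
                      [0,  0, 1]])
        candidates = []
        for R in (U @ W @ Vt, U @ W.T @ Vt):
            if np.linalg.det(R) < 0:      # keep rotations proper (det = +1)
                R = -R
            for t in (U[:, 2], -U[:, 2]):
                candidates.append((R, t))
        return candidates                  # four (R, t) pairs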

Triangulation Check

[Image: my_code]

The correct T and R pair is found from depth positivity (the cheirality condition): I choose the R and T that give the largest number of triangulated points with positive depth, as sketched below.
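A minimal sketch of the check using linear (DLT) triangulation; pts1, pts2 and K are the same assumed inputs as above:

    import numpy as np

    def count_positive_depth(R, t, K, pts1, pts2):
        # Triangulate each correspondence for one candidate (R, t) and count
        # how many points land in front of both cameras.
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t.reshape(3, 1)])
        count = 0
        for (u1, v1), (u2, v2) in zip(pts1, pts2):
            A = np.vstack([u1 * P1[2] - P1[0],
                           v1 * P1[2] - P1[1],
                           u2 * P2[2] - P2[0],
                           v2 * P2[2] - P2[1]])
            X = np.linalg.svd(A)[2][-1]
            X = X / X[3]                       # homogeneous -> Euclidean
            z1 = X[2]                          # depth in the first camera
            z2 = (R @ X[:3] + t)[2]            # depth in the second camera
            if z1 > 0 and z2 > 0:
                count += 1
        return count

    # The (R, t) among the four candidates with the highest count is kept.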

The values are saved in a CSV file, updated2.csv.

Built-in Check

[Image: Built_in]

Finally, the results are compared against the rotation/translation parameters recovered using cv2.findEssentialMat and cv2.recoverPose from OpenCV (sketched below). The final trajectories from both methods are plotted and compared.
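A minimal sketch of the built-in path, assuming pts1 and pts2 are N×2 float arrays of matched keypoints and K is the 3×3 calibration matrix (the RANSAC parameters shown are illustrative):

    import cv2

    def builtin_pose(pts1, pts2, K):
        # Estimate E and recover the relative pose with OpenCV's built-ins.
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                       prob=0.999, threshold=1.0)
        _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t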

The values are saved in a CSV file, points.csv.
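To build the trajectory that gets plotted, the per-frame relative poses from either method can be chained into a global pose and the camera position plotted in the x-z plane. A rough sketch (relative_poses is a hypothetical list of the (R, t) pairs computed above):

    import numpy as np
    import matplotlib.pyplot as plt

    def plot_trajectory(relative_poses):
        # relative_poses: hypothetical list of per-frame (R, t) pairs.
        H_total = np.eye(4)           # pose of the current frame w.r.t. the first
        xs, zs = [], []
        for R, t in relative_poses:
            H = np.eye(4)
            H[:3, :3] = R
            H[:3, 3] = np.ravel(t)
            H_total = H_total @ H     # chain the relative motions
            xs.append(H_total[0, 3])
            zs.append(H_total[2, 3])
        plt.plot(xs, zs)
        plt.xlabel('x')
        plt.ylabel('z')
        plt.title('Estimated camera trajectory')
        plt.show()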

Final Output

[Video: final output]

DEPENDENCIES

  • Python 3
  • OpenCV
  • NumPy
  • Glob (built-in)
  • Matplotlib
  • Copy (built-in)

FILE DESCRIPTION

  • Code Folder/FINAL CODE.py - The final code for Visual Odometry (my implementation, without built-in functions)

  • Code Folder/Built_in.py - The code implemented entirely with built-in functions

  • Code Folder/ReadCameraModel.py - Loads camera intrinsics and the undistortion LUT from disk

  • Code Folder/UndistortImage.py - Undistorts an image using a lookup table

  • Code Folder/VIDEO.py - Used to display the 2 final plots (my code vs built-in)

  • Dataset folder - Contains a link to the dataset. Should have 3 folders: Frames, SIFT images and model

  • Images folder - Contains images for github use (can be ignored)

  • Output folder - Contains output videos and 2 output CSV files

    • points_final.csv - The output points from Built_in.py
    • updated2_final.csv - The output points from FINAL_CODE.py
  • References folder - Contains supplementary documents that aid in understanding

  • Report folder - Contains Project Report

RUN INSTRUCTIONS

  • Make sure all dependencies are met

  • Ensure the locations of the input files are correct in the code you're running

  • Comment/uncomment as required

  • Run Final_CODE.py (my implementation of Visual Odometry) to generate a new first CSV file

  • Run Built_in.py (the implementation made entirely with built-in functions) to generate a new second CSV file

  • Run VIDEO.py to display the output using the original CSV files

About

Computer Vision - Perception May'19
