
Advanced Lane Detection

Udacity - Self-Driving Car NanoDegree

This article gives an overview of the project: a classical computer vision approach that uses a bird's-eye-view perspective transform for lane detection.

Result: detected lane boundaries overlaid on the road image.

The goals of this project are the following:

  • Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
  • Apply a distortion correction to raw images.
  • Use color transforms, gradients, etc., to create a thresholded binary image.
  • Apply a perspective transform to rectify the binary image ("birds-eye view").
  • Detect lane pixels and fit to find the lane boundary.
  • Determine the curvature of the lane and vehicle position with respect to center.
  • Warp the detected lane boundaries back onto the original image.
  • Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.

You can find a thorough description of the pipeline in the detailed write-up here.
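For example, the camera calibration and distortion-correction steps can be implemented with OpenCV roughly as follows. This is a minimal sketch, not the exact code used in the project; the 9x6 chessboard size and the image paths are assumptions based on the Udacity project layout:

```python
import glob
import cv2
import numpy as np

# Object points for a 9x6 chessboard lying on the z = 0 plane.
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

objpoints, imgpoints = [], []  # 3D points in the world, 2D points in the image
for path in glob.glob('camera_cal/calibration*.jpg'):  # assumed folder layout
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (9, 6), None)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

# Calibrate once, then reuse the camera matrix and distortion coefficients for every frame.
_, mtx, dist, _, _ = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
undistorted = cv2.undistort(cv2.imread('test_images/test1.jpg'), mtx, dist, None, mtx)
```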
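The thresholding and perspective-transform steps might look like the sketch below. The specific colour and gradient thresholds, and the idea of picking `src`/`dst` points on a straight-lane image, are assumptions typical of this kind of pipeline rather than the project's exact values:

```python
import cv2
import numpy as np

def threshold_and_warp(undistorted, src, dst):
    """Combine an HLS S-channel threshold with a Sobel-x gradient threshold,
    then warp the binary result to a bird's-eye view."""
    s = cv2.cvtColor(undistorted, cv2.COLOR_BGR2HLS)[:, :, 2]
    gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)
    sobel_x = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 1, 0))
    sobel_x = np.uint8(255 * sobel_x / np.max(sobel_x))

    # Thresholds are illustrative values, not necessarily those used in the notebooks.
    binary = np.zeros_like(s)
    binary[((s > 170) & (s <= 255)) | ((sobel_x > 20) & (sobel_x <= 100))] = 1

    # src/dst: four corresponding corner points, e.g. picked on a straight-lane image.
    height, width = binary.shape
    warp_matrix = cv2.getPerspectiveTransform(np.float32(src), np.float32(dst))
    warped = cv2.warpPerspective(binary, warp_matrix, (width, height))
    return warped, warp_matrix
```

The inverse matrix, `cv2.getPerspectiveTransform(dst, src)`, can later be used to warp the detected lane boundaries back onto the undistorted frame.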
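Once lane pixels have been found and fitted with a second-order polynomial, the standard radius-of-curvature formula gives the curvature in metres. The metres-per-pixel conversion factors below are assumed values commonly used for this image size, not necessarily the project's:

```python
import numpy as np

YM_PER_PIX = 30 / 720   # assumed metres per pixel along y in the warped image
XM_PER_PIX = 3.7 / 700  # assumed metres per pixel along x (standard lane width)

def curvature_radius(lane_x, lane_y, y_eval):
    """Fit x = a*y**2 + b*y + c in world units (lane_x, lane_y are pixel
    coordinates of detected lane pixels) and evaluate
    R = (1 + (2*a*y + b)**2)**1.5 / |2*a| at y_eval (e.g. the image bottom)."""
    a, b, _ = np.polyfit(lane_y * YM_PER_PIX, lane_x * XM_PER_PIX, 2)
    y = y_eval * YM_PER_PIX
    return (1 + (2 * a * y + b) ** 2) ** 1.5 / abs(2 * a)
```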

Environment

The environment was created with Miniconda using the configuration in environment.yml.
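Assuming environment.yml sits at the repository root, the environment can be recreated with:

```bash
conda env create -f environment.yml
```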

Libraries

The libraries used in this project are:

  • numpy
  • matplotlib
  • opencv
  • jupyter

References

https://github.com/udacity/CarND-Advanced-Lane-Lines

License

MIT License Copyright (c) 2016-2018 Udacity, Inc.
