This is a custom auto-ground-truthing stack that combines deep learning (semantic segmentation), visual odometry, sensor data, and a Kalman filter to generate a path and road edges. It uses data from the CARLA autonomous driving simulator along with the following sensors:
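The intended data flow (segment each frame, then extract road edges with digital image processing) can be sketched as follows. This is a minimal, illustrative sketch: the function names are hypothetical placeholders, the "segmentation" step is a trivial threshold standing in for a SegNet-style model, and the sensor-fusion stage is only noted in a comment.

```python
def segment_frame(frame):
    # Placeholder for the SegNet model: threshold each pixel into a
    # binary road / non-road mask.
    return [[1 if px > 0 else 0 for px in row] for row in frame]

def extract_road_edges(mask):
    # Placeholder digital-image-processing step: for each row of the
    # mask, take the first and last road pixel as the left/right edge.
    edges = []
    for y, row in enumerate(mask):
        xs = [x for x, px in enumerate(row) if px == 1]
        if xs:
            edges.append((y, xs[0], xs[-1]))
    return edges

def auto_ground_truth(frames):
    # Segment and extract edges per frame; the full stack would then
    # fuse visual odometry, IMU/GNSS, and steering angle via a Kalman
    # filter to place these edges along a global path.
    return [extract_road_edges(segment_frame(f)) for f in frames]

frame = [
    [0, 0, 1, 1, 0],
    [0, 1, 1, 1, 1],
]
print(auto_ground_truth([frame]))  # [[(0, 2, 3), (1, 1, 4)]]
```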
- Gyroscope/IMU
- GPS/GNSS (these two sensors are not needed for simulation data, but are required for real-world use)
- camera => visual odometry
- steering wheel angle sensor
- Laika for GNSS processing
- Rednose for Kalman filtering
- SegNet for semantic segmentation
- optimize semantic segmentation
- extract labels from segmented images using digital image processing
- detect poses instead of points (an [R|t] transformation matrix) (optional)
- implement visual SLAM
- implement sensor fusion using Kalman Filters
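As a sketch of the last item, here is a minimal scalar Kalman filter that fuses dead-reckoned position (predicted from a speed sensor) with noisy GPS fixes. This is illustrative only; the class and parameter names are hypothetical, and the stack itself would use Rednose's filters rather than this hand-rolled version.

```python
class Kalman1D:
    """Minimal 1-D Kalman filter: fuse a position predicted from speed
    with noisy GPS position measurements (illustrative sketch only)."""

    def __init__(self, x0, p0, q, r):
        self.x = x0  # position estimate
        self.p = p0  # estimate variance
        self.q = q   # process noise (speed-integration error per step)
        self.r = r   # measurement noise (GPS variance)

    def predict(self, speed, dt):
        # Dead-reckon forward with the speed sensor; uncertainty grows.
        self.x += speed * dt
        self.p += self.q

    def update(self, gps_pos):
        # Blend in the GPS fix, weighted by the Kalman gain.
        k = self.p / (self.p + self.r)
        self.x += k * (gps_pos - self.x)
        self.p *= (1.0 - k)
        return self.x

kf = Kalman1D(x0=0.0, p0=1.0, q=0.1, r=0.5)
kf.predict(speed=10.0, dt=1.0)  # dead reckoning says ~10 m travelled
est = kf.update(gps_pos=10.4)   # GPS fix says 10.4 m
print(round(est, 3))            # 10.275 — between the two, nearer GPS
```

The same predict/update cycle generalizes to the multi-state case (position, heading, velocity) that fusing IMU, GNSS, visual odometry, and steering angle requires.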