This repository contains all the necessary code, documentation, and additional resources for the "Multi-Sensor Fusion for Enhanced Navigation" project. The project aims to improve robotic navigation capabilities by integrating data from multiple sensors to overcome the limitations of 2D LiDAR systems.
The full project report is available in PDF format here.
The LaTeX source files used to generate the report are available here.
This section includes all the source code developed for the project, organized into specific modules:
Code for converting depth images to laser scan data, enhancing obstacle detection in 2D navigation systems. More info
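As a rough illustration of the depth-to-scan conversion described above, the sketch below projects a single row of a depth image onto a planar laser scan. The function name, intrinsics parameters (`fx`, `cx`), and pinhole-camera assumption are illustrative, not taken from the repository code:

```python
import numpy as np

def depth_row_to_scan(depth_row, fx, cx):
    """Convert one row of a depth image (meters) to planar range readings.

    depth_row: 1-D array of depths along the camera's horizontal axis.
    fx, cx: pinhole camera intrinsics (focal length and principal point,
    in pixels). Returns (angles, ranges), one entry per pixel column.
    """
    u = np.arange(depth_row.shape[0])
    # Angle of each pixel ray relative to the optical axis.
    angles = np.arctan2(u - cx, fx)
    # Depth is measured along the optical axis, so divide by cos(angle)
    # to recover the Euclidean range along each ray.
    ranges = depth_row / np.cos(angles)
    return angles, ranges
```

The real module likely also clamps invalid depths and resamples onto the LiDAR's angular grid; this sketch shows only the core geometry.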
Implementation of the robotic control algorithms, including navigation and sensor integration. More info
Core algorithms that fuse data from multiple sensors to provide accurate real-time localization and mapping. More info
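One simple way such a fusion can work, sketched below under the assumption that both scans have been resampled onto the same angular grid, is a conservative per-beam merge: an obstacle reported by either sensor is kept. The function and its parameters are hypothetical, not the project's actual fusion algorithm:

```python
import numpy as np

def fuse_scans(lidar_ranges, camera_ranges, max_range=10.0):
    """Fuse two range scans sampled on the same angular grid.

    Takes the per-beam minimum so an obstacle detected by either
    sensor survives; NaN/inf readings are treated as max_range.
    """
    lidar = np.where(np.isfinite(lidar_ranges), lidar_ranges, max_range)
    cam = np.where(np.isfinite(camera_ranges), camera_ranges, max_range)
    return np.minimum(lidar, cam)
```

This lets camera-derived ranges fill in obstacles (e.g. those above or below the LiDAR plane) that a 2D scan alone would miss.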
Documentation of testing procedures, results, and how they validate the effectiveness of the proposed solutions.
With 2D LiDAR only:
With the fusion algorithm:
Barricades world
With 2D LiDAR only:
With the fusion algorithm:
Outline of potential future extensions and improvements to the project, based on current outcomes and technological advancements.
- Test on real robots.
- Use multiple rows of the depth image rather than a single row.
- Use multiple cameras to achieve 360-degree detection.
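The multi-row extension above could be prototyped by collapsing a band of depth-image rows into one scan row before conversion. This is a hypothetical sketch (function name and parameters are illustrative), not code from the repository:

```python
import numpy as np

def depth_band_to_row(depth_image, center_row, half_height=2):
    """Collapse a band of rows around center_row into a single scan row.

    Taking the minimum valid depth per column catches obstacles that a
    single row would miss, such as low or overhanging objects.
    """
    band = depth_image[center_row - half_height : center_row + half_height + 1]
    # Treat non-finite or non-positive depths as "no return".
    band = np.where(np.isfinite(band) & (band > 0), band, np.inf)
    return band.min(axis=0)
```

The resulting row can then be fed through the same depth-to-scan conversion as a single row would be.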
Setup guidance for this project is to be added.
For more information or inquiries, please contact Xinyang Huang.