[ICRA'23] BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation
[PAMI'23] TransFuser: Imitation with Transformer-Based Sensor Fusion for Autonomous Driving; [CVPR'21] Multi-Modal Fusion Transformer for End-to-End Autonomous Driving
alfred-py: A deep learning utility library for **humans**; more details on how to use the library: https://zhuanlan.zhihu.com/p/341446046
Predict dense depth maps from sparse and noisy LiDAR frames guided by RGB images. (Ranked 1st place on KITTI) [MVA 2019]
Official code for "EagerMOT: 3D Multi-Object Tracking via Sensor Fusion" [ICRA 2021]
ROS package for the Perception (Sensor Processing, Detection, Tracking and Evaluation) of the KITTI Vision Benchmark Suite
Vehicle State Estimation using Error-State Extended Kalman Filter
Deep learning approach for estimation of Remaining Useful Life (RUL) of an engine
Unscented Kalman Filtering on (Parallelizable) Manifolds (UKF-M)
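For context on the UKF-M entry above: the unscented transform propagates a set of sigma points through the nonlinear model instead of linearizing it. A minimal sketch on a plain Euclidean state, not the repo's code (the manifold formulation replaces the `+`/`-` below with retraction and inverse-retraction maps; function names and default parameters are illustrative):

```python
# Minimal unscented-transform sketch on a Euclidean state (illustrative only).
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate 2n+1 sigma points and their weights for a Gaussian (mean, cov)."""
    n = mean.shape[0]
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)              # matrix square root
    points = np.vstack([mean, mean + S.T, mean - S.T])   # shape (2n+1, n)
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)
    return points, wm, wc

def unscented_transform(points, wm, wc, f):
    """Propagate sigma points through a nonlinear function f and recover mean/covariance."""
    ys = np.array([f(p) for p in points])
    mean = wm @ ys
    diff = ys - mean
    cov = (wc[:, None] * diff).T @ diff
    return mean, cov
```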
State estimation and localization of an autonomous vehicle by fusing high-rate IMU, GNSS (GPS), and LiDAR data with an Extended Kalman Filter (EKF).
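The filtering loop in EKF-based localization like the item above boils down to an IMU-driven predict step and a correction step from GNSS or LiDAR position fixes. A minimal sketch, assuming a 2D constant-acceleration motion model and a linear position measurement (so the update here is effectively a plain Kalman update); the class name, state layout, and noise values are illustrative assumptions, not the repo's code:

```python
# Minimal predict/update sketch for IMU + GNSS fusion (illustrative only).
import numpy as np

class SimpleEKF:
    def __init__(self, dt):
        self.dt = dt
        self.x = np.zeros(4)          # state: [px, py, vx, vy]
        self.P = np.eye(4)            # state covariance
        self.Q = np.eye(4) * 0.01     # process noise (assumed)
        self.R = np.eye(2) * 1.0      # GNSS position noise (assumed)
        # Measurement model: GNSS observes position only.
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)

    def predict(self, accel):
        """Propagate the state with IMU acceleration [ax, ay] as control input."""
        dt = self.dt
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)
        B = np.array([[0.5 * dt**2, 0],
                      [0, 0.5 * dt**2],
                      [dt, 0],
                      [0, dt]], dtype=float)
        self.x = F @ self.x + B @ np.asarray(accel, dtype=float)
        self.P = F @ self.P @ F.T + self.Q

    def update(self, position_xy):
        """Correct the prediction with a GNSS (or LiDAR-derived) position fix."""
        z = np.asarray(position_xy, dtype=float)
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R      # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```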
Tensorflow and PyTorch implementation of Unsupervised Depth Completion from Visual Inertial Odometry (in RA-L January 2020 & ICRA 2020)
Code for "CMRNet: Camera to LiDAR-Map Registration" (ITSC 2019)
Cooperative Driving Dataset: a dataset for multi-agent driving scenarios
PyTorch Implementation of Unsupervised Depth Completion with Calibrated Backprojection Layers (ORAL, ICCV 2021)
A differential-drive robot controlled with ROS 2 Humble running on a Raspberry Pi 4 (Ubuntu Server 22.04). The vehicle is equipped with a Raspberry Pi camera for visual feedback and an RPLidar A1 sensor for Simultaneous Localization and Mapping (SLAM), autonomous navigation, and obstacle avoidance.
Code for 'RRPN: Radar Region Proposal Network for Object Detection in Autonomous Vehicles' (ICIP 2019)
MPU6050/MPU9250 I2C and SPI interfaces. Sensor fusion with a complementary filter yields Euler angles; implemented in five different languages.
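The complementary filter mentioned above blends high-frequency gyro integration with the low-frequency tilt implied by the accelerometer. A minimal Python sketch, assuming a generic (gyro, accel) sample interface rather than the repo's MPU6050/9250 drivers; `ALPHA` and the function signature are illustrative:

```python
# Minimal complementary-filter sketch (assumed interface, not the repo's API):
# fuse gyro integration with accelerometer tilt to estimate roll and pitch.
import math

ALPHA = 0.98  # weight on the gyro-integrated angle (illustrative value)

def complementary_filter(roll, pitch, gyro, accel, dt):
    """Fuse one IMU sample.

    roll, pitch : previous angle estimates in radians
    gyro        : (gx, gy, gz) angular rates in rad/s
    accel       : (ax, ay, az) accelerations in m/s^2
    dt          : sample period in seconds
    """
    gx, gy, _ = gyro
    ax, ay, az = accel

    # Tilt angles implied by gravity as measured by the accelerometer.
    roll_acc = math.atan2(ay, az)
    pitch_acc = math.atan2(-ax, math.sqrt(ay * ay + az * az))

    # Blend: high-pass the gyro integration, low-pass the accelerometer tilt.
    roll = ALPHA * (roll + gx * dt) + (1.0 - ALPHA) * roll_acc
    pitch = ALPHA * (pitch + gy * dt) + (1.0 - ALPHA) * pitch_acc
    return roll, pitch
```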
Software for guidance, navigation and control for the Vortex AUVs. Purpose built for competing in AUV/ROV competitions.
Object detection on radar and RGB camera images (https://ieeexplore.ieee.org/document/9191046). Full thesis: RADAR+RGB Fusion for Robust Object Detection in Autonomous Vehicles, Zenodo, https://doi.org/10.5281/zenodo.13738235
Official implementation for paper "Advancing Self-supervised Monocular Depth Learning with Sparse LiDAR"