ROS nodes for visualization, calibration, car and passenger detection for Didi autonomous driving competition.


DiDi competition

Here you can find more info about the challenge.

This repository provides ROS nodes for visualization, calibration, and detection.

Instructions:

Install ROS:

Follow the instructions on the official ROS installation page.

Download dataset:

  • Download the 22GB dataset from here.

Setup:

$ git clone https://github.com/omgteam/Didi-competition-solution.git

$ cd Didi-competition-solution

$ catkin_make

$ source devel/setup.bash

Visualization (Done)

To visualize Dataset Release 2, we need to convert topic /velodyne_packets to /velodyne_points:

First install the ROS Velodyne drivers from https://github.com/ros-drivers/velodyne.git, then:

$ roslaunch velodyne_pointcloud 32e_points.launch

$ roslaunch didi_visualize display_rosbag_rviz.launch rosbag_file:=PATH/NAME.bag

This module is borrowed from https://github.com/jokla/didi_challenge_ros.

Object detection (In progress)

To detect cars, we use lidar and radar sensor data to generate proposals, then project them into the 2D image to classify the target type (car, pedestrian, cyclist, background) and regress to the targets. This kind of solution can handle object detection within a range of 170 meters.

Proposal types: focus point (x,y,z); 3D proposal (x,y,z,w,l,h).

Projection function: a 2D box with the focus point at its center (the box's height and width are a function of the focus point's distance); projection of the 3D proposal (x,y,z,w,l,h).
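The focus-point projection above can be sketched as follows. This is a minimal illustration under assumed values: the camera intrinsics (FX, FY, CX, CY) and the box-size constant K_BOX are placeholders, not the competition's actual calibration, and the inverse-distance box sizing is one plausible choice of "function of distance".

```python
# Sketch: project a focus point (camera frame, z forward) into the image
# and derive a 2D box whose side length shrinks with distance.
# All constants below are illustrative assumptions, not calibrated values.

FX, FY = 1400.0, 1400.0   # assumed focal lengths in pixels
CX, CY = 640.0, 360.0     # assumed principal point
K_BOX = 2000.0            # assumed scale: box side ~ K_BOX / distance

def project_focus_point(x, y, z):
    """Return (u, v, w, h): box center in pixels plus width/height."""
    if z <= 0:
        raise ValueError("point must be in front of the camera")
    u = FX * x / z + CX                              # pinhole projection
    v = FY * y / z + CY
    dist = (x * x + y * y + z * z) ** 0.5            # range to focus point
    side = K_BOX / dist                              # farther -> smaller box
    return u, v, side, side

u, v, w, h = project_focus_point(2.0, 0.0, 20.0)
```

Projecting the full 3D proposal (x,y,z,w,l,h) would instead project all eight corners of the box and take their 2D bounding rectangle.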

Classifier and regressor: CNN-based classification and regression.
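The two-head output of such a detector can be sketched with plain linear heads. This is only an illustration of the head structure (4-way classification plus offset regression); the repo does not specify the CNN backbone or framework, and the weight layout here is a hypothetical placeholder.

```python
import math

# Sketch: two-head detector output -- a classifier over the four target
# types and a regressor for a 3D offset. A real system would feed CNN
# features here; the linear heads below only illustrate the structure.

CLASSES = ["car", "pedestrian", "cyclist", "background"]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

def detector_heads(features, cls_w, reg_w):
    """features: backbone feature vector (list of floats).
    cls_w: 4 weight rows -> class logits; reg_w: 3 rows -> (dx, dy, dz)."""
    logits = [sum(w * f for w, f in zip(row, features)) for row in cls_w]
    probs = softmax(logits)
    offsets = [sum(w * f for w, f in zip(row, features)) for row in reg_w]
    label = CLASSES[max(range(len(probs)), key=lambda i: probs[i])]
    return label, probs, offsets
```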

Detection history can be used to avoid redundant detections and to boost accuracy: recognizing stationary obstacles avoids re-detecting them, while the trajectory-smoothing techniques presented in [1] improve accuracy.
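As a simple stand-in for trajectory smoothing, an exponential moving average over a tracked object's detected centers illustrates the idea; the smoothing weight ALPHA is an illustrative assumption, and [1] describes a more elaborate trajectory-based method.

```python
# Sketch: exponentially smooth one object's detected center across frames.
# ALPHA is an illustrative assumption (weight of the newest observation).

ALPHA = 0.6

def smooth_trajectory(observations, alpha=ALPHA):
    """observations: list of (x, y, z) detections of one object over time.
    Returns the smoothed trajectory. A stationary obstacle converges to a
    fixed point, which also flags it as a candidate to skip re-detecting."""
    smoothed = []
    state = None
    for obs in observations:
        if state is None:
            state = obs                      # initialize on first detection
        else:
            state = tuple(alpha * o + (1 - alpha) * s
                          for o, s in zip(obs, state))
        smoothed.append(state)
    return smoothed
```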

In src/didi_proposal, we experiment with the following algorithms:

Algorithm 1

Use radar points to generate region proposals, then project them into the image for classification. The calibration code is done; we are waiting for a calibration file to verify its correctness.

Algorithm 2

Feed a bird's-eye view of the lidar and radar data, together with the camera image, into a CNN; classify whether cars are present in the camera frame, and regress to their position relative to the capture car at the center. This method only applies when a single moving car is in view.
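The bird's-eye-view input for Algorithm 2 can be sketched as a simple occupancy grid rasterized from lidar returns. The grid extent, resolution, and hit-count encoding below are illustrative assumptions; a real pipeline might encode height or intensity channels instead.

```python
# Sketch: rasterize lidar points into a bird's-eye-view occupancy grid,
# the kind of lidar image a CNN could consume alongside the camera frame.
# Grid extent and cell size are illustrative assumptions.

X_RANGE = (0.0, 40.0)    # meters ahead of the capture car
Y_RANGE = (-20.0, 20.0)  # meters left/right of the capture car
CELL = 0.5               # meters per grid cell

def birdview_grid(points):
    """points: iterable of (x, y, z) lidar returns in the car frame.
    Returns a 2D list of per-cell hit counts (rows = x bins, cols = y bins);
    points outside the configured range are dropped."""
    rows = int((X_RANGE[1] - X_RANGE[0]) / CELL)
    cols = int((Y_RANGE[1] - Y_RANGE[0]) / CELL)
    grid = [[0] * cols for _ in range(rows)]
    for x, y, _z in points:
        if X_RANGE[0] <= x < X_RANGE[1] and Y_RANGE[0] <= y < Y_RANGE[1]:
            r = int((x - X_RANGE[0]) / CELL)
            c = int((y - Y_RANGE[0]) / CELL)
            grid[r][c] += 1
    return grid
```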

Reference

[1] Jiang, Chunhui, et al. "A trajectory-based approach for object detection from video." 2016 International Joint Conference on Neural Networks (IJCNN). IEEE, 2016.
