Curb detection result demo (short): https://github.com/STWin1/curb/blob/main/result/result.gif

Our paper link will be updated: TODO.
NRS-Dataset:
Overall:
The dataset includes 40,000+ point cloud frames, of which 6,000+ are labeled.
Our NRS-Dataset can be downloaded at: https://pan.baidu.com/s/1OoPrI-5ltRns0Omrp-BIsw?pwd=3udc (extraction code: 3udc)
Google Drive: TODO
Details:
The NRS-Dataset contains around 6,000 labeled point cloud frames. The ratio of day scenarios to night scenarios is 3.5 : 2.5. The data was gathered with an RS-Ruby LiDAR {https://www.robosense.ai/en/rslidar/RS-Ruby} on both city roads in Shenyang and campus roads at the company NEUSOFT REACHAUTO {https://www.reachauto.com/en/}.
The original point clouds, with 46,000+ points per frame, were collected by our autonomous driving car. A front-view camera, space-time synchronized with our LiDAR, simultaneously captures the corresponding scene semantic information. We set the region of interest (ROI) to [-22.4 m, 22.4 m].
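Cropping the raw cloud to the ROI can be sketched as below. This is an illustrative snippet, not the repo's code: the `(N, 3)` array layout and the choice of applying the [-22.4 m, 22.4 m] bound to the lateral (x) axis are assumptions; adjust for your sensor frame.

```python
import numpy as np

def crop_to_roi(points, x_range=(-22.4, 22.4)):
    """Keep only points whose x coordinate falls inside the ROI.

    points: (N, 3) array of [x, y, z] LiDAR returns.
    Applying the bound to x is an assumption; swap the column
    index if your ROI is defined on another axis.
    """
    mask = (points[:, 0] >= x_range[0]) & (points[:, 0] <= x_range[1])
    return points[mask]

# Demo on a synthetic cloud of 1,000 points in [-50, 50] m.
rng = np.random.default_rng(0)
cloud = rng.uniform(-50.0, 50.0, size=(1000, 3))
roi_cloud = crop_to_roi(cloud)
```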
This dataset is labeled for 3D curb detection. For example, Figure 1 shows an original point cloud.

Then, we transform the point cloud into a BEV representation, as shown in Figure 2.
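The BEV step can be sketched as a simple occupancy rasterization. The grid extents, the 0.1 m cell size, and the binary-occupancy encoding here are illustrative assumptions, not the dataset's actual parameters.

```python
import numpy as np

def points_to_bev(points, x_range=(-22.4, 22.4), y_range=(0.0, 44.8), res=0.1):
    """Rasterize an (N, 3) point cloud into a binary BEV occupancy image.

    Extents and 0.1 m resolution are assumptions for illustration.
    Returns an (H, W) uint8 grid where 1 marks a cell containing points.
    """
    w = int(round((x_range[1] - x_range[0]) / res))
    h = int(round((y_range[1] - y_range[0]) / res))
    bev = np.zeros((h, w), dtype=np.uint8)
    cols = np.floor((points[:, 0] - x_range[0]) / res).astype(int)
    rows = np.floor((points[:, 1] - y_range[0]) / res).astype(int)
    valid = (cols >= 0) & (cols < w) & (rows >= 0) & (rows < h)
    bev[rows[valid], cols[valid]] = 1
    return bev

# Demo: two in-range points and one outside the forward range.
pts = np.array([[0.0, 10.0, 0.2], [5.0, 20.0, 0.1], [0.0, 100.0, 0.0]])
bev = points_to_bev(pts)
```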
Next, we extract curb information in the BEV representation and build the ground truth from it. We use the labelme tool to label the curb lines.
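Reading the resulting annotations back can be sketched as below. This assumes the curbs were saved as labelme `linestrip` shapes with the label `curb`; the label string and shape type are assumptions about the annotation convention, not confirmed by the repo.

```python
import json
import numpy as np

def load_curb_polylines(json_path):
    """Read curb polylines from a labelme annotation JSON file.

    Assumes curbs are 'linestrip' shapes labeled 'curb' (an assumed
    convention). Returns a list of (K, 2) float arrays of BEV pixel
    coordinates, one per curb line.
    """
    with open(json_path) as f:
        ann = json.load(f)
    return [np.asarray(s["points"], dtype=np.float32)
            for s in ann.get("shapes", [])
            if s.get("label") == "curb"]
```

Each polyline can then be rasterized onto the BEV grid to form a per-pixel ground-truth mask.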
If you have any questions, please contact: jgaomai@mail.ustc.edu.cn

