Created by Jisheng Yang, Zijun Huang, Maochun Huang, Xianxina Zeng, Dong Li, Yun Zhang
This repository is the code release of our PRCV 2019 paper (here). In this work, we propose a deep learning based method to segment power line corridor LiDAR point clouds. We design an effective channel representation for LiDAR point clouds and adapt a point cloud segmentation network (PointNet) as our base network. To verify the generalization ability of our channel representation, we also run experiments on the KITTI dataset. Experiments show that our channel representation not only works well on the Power Line Corridor LiDAR Point Cloud dataset, but also generalizes well to KITTI. Our training code is mainly adapted from (PointNet).
Since KITTI does not provide point-wise semantic labels, we obtain them with the method described in SqueezeSeg, assigning the same label to all points that fall within an object's 3D bounding box.
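The label-transfer idea above can be sketched in a few lines. This is a simplified, axis-aligned version (KITTI boxes are actually oriented, so the real code rotates points into the box frame first); the function name and signature are illustrative, not taken from the released scripts.

```python
import numpy as np

def label_points_in_box(points, box_min, box_max, class_id, labels=None):
    """Assign class_id to every point inside an axis-aligned 3D box.

    points:  (N, 3) array of x, y, z coordinates.
    box_min, box_max: length-3 arrays giving the box extents.
    labels:  optional (N,) label array; 0 means background.
    """
    points = np.asarray(points, dtype=np.float32)
    if labels is None:
        labels = np.zeros(len(points), dtype=np.int64)
    # A point is inside the box when all three coordinates lie within the extents.
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    labels[inside] = class_id
    return labels
```

Points covered by several boxes simply keep the label of the last box processed, which matches the simple per-box assignment described above.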
For more details of our method, please refer to our paper.
If you find our work useful in your research, please consider citing:
@article{powerline_segmentation,
  author  = {Jisheng Yang and Zijun Huang and Maochun Huang and Xianxina Zeng and Dong Li and Yun Zhang},
  title   = {Power Line Corridor LiDAR Point Cloud Segmentation Using Convolutional Neural Network},
  journal = {PRCV 2019},
  year    = {2019}
}
The code in this release is implemented with Python 3.6. Please install numpy==1.16 and tensorflow==1.14.
The Power Line Corridor LiDAR Point Cloud dataset is classified, so only the experiments on the KITTI dataset are described here. However, the code for the Power Line Corridor LiDAR Point Cloud dataset is also released.
Download the KITTI 3D object detection dataset (here), put the archives in /data, unzip them, and organize the folders as follows:

data/kitti/
    data_object_velodyne/
    label2/
Run calculate_3d_bbox_corners.py to extract point clouds from KITTI and label them; you can use CloudCompare (CC) to visualize the results in /data/labeled_point_cloud/. Then run nine2four.py to remove some labels, since we only consider three classes: Car, Pedestrian, Cyclist. You can also visualize the results in /data/input_point_cloud_dir.
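The nine2four step collapses KITTI's nine object types down to the three kept classes plus background. The mapping below is a sketch of that idea with assumed class indices; consult nine2four.py for the exact mapping used.

```python
# Assumed class indices: 0 = background, then the three kept KITTI types.
KEPT = {"Car": 1, "Pedestrian": 2, "Cyclist": 3}

def nine_to_four(kitti_type):
    """Map a KITTI object type string to one of four labels (0 = background).

    Types such as Van, Truck, Person_sitting, Tram, Misc, and DontCare
    all fall through to background.
    """
    return KEPT.get(kitti_type, 0)
```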
Run data_process_base_kitti.py and /sem_seg/seg_codes/train_base.py in sequence. Then run /sem_seg/seg_codes/batch_inference_base.py; you can visualize the results in /log_base/dump/. Finally, open /sem_seg/seg_codes/statistics_mul.py, make sure results_dir = base_dir + '/../log_base/dump', and run it to see the IoU, precision, and recall.
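The metrics reported by statistics_mul.py can be computed per class from the predicted and ground-truth labels; this standalone sketch shows the standard definitions (the function name is illustrative, not the repo's API).

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes):
    """Per-class IoU, precision, and recall for a semantic segmentation result."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    iou, precision, recall = [], [], []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))  # correctly predicted as class c
        fp = np.sum((pred == c) & (gt != c))  # predicted c, actually other
        fn = np.sum((pred != c) & (gt == c))  # missed points of class c
        iou.append(tp / max(tp + fp + fn, 1))
        precision.append(tp / max(tp + fp, 1))
        recall.append(tp / max(tp + fn, 1))
    return iou, precision, recall
```

The same function applies unchanged to the local and v2 runs by pointing it at the corresponding dump directory's labels.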
Run data_process_local_kitti.py and /sem_seg/seg_codes/train_local.py in sequence. Then run /sem_seg/seg_codes/batch_inference_local.py; you can visualize the results in /log_local/dump/. Finally, open /sem_seg/seg_codes/statistics_mul.py, make sure results_dir = base_dir + '/../log_local/dump', and run it to see the IoU, precision, and recall.
We found that a channel representation using I works better on the KITTI dataset, where I denotes the original intensity. You can try the following steps to reproduce the results.
Run data_process_local_kitti_v2.py and /sem_seg/seg_codes/train_local_v2.py in sequence. Then run /sem_seg/seg_codes/batch_inference_local_v2.py; you can visualize the results in /log_local_v2/dump/. Finally, open /sem_seg/seg_codes/statistics_mul.py, make sure results_dir = base_dir + '/../log_local_v2/dump', and run it to see the IoU, precision, and recall.
These results differ from those in the paper, and they are better, because we did not have time to train for enough epochs before submitting the paper. Table 1 shows the IoU results after 20 epochs.
Table 1
References:
- PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation (CVPR 2017)
- SqueezeSeg: Convolutional Neural Nets with Recurrent CRF for Real-Time Road-Object Segmentation from a 3D LiDAR Point Cloud (ICRA 2018)