[Benchmark] Add PV RCNN benchmark (open-mmlab#2045)
* fix a bug

* fix a batch inference bug

* fix docs

* add pvrcnn benchmark

* fix

* add link

* add

* fix lint
VVsssssk authored and ZwwWayne committed Dec 3, 2022
1 parent c543b48 commit cb7c679
Showing 6 changed files with 78 additions and 1 deletion.
2 changes: 2 additions & 0 deletions README.md
@@ -159,6 +159,7 @@ Results and models are available in the [model zoo](docs/en/model_zoo.md).
<li><a href="configs/point_rcnn">PointRCNN (CVPR'2019)</a></li>
<li><a href="configs/parta2">Part-A2 (TPAMI'2020)</a></li>
<li><a href="configs/centerpoint">CenterPoint (CVPR'2021)</a></li>
<li><a href="configs/pv_rcnn">PV-RCNN (CVPR'2020)</a></li>
</ul>
<li><b>Indoor</b></li>
<ul>
@@ -227,6 +228,7 @@ Results and models are available in the [model zoo](docs/en/model_zoo.md).
| MonoFlex |||||||||||
| SA-SSD |||||||||||
| FCAF3D |||||||||||
| PV-RCNN |||||||||||

**Note:** All the about **300+ models, methods of 40+ papers** in 2D detection supported by [MMDetection](https://github.com/open-mmlab/mmdetection/blob/3.x/docs/en/model_zoo.md) can be trained or used in this codebase.

42 changes: 42 additions & 0 deletions configs/pv_rcnn/README.md
@@ -0,0 +1,42 @@
# PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection

> [PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection](https://arxiv.org/abs/1912.13192)
<!-- [ALGORITHM] -->

## Introduction

3D object detection has been receiving increasing attention from both industry and academia thanks to its wide applications in various fields such as autonomous driving and robotics. LiDAR sensors are widely adopted in autonomous driving vehicles and robots for capturing 3D scene information as sparse and irregular point clouds, which provide vital cues for 3D scene perception and understanding. In this paper, we propose to achieve high performance 3D object detection by designing novel point-voxel integrated networks to learn better 3D features from irregular point clouds.

<div align=center>
<img src="https://user-images.githubusercontent.com/88368822/202114244-ccf52f56-b8c9-4f1b-9cc2-80c7a9952c99.png" width="800"/>
</div>

## Results and models

### KITTI

| Backbone | Class | Lr schd | Mem (GB) | Inf time (fps) | mAP | Download |
| :---------------------------------------------: | :-----: | :--------: | :------: | :------------: | :---: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| [SECFPN](./pv_rcnn_8xb2-80e_kitti-3d-3class.py) | 3 Class | cyclic 80e | 5.4 | | 72.28 | [model](https://download.openmmlab.com/mmdetection3d/v1.1.0_models/pv_rcnn/pv_rcnn_8xb2-80e_kitti-3d-3class/pv_rcnn_8xb2-80e_kitti-3d-3class_20221117_234428-b384d22f.pth) \| [log](https://download.openmmlab.com/mmdetection3d/v1.1.0_models/pv_rcnn/pv_rcnn_8xb2-80e_kitti-3d-3class/pv_rcnn_8xb2-80e_kitti-3d-3class_20221117_234428.json) |

Note: mAP denotes the AP11 result averaged over the 3 classes under the moderate difficulty setting.
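As a sketch, the checkpoint above should be reproducible with the standard MMDetection3D entry points; the config and checkpoint paths come from this commit, while the `tools/train.py` and `tools/test.py` invocations are assumed from the usual repository layout:

```shell
# Sketch: standard MMDetection3D train/test commands (tool paths assumed, not verified here).
# Train PV-RCNN on KITTI with the config added in this commit:
python tools/train.py configs/pv_rcnn/pv_rcnn_8xb2-80e_kitti-3d-3class.py

# Evaluate a downloaded checkpoint:
python tools/test.py configs/pv_rcnn/pv_rcnn_8xb2-80e_kitti-3d-3class.py \
    checkpoints/pv_rcnn_8xb2-80e_kitti-3d-3class_20221117_234428-b384d22f.pth
```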

Detailed performance on KITTI 3D detection is as follows, evaluated by the AP11 metric:

| | Easy | Moderate | Hard |
| ---------- | :---: | :------: | :---: |
| Car | 89.20 | 83.72 | 78.79 |
| Pedestrian | 66.64 | 59.84 | 55.33 |
| Cyclist | 87.25 | 73.27 | 69.61 |
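As a quick consistency check (pure arithmetic, no mmdet3d needed), the reported 72.28 mAP matches the mean of the three moderate-setting AP11 values in the table above:

```python
# Moderate-setting AP11 per class, copied from the table above.
moderate_ap11 = {"Car": 83.72, "Pedestrian": 59.84, "Cyclist": 73.27}

# The 3-class mAP is the mean over the per-class results.
map_moderate = sum(moderate_ap11.values()) / len(moderate_ap11)
print(round(map_moderate, 2))  # 72.28, matching the benchmark table
```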

## Citation

```latex
@inproceedings{ShaoshuaiShi2020PVRCNNPF,
  title={PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection},
  author={Shaoshuai Shi and Chaoxu Guo and Li Jiang and Zhe Wang and Jianping Shi and Xiaogang Wang and Hongsheng Li},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020}
}
```
29 changes: 29 additions & 0 deletions configs/pv_rcnn/metafile.yml
@@ -0,0 +1,29 @@
Collections:
  - Name: PV-RCNN
    Metadata:
      Training Data: KITTI
      Training Techniques:
        - AdamW
      Training Resources: 8x A100 GPUs
      Architecture:
        - Feature Pyramid Network
    Paper:
      URL: https://arxiv.org/abs/1912.13192
      Title: 'PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection'
    README: configs/pv_rcnn/README.md
    Code:
      URL: https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/mmdet3d/models/detectors/pv_rcnn.py#L12
      Version: v1.1.0rc2

Models:
  - Name: pv_rcnn_8xb2-80e_kitti-3d-3class
    In Collection: PV-RCNN
    Config: configs/pv_rcnn/pv_rcnn_8xb2-80e_kitti-3d-3class.py
    Metadata:
      Training Memory (GB): 5.4
    Results:
      - Task: 3D Object Detection
        Dataset: KITTI
        Metrics:
          mAP: 72.28
    Weights: https://download.openmmlab.com/mmdetection3d/v1.1.0_models/pv_rcnn/pv_rcnn_8xb2-80e_kitti-3d-3class/pv_rcnn_8xb2-80e_kitti-3d-3class_20221117_234428-b384d22f.pth
4 changes: 4 additions & 0 deletions docs/en/model_zoo.md
@@ -108,6 +108,10 @@ Please refer to [SA-SSD](https://github.com/open-mmlab/mmdetection3d/blob/master

Please refer to [FCAF3D](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/fcaf3d) for details. We provide FCAF3D baselines on the ScanNet, S3DIS, and SUN RGB-D datasets.

### PV-RCNN

Please refer to [PV-RCNN](https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/configs/pv_rcnn) for details. We provide PV-RCNN baselines on the KITTI dataset.

### Mixed Precision (FP16) Training

Please refer to [Mixed Precision (FP16) Training on PointPillars](https://github.com/open-mmlab/mmdetection3d/tree/v1.0.0.dev0/configs/pointpillars/hv_pointpillars_fpn_sbn-all_fp16_2x8_2x_nus-3d.py) for details.
2 changes: 1 addition & 1 deletion tests/test_models/test_detectors/test_pvrcnn.py
@@ -17,7 +17,7 @@ def test_pvrcnn(self):
DefaultScope.get_instance('test_pvrcnn', scope_name='mmdet3d')
setup_seed(0)
pvrcnn_cfg = get_detector_cfg(
'pvrcnn/pvrcnn_8xb2-80e_kitti-3d-3class.py')
'pv_rcnn/pv_rcnn_8xb2-80e_kitti-3d-3class.py')
model = MODELS.build(pvrcnn_cfg)
num_gt_instance = 2
packed_inputs = create_detector_inputs(num_gt_instance=num_gt_instance)
