Commit
fix markdown linting errors (#468)
Wuziyi616 committed Apr 21, 2021
1 parent a588943 commit 6942cf9
Showing 15 changed files with 33 additions and 4 deletions.
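Most of these changes insert blank lines around fenced code blocks and headings, the fixes required by markdownlint rules such as MD022 and MD031. The opening-fence half of that normalization can be sketched as a small Python helper (hypothetical; not part of this commit):

```python
def pad_fences(text: str) -> str:
    """Insert a blank line before an opening code fence that directly
    follows non-blank text (the opening-fence side of MD031)."""
    out, in_fence = [], False
    for line in text.split("\n"):
        if line.lstrip().startswith("```"):
            if not in_fence and out and out[-1].strip():
                out.append("")  # blank line before the opening fence
            in_fence = not in_fence
        out.append(line)
    return "\n".join(out)
```

Running it over `"text\n```\ncode\n```"` yields the blank-line-padded form these diffs show; text that is already padded passes through unchanged.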
3 changes: 3 additions & 0 deletions .github/ISSUE_TEMPLATE/error-report.md
@@ -18,9 +18,11 @@ A clear and concise description of what the bug is.

**Reproduction**
1. What command or script did you run?

```
A placeholder for the command.
```

2. Did you make any modifications to the code or config? Do you understand what you have modified?
3. What dataset did you use?

@@ -33,6 +35,7 @@ A placeholder for the command.

**Error traceback**
If applicable, paste the error traceback here.

```
A placeholder for the traceback.
```
5 changes: 5 additions & 0 deletions .github/ISSUE_TEMPLATE/reimplementation_questions.md
@@ -30,13 +30,17 @@ A clear and concise description of the problem you met and what you have done.

**Reproduction**
1. What command or script did you run?

```
A placeholder for the command.
```

2. Which config did you run?

```
A placeholder for the config.
```

3. Did you make any modifications to the code or config? Do you understand what you have modified?
4. What dataset did you use?

@@ -50,6 +54,7 @@
**Results**

If applicable, paste the related results here, e.g., what you expect and what you get.

```
A placeholder for results comparison
```
2 changes: 2 additions & 0 deletions configs/3dssd/README.md
@@ -16,6 +16,7 @@ We implement 3DSSD and provide the results and checkpoints on KITTI datasets.
```

### Experiment details on KITTI datasets

Some settings in our implementation are different from the [official implementation](https://github.com/Jia-Research-Lab/3DSSD), which bring marginal differences in performance on KITTI datasets in our experiments. To simplify and unify the models in our implementation, we skip them. These differences are listed below:
1. We keep the scenes without any object while the official code skips these scenes in training. In the official implementation, only 3229 and 3394 samples are used as training and validation sets, respectively. In our implementation, we keep using 3712 and 3769 samples as training and validation sets, respectively, as those used for all the other models in our implementation on KITTI datasets.
2. We do not modify the decay of `batch normalization` during training.
@@ -25,6 +26,7 @@ Some settings in our implementation
## Results

### KITTI

| Backbone |Class| Lr schd | Mem (GB) | Inf time (fps) | mAP |Download |
| :---------: | :-----: | :------: | :------------: | :----: |:----: | :------: |
| [PointNet2SAMSG](./3dssd_kitti-3d-car.py)| Car |72e|4.7||78.39(81.00)<sup>1</sup>|[model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/3dssd/3dssd_kitti-3d-car_20210324_122002-07e9a19b.pth) &#124; [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/3dssd/3dssd_kitti-3d-car_20210324_122002.log.json)|
1 change: 1 addition & 0 deletions configs/centerpoint/README.md
@@ -26,6 +26,7 @@ We follow the below style to name config files. Contributors are advised to follow the same style.
`{schedule}`: training schedule, options are 1x, 2x, 20e, etc. 1x and 2x mean 12 epochs and 24 epochs respectively. 20e is adopted in cascade models and denotes 20 epochs. For 1x/2x, the initial learning rate decays by a factor of 10 at the 8th/16th and 11th/22nd epochs. For 20e, the initial learning rate decays by a factor of 10 at the 16th and 19th epochs.

`{dataset}`: dataset like nus-3d, kitti-3d, lyft-3d, scannet-3d, sunrgbd-3d. We also indicate the number of classes we are using if there exist multiple settings, e.g., kitti-3d-3class and kitti-3d-car means training on KITTI dataset with 3 classes and single class, respectively.
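The schedule part of the convention above (decay by a factor of 10 at fixed epochs) can be sketched as a lookup (hypothetical helper, not part of the repo):

```python
def lr_decay_epochs(schedule: str) -> list:
    """Epochs at which the initial learning rate decays by a factor
    of 10, per the schedule naming convention described above."""
    table = {
        "1x": [8, 11],    # 12 epochs in total
        "2x": [16, 22],   # 24 epochs in total
        "20e": [16, 19],  # 20 epochs, adopted in cascade models
    }
    return table[schedule]
```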

```
@article{yin2021center,
title={Center-based 3D Object Detection and Tracking},
1 change: 1 addition & 0 deletions configs/dynamic_voxelization/README.md
@@ -5,6 +5,7 @@
[ALGORITHM]

We implement Dynamic Voxelization, proposed in the paper below, and provide its results and models on the KITTI dataset.

```
@article{zhou2019endtoend,
title={End-to-End Multi-View Fusion for 3D Object Detection in LiDAR Point Clouds},
2 changes: 2 additions & 0 deletions configs/fp16/README.md
@@ -13,12 +13,14 @@ Mixed precision training for PointNet-based methods will be supported in the future.
## Results

### SECOND on KITTI dataset

| Backbone |Class| Lr schd | FP32 Mem (GB) | FP16 Mem (GB) | FP32 mAP | FP16 mAP |Download |
| :---------: | :-----: | :------: | :------------: | :----: |:----: | :------: | :------: |
| [SECFPN](./hv_second_secfpn_fp16_6x8_80e_kitti-3d-car.py)| Car |cyclic 80e|5.4|2.9|79.07|78.72|[model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car_20200924_211301-1f5ad833.pth)&#124; [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car/hv_second_secfpn_fp16_6x8_80e_kitti-3d-car_20200924_211301.log.json)|
| [SECFPN](./hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class.py)| 3 Class |cyclic 80e|5.4|2.9|64.41|67.4|[model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class_20200925_110059-05f67bdf.pth) &#124; [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class_20200925_110059.log.json)|

### PointPillars on nuScenes dataset

| Backbone | Lr schd | FP32 Mem (GB) | FP16 Mem (GB) | FP32 mAP | FP32 NDS| FP16 mAP | FP16 NDS| Download |
| :---------: | :-----: | :------: | :------------: | :----: |:----: | :----: |:----: | :------: |
|[SECFPN](./hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d.py)|2x|16.4|8.37|35.17|49.7|35.19|50.27|[model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d_20201020_222626-c3f0483e.pth) &#124; [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/fp16/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d/hv_pointpillars_secfpn_sbn-all_fp16_2x8_2x_nus-3d_20201020_222626.log.json)|
2 changes: 2 additions & 0 deletions configs/h3dnet/README.md
@@ -5,6 +5,7 @@
[ALGORITHM]

We implement H3DNet and provide its results and checkpoints on the ScanNet dataset.

```
@inproceedings{zhang2020h3dnet,
author = {Zhang, Zaiwei and Sun, Bo and Yang, Haitao and Huang, Qixing},
@@ -17,6 +18,7 @@ We implement H3DNet and provide the result and checkpoints on ScanNet datasets.
## Results

### ScanNet

| Backbone | Lr schd | Mem (GB) | Inf time (fps) | AP@0.25 |AP@0.5| Download |
| :---------: | :-----: | :------: | :------------: | :----: |:----: | :------: |
| [MultiBackbone](./h3dnet_3x8_scannet-3d-18class.py) | 3x |7.9||66.43|48.01|[model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/h3dnet/h3dnet_scannet-3d-18class/h3dnet_scannet-3d-18class_20200830_000136-02e36246.pth) &#124; [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/h3dnet/h3dnet_scannet-3d-18class/h3dnet_scannet-3d-18class_20200830_000136.log.json) |
1 change: 1 addition & 0 deletions configs/mvxnet/README.md
@@ -5,6 +5,7 @@
[ALGORITHM]

We implement MVX-Net and provide its results and models on KITTI dataset.

```
@inproceedings{sindagi2019mvx,
title={MVX-Net: Multimodal voxelnet for 3D object detection},
2 changes: 1 addition & 1 deletion configs/parta2/README.md
@@ -14,8 +14,8 @@ We implement Part-A^2 and provide its results and checkpoints on KITTI dataset.
year={2020},
publisher={IEEE}
}
```

## Results

### KITTI
4 changes: 3 additions & 1 deletion configs/second/README.md
@@ -5,6 +5,7 @@
[ALGORITHM]

We implement SECOND and provide the results and checkpoints on KITTI dataset.

```
@article{yan2018second,
title={Second: Sparsely embedded convolutional detection},
@@ -13,11 +14,12 @@ We implement SECOND and provide the results and checkpoints on KITTI dataset.
year={2018},
publisher={Multidisciplinary Digital Publishing Institute}
}
```

## Results

### KITTI

| Backbone |Class| Lr schd | Mem (GB) | Inf time (fps) | mAP |Download |
| :---------: | :-----: | :------: | :------------: | :----: |:----: | :------: |
| [SECFPN](./hv_second_secfpn_6x8_80e_kitti-3d-car.py)| Car |cyclic 80e|5.4||79.07|[model](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/second/hv_second_secfpn_6x8_80e_kitti-3d-car/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth) &#124; [log](https://download.openmmlab.com/mmdetection3d/v0.1.0_models/second/hv_second_secfpn_6x8_80e_kitti-3d-car/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238.log.json)|
1 change: 0 additions & 1 deletion configs/ssn/README.md
@@ -13,7 +13,6 @@ We implement PointPillars with Shape-aware grouping heads used in the SSN and provide its results and checkpoints.
booktitle={Proceedings of the European Conference on Computer Vision},
year={2020}
}
```

## Results
4 changes: 4 additions & 0 deletions data/scannet/README.md
@@ -1,23 +1,27 @@
### Prepare ScanNet Data for Indoor Detection or Segmentation Task

We follow the procedure in [votenet](https://github.com/facebookresearch/votenet/).

1. Download ScanNet v2 data [HERE](https://github.com/ScanNet/ScanNet). Link or move the 'scans' folder to this level of directory. If you are performing segmentation tasks and want to upload the results to its official [benchmark](http://kaldir.vc.in.tum.de/scannet_benchmark/), please also link or move the 'scans_test' folder to this directory.

2. In this directory, extract point clouds and annotations by running `python batch_load_scannet_data.py`. Add the `--max_num_point 50000` flag if you only use the ScanNet data for the detection task. It will downsample the scenes to fewer points.

3. Enter the project root directory and generate training data by running

```bash
python tools/create_data.py scannet --root-path ./data/scannet --out-dir ./data/scannet --extra-tag scannet
```

The overall process can be run with the following script

```bash
python batch_load_scannet_data.py
cd ../..
python tools/create_data.py scannet --root-path ./data/scannet --out-dir ./data/scannet --extra-tag scannet
```

The directory structure after pre-processing should be as below

```
scannet
├── scannet_utils.py
5 changes: 5 additions & 0 deletions data/sunrgbd/README.md
@@ -1,16 +1,19 @@
### Prepare SUN RGB-D Data

We follow the procedure in [votenet](https://github.com/facebookresearch/votenet/).

1. Download SUNRGBD v2 data [HERE](http://rgbd.cs.princeton.edu/data/). Then, move SUNRGBD.zip, SUNRGBDMeta2DBB_v2.mat, SUNRGBDMeta3DBB_v2.mat and SUNRGBDtoolbox.zip to the OFFICIAL_SUNRGBD folder and unzip the zip files.

2. Enter the `matlab` folder and extract point clouds and annotations by running `extract_split.m`, `extract_rgbd_data_v2.m` and `extract_rgbd_data_v1.m`.

3. Enter the project root directory and generate training data by running

```bash
python tools/create_data.py sunrgbd --root-path ./data/sunrgbd --out-dir ./data/sunrgbd --extra-tag sunrgbd
```

The overall process can be run with the following script

```bash
cd matlab
matlab -nosplash -nodesktop -r 'extract_split;quit;'
@@ -23,11 +26,13 @@ python tools/create_data.py sunrgbd --root-path ./data/sunrgbd --out-dir ./data
NOTE: SUNRGBDtoolbox.zip should have MD5 hash `18d22e1761d36352f37232cba102f91f` (you can check the hash with `md5 SUNRGBDtoolbox.zip` on Mac OS or `md5sum SUNRGBDtoolbox.zip` on Linux)
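The hash check above can also be done cross-platform with Python's standard `hashlib`; a minimal sketch (the helper name is ours, not from the repo):

```python
import hashlib

def md5_of(path: str) -> str:
    """Return the hex MD5 digest of a file, read in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# md5_of("SUNRGBDtoolbox.zip") should return
# "18d22e1761d36352f37232cba102f91f" per the note above.
```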

NOTE: If you would like to play around with [ImVoteNet](../../configs/imvotenet/README.md), the image data (`./data/sunrgbd/sunrgbd_trainval/image`) are required. If you pre-processed the data before mmdet3d version 0.12.0, please pre-process the data again due to updates in the data pre-processing:

```bash
python tools/create_data.py sunrgbd --root-path ./data/sunrgbd --out-dir ./data/sunrgbd --extra-tag sunrgbd
```

The directory structure after pre-processing should be as below

```
sunrgbd
├── README.md
1 change: 1 addition & 0 deletions docs/1_exist_data_model.md
@@ -127,6 +127,7 @@ All outputs (log files and checkpoints) will be saved to the working directory,
which is specified by `work_dir` in the config file.

By default we evaluate the model on the validation set after each epoch; you can change the evaluation interval by adding the `interval` argument in the training config.

```python
evaluation = dict(interval=12)  # This evaluates the model every 12 epochs.
```
3 changes: 2 additions & 1 deletion docs/getting_started.md
@@ -64,7 +64,7 @@ you can use more CUDA versions such as 9.0.

`e.g.` The pre-built *mmcv-full* can be installed by running: (available versions can be found [here](https://mmcv.readthedocs.io/en/latest/#install-with-pip))

```shell
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html
```

@@ -253,6 +253,7 @@ More demos about single/multi-modality and indoor/outdoor 3D detection can be found
## High-level APIs for testing point clouds

### Synchronous interface

Here is an example of building the model and testing given point clouds.

```python
