nout-kleef/mmdetection3d

Fork instructions

This fork contains the code used to produce the results in my undergraduate dissertation: Point-based 3D Object Detection for Autonomous Vehicles: On the Performance of LiDAR and 4D mmWave Radar.

Dataset conversion

Important: the following converted datasets already exist [1]:

  • LiDAR, w/ intensity: /mnt/12T/nout/V3/inhouse_unfiltered/kitti_format/
  • LiDAR, w/o intensity: /mnt/12T/nout/inhouse_filtered/kitti_format/
  • radar, xyz: /mnt/12T/nout/V3/inhouse_unfiltered_radar/kitti_format/
  • radar, xy: /mnt/12T/nout/V3/inhouse_unfiltered_radar_bev/kitti_format/

Converting SAIC Motor data into KITTI format consists of two steps, where A, B, and C are paths to directories (a shell sketch follows the notes below).

  1. Raw data --> SAIC Motor format: python preprocess/preprocess.py A B --skip_label_filter. See scripts/preprocess.sh for an example.
  2. SAIC Motor format --> KITTI format: python tools/create_data.py inhouse --root-path B --out-dir C --workers 48 --extra-tag inhouse. See scripts/create_data.sh for an example.

NB1: step 1 also creates a train/val/test split in B/kitti_format/ImageSets. Hence, step 2 will only succeed if a) B == C OR b) the split files are copied to C/kitti_format/ImageSets.

NB2: step 2 produces ground-truth samples from either LiDAR or radar data. In order to use radar data for these samples, --version radar must be added (LiDAR is the default).
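
Putting the two steps together, a minimal shell sketch could look as follows. All paths are placeholders rather than the locations used for the dissertation, and B == C here, so the split files in kitti_format/ImageSets do not need to be copied.

    # Sketch of the two-step conversion; all paths below are placeholders.
    RAW=/data/saic_raw           # A: raw data
    SAIC=/data/saic_converted    # B: SAIC Motor format (receives kitti_format/ImageSets)
    KITTI=/data/saic_converted   # C: KITTI format output; B == C, so no split copy is needed

    # Step 1: raw data -> SAIC Motor format (also writes the train/val/test split)
    python preprocess/preprocess.py "$RAW" "$SAIC" --skip_label_filter

    # Step 2: SAIC Motor format -> KITTI format; add --version radar to build
    # ground-truth samples from radar instead of LiDAR (the default)
    python tools/create_data.py inhouse --root-path "$SAIC" --out-dir "$KITTI" \
        --workers 48 --extra-tag inhouse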

Training

SAIC Motor models can be trained with python tools/train.py X --work-dir Y, where X is the config file and Y is a new directory in which to store checkpoints and log files. See scripts/train_lidar.sh for an example, and the sketch after the list below.

For each of the experiments listed above, a config file exists:

  • LiDAR, w/ intensity: archive/gpu1/intensity/inhouse_lidar.py
  • LiDAR, w/o intensity: archive/V2/lidar/inhouse_lidar.py
  • radar, xyz: archive/V3/radar_unfiltered/inhouse_radar_uf.py
  • radar, xy: archive/V3/radar_unfiltered_bev/inhouse_radar_bev.py
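
For instance, a training run for the "LiDAR, w/o intensity" experiment could be launched as below; the work directory name is a placeholder of my choosing, not one used in the dissertation.

    # Sketch: train the "LiDAR, w/o intensity" model (the work-dir name is a placeholder).
    python tools/train.py archive/V2/lidar/inhouse_lidar.py \
        --work-dir work_dirs/inhouse_lidar_wo_intensity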

Evaluation

Important: quantitative results in the report come from evaluating the models on the test set, while the qualitative results are scenes from the validation set. For this reason, the config files must be altered slightly when switching between these two modes:

For config file X (see above):

  1. Search X for test=dict, which gives one match around lines 340-350.
  2. Three lines below, replace the path for ann_file: use inhouse_infos_val.pkl for qualitative analysis, or inhouse_infos_test.pkl (the default) otherwise. A sketch for locating this line follows.
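
A quick way to locate the line to edit (the config path below is just one of the four listed above):

    # Locate the annotation-file setting inside the test=dict(...) block of config X.
    grep -n "ann_file" archive/V2/lidar/inhouse_lidar.py
    # Edit the match that sits inside test=dict(...):
    #   quantitative (default): ... 'inhouse_infos_test.pkl'
    #   qualitative:            ... 'inhouse_infos_val.pkl'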

Quantitative

Run the test script, where X is the config file and Y is a checkpoint file (use epoch_120.pth from the corresponding work directory): python tools/test.py X Y --eval inhouse.
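
Continuing the earlier training sketch, a quantitative evaluation of the "LiDAR, w/o intensity" model might look like this; the checkpoint path assumes the placeholder work directory used above.

    # Sketch: quantitative evaluation on the test set.
    python tools/test.py archive/V2/lidar/inhouse_lidar.py \
        work_dirs/inhouse_lidar_wo_intensity/epoch_120.pth \
        --eval inhouse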

Qualitative

After making the change described above, run the test script with visualisation enabled, where Z is the output directory for visualisation files: python tools/test.py X Y --eval inhouse --eval-options show=True out_dir=Z. The qualitative analyses shown in the report were produced using the "LiDAR, w/o intensity" and "radar, xyz" models, on scenes '1643180730900', '1643183307400', '1643182519400' and '1643180515600'.
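
For example (checkpoint and output paths are placeholders):

    # Sketch: qualitative evaluation with visualisation output (requires a display; see NB3).
    python tools/test.py archive/V2/lidar/inhouse_lidar.py \
        work_dirs/inhouse_lidar_wo_intensity/epoch_120.pth \
        --eval inhouse \
        --eval-options show=True out_dir=vis_lidar_wo_intensity/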

NB3: show=True will not work when run headless. I used TeamViewer for this.

Plots

The plots shown in the report were produced using scripts/produce_plots.py.
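
A minimal invocation sketch; whether the script expects arguments (for example, paths to evaluation outputs) is an assumption to check in the script itself.

    # Sketch only: regenerate the report plots; check scripts/produce_plots.py for any required arguments.
    python scripts/produce_plots.py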

Original MMDetection3D instructions

 

News: We released the codebase v0.18.0.

In addition, we have preliminarily supported several new models on the v1.0.0.dev0 branch, including DGCNN, SMOKE and PGD.

Note: We are going through a large refactoring to provide simpler and more unified usage of many modules. As a result, only a few features will be added to the master branch in the coming months.

Model compatibility is broken by the unification and simplification of coordinate systems. For now, most models have been re-benchmarked with similar performance, though a few are still being benchmarked.

You can start experiments with v1.0.0.dev0 if you are interested. Please note that new features will only be supported on the v1.0.0 branch afterwards.

In the nuScenes 3D detection challenge of the 5th AI Driving Olympics at NeurIPS 2020, we obtained the best PKL award, the second runner-up with a multi-modality entry, and the best vision-only results.

Code and models for the best vision-only method, FCOS3D, have been released. Please stay tuned for MoCa.

Documentation: https://mmdetection3d.readthedocs.io/

Introduction

English | 简体中文

The master branch works with PyTorch 1.3+.

MMDetection3D is an open source object detection toolbox based on PyTorch, towards the next-generation platform for general 3D detection. It is a part of the OpenMMLab project developed by MMLab.

demo image

Major features

  • Support multi-modality/single-modality detectors out of box

    It directly supports multi-modality/single-modality detectors including MVXNet, VoteNet, PointPillars, etc.

  • Support indoor/outdoor 3D detection out of box

    It directly supports popular indoor and outdoor 3D detection datasets, including ScanNet, SUNRGB-D, Waymo, nuScenes, Lyft, and KITTI. For nuScenes dataset, we also support nuImages dataset.

  • Natural integration with 2D detection

    All of the roughly 300+ models and methods from 40+ papers, as well as the modules supported in MMDetection, can be trained or used in this codebase.

  • High efficiency

    It trains faster than other codebases. The main results are as below. Details can be found in benchmark.md. We compare the number of samples trained per second (the higher, the better). The models that are not supported by other codebases are marked by ×.

    Methods              MMDetection3D  OpenPCDet  votenet  Det3D
    VoteNet                        358          ×       77      ×
    PointPillars-car               141          ×        ×    140
    PointPillars-3class            107         44        ×      ×
    SECOND                          40         30        ×      ×
    Part-A2                         17         14        ×      ×

Like MMDetection and MMCV, MMDetection3D can also be used as a library to support different projects on top of it.

License

This project is released under the Apache 2.0 license.

Changelog

v0.18.0 was released on 1/1/2022. Please refer to changelog.md for details and release history.

For branch v1.0.0.dev0, please refer to changelog_v1.0.md for our latest features and more details.

Benchmark and model zoo

Supported methods and backbones are listed below. Results and models are available in the model zoo.

Supported backbones:

  • PointNet (CVPR'2017)
  • PointNet++ (NeurIPS'2017)
  • RegNet (CVPR'2020)

Supported methods

Backbone support across ResNet, ResNeXt, SENet, PointNet++, HRNet, RegNetX and Res2Net varies per method; see the model zoo for details. The supported methods are:

  • SECOND
  • PointPillars
  • FreeAnchor
  • VoteNet
  • H3DNet
  • 3DSSD
  • Part-A2
  • MVXNet
  • CenterPoint
  • SSN
  • ImVoteNet
  • FCOS3D
  • PointNet++
  • Group-Free-3D
  • ImVoxelNet
  • PAConv

Other features

Note: All of the roughly 300+ models and methods from 40+ papers in 2D detection supported by MMDetection can be trained or used in this codebase.

Installation

Please refer to getting_started.md for installation.
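
As a rough sketch, installation follows the usual OpenMMLab pattern below; the exact mmcv-full, mmdet and mmsegmentation versions required by v0.18.0 (and the matching CUDA/PyTorch builds) are listed in getting_started.md, so treat these commands as a starting point rather than a pinned recipe.

    # Installation sketch with unpinned versions; consult getting_started.md for
    # the exact mmcv-full / mmdet / mmsegmentation versions matching this codebase.
    pip install mmcv-full mmdet mmsegmentation
    git clone https://github.com/nout-kleef/mmdetection3d.git
    cd mmdetection3d
    pip install -v -e .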

Get Started

Please see getting_started.md for the basic usage of MMDetection3D. We provide beginner guides for a quick run with an existing dataset and with a customized dataset. There are also tutorials on learning the configuration system, adding a new dataset, designing a data pipeline, customizing models, customizing runtime settings, and the Waymo dataset.

Please refer to FAQ for frequently asked questions. When updating the version of MMDetection3D, please also check the compatibility doc to be aware of the BC-breaking updates introduced in each version.

Citation

If you find this project useful in your research, please consider citing:

@misc{mmdet3d2020,
    title={{MMDetection3D: OpenMMLab} next-generation platform for general {3D} object detection},
    author={MMDetection3D Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmdetection3d}},
    year={2020}
}

Contributing

We appreciate all contributions to improve MMDetection3D. Please refer to CONTRIBUTING.md for the contributing guideline.

Acknowledgement

MMDetection3D is an open source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors as well as the users who give valuable feedback. We hope that the toolbox and benchmark can serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new 3D detectors.

Projects in OpenMMLab

  • MMCV: OpenMMLab foundational library for computer vision.
  • MIM: MIM Installs OpenMMLab Packages.
  • MMClassification: OpenMMLab image classification toolbox and benchmark.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMDetection3D: OpenMMLab next-generation platform for general 3D object detection.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
  • MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
  • MMTracking: OpenMMLab video perception toolbox and benchmark.
  • MMPose: OpenMMLab pose estimation toolbox and benchmark.
  • MMEditing: OpenMMLab image and video editing toolbox.
  • MMOCR: OpenMMLab text detection, recognition and understanding toolbox.
  • MMGeneration: OpenMMLab image and video generative models toolbox.
  • MMFlow: OpenMMLab optical flow toolbox and benchmark.
  • MMFewShot: OpenMMLab fewshot learning toolbox and benchmark.
  • MMHuman3D: OpenMMLab 3D human parametric model toolbox and benchmark.
  • MMSelfSup: OpenMMLab self-supervised learning toolbox and benchmark.
  • MMRazor: OpenMMLab Model Compression Toolbox and Benchmark.
  • MMDeploy: OpenMMLab Model Deployment Framework.
