
[Features] Add support for Kitti semantic segmentation dataset #1602

Open
wants to merge 14 commits into master

Conversation

@AkideLiu (Author) commented May 20, 2022

Thanks for your contribution, and we appreciate it a lot. The following instructions will make your pull request healthier and easier to get feedback on. If you do not understand some items, don't worry; just make the pull request and seek help from the maintainers.

Motivation

Please describe the motivation of this PR and the goal you want to achieve through this PR.

The KITTI semantic segmentation dataset is a lightweight dataset for semantic segmentation that shares the same label policy as Cityscapes. It is an excellent starting point for segmentation and can employ weights pre-trained on Cityscapes for transfer learning. Would you consider supporting this dataset?

http://www.cvlibs.net/datasets/kitti/eval_semseg.php?benchmark=semantics2015

Modification

Please briefly describe what modification is made in this PR.

  1. add a conversion tool to reorganize the dataset format
  2. add a customized dataset class in mmseg (see the sketch below)
  3. add a config for the KITTI dataset
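
For reference, here is a minimal sketch of what the customized dataset class could look like, assuming it mirrors mmseg's existing CityscapesDataset pattern; the class name, file suffixes and registry usage below are illustrative and not necessarily the exact code in this PR.

# Hypothetical sketch of mmseg/datasets/kitti.py, modeled on CityscapesDataset.
# KITTI semantic segmentation reuses the Cityscapes label policy, so the class
# mainly needs to point at the KITTI file suffixes.
from .builder import DATASETS
from .cityscapes import CityscapesDataset


@DATASETS.register_module()
class KittiDataset(CityscapesDataset):
    """KITTI semantic segmentation dataset (Cityscapes label policy)."""

    def __init__(self, **kwargs):
        super(KittiDataset, self).__init__(
            img_suffix='.png',        # KITTI images are plain PNG files
            seg_map_suffix='.png',    # annotation maps are PNG as well
            **kwargs)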

BC-breaking (Optional)

Does the modification introduce changes that break the backward-compatibility of the downstream repos?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.

No BC-breaking changes.

Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.

Checklist

  1. Pre-commit or other linting tools are used to fix the potential lint issues.
  2. The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  3. If the modification has potential influence on downstream projects, this PR should be tested with downstream projects, like MMDet or MMDet3D.
  4. The documentation has been modified accordingly, e.g. docstrings or example tutorials.

@CLAassistant commented May 20, 2022

CLA assistant check
All committers have signed the CLA.

@AkideLiu changed the title from "Add dataset support for kitti seg" to "[Features] Add support for Kitti semantic segmentation dataset" on May 20, 2022
@AkideLiu (Author)

As discussed in issue #1599.

@MengzhangLI (Contributor)

Hi @AkideLiu, thanks for your nice PR. We will review it ASAP.

Please fix the lint error.

codecov bot commented May 20, 2022

Codecov Report

Attention: Patch coverage is 85.71429% with 1 line in your changes missing coverage. Please review.

Project coverage is 89.04%. Comparing base (0e37281) to head (1716a41).
Report is 74 commits behind head on master.

Files Patch % Lines
mmseg/datasets/kitti.py 83.33% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1602      +/-   ##
==========================================
- Coverage   89.04%   89.04%   -0.01%     
==========================================
  Files         144      145       +1     
  Lines        8636     8643       +7     
  Branches     1458     1459       +1     
==========================================
+ Hits         7690     7696       +6     
- Misses        706      707       +1     
  Partials      240      240              
Flag Coverage Δ
unittests 89.04% <85.71%> (-0.01%) ⬇️


@MengzhangLI (Contributor)

Do you have related baseline or sota results on KITTI semantic segmentation dataset?

@AkideLiu (Author) commented May 20, 2022

Hi @MengzhangLI, I do not have baseline or SOTA results on this dataset, because the methods used in some publications are not implemented in mmseg.

However, this dataset can be directly evaluated with pre-trained models or trained from scratch based on mmseg, and I have successfully performed training and evaluation with UNet on this dataset.

I could provide some example training configurations, but I do not have the resources to perform distributed training to obtain a pre-trained model.

For more info, see the baselines at: https://paperswithcode.com/sota/semantic-segmentation-on-kitti-semantic

@MengzhangLI (Contributor)

OK, would you mind updating your training results (using mmseg) and attaching results from other repos/papers in this PR? We could polish up this PR together by training some semantic segmentation models on our side (using our own 4x or 8x V100 GPUs).

@AkideLiu (Author)

Hi @MengzhangLI, I am planning to run around 1000 epochs on a single GPU for three well-known networks, UNet, DeepLabV3+ and PSPNet, as a baseline for this dataset. I will update my local test results once the experiments are finalized.

At this stage, I am attaching the UNet 1000-epoch results as follows:

+---------------+-------+-------+
|     Class     |  IoU  |  Acc  |
+---------------+-------+-------+
|      road     | 90.04 |  96.1 |
|    sidewalk   | 52.39 | 59.23 |
|    building   | 69.12 |  87.9 |
|      wall     | 33.87 | 42.57 |
|     fence     | 34.65 | 43.59 |
|      pole     | 51.69 | 64.28 |
| traffic light |  63.4 | 70.51 |
|  traffic sign |  42.8 | 45.97 |
|   vegetation  | 89.82 | 94.94 |
|    terrain    | 78.42 | 91.19 |
|      sky      | 95.64 | 98.02 |
|     person    |  7.94 |  9.48 |
|     rider     |  0.0  |  0.0  |
|      car      | 85.54 | 94.55 |
|     truck     |  9.93 | 28.45 |
|      bus      | 57.31 | 85.58 |
|     train     |  0.0  |  0.0  |
|   motorcycle  |  0.0  |  0.0  |
|    bicycle    |  2.94 |  3.29 |
+---------------+-------+-------+

+-------+-------+-------+
|  aAcc |  mIoU |  mAcc |
+-------+-------+-------+
| 90.25 | 45.55 | 53.46 |
+-------+-------+-------+

I will fix the lint issues and update the configs soon.

Could you please provide an email address where I can send the training logs?

Additionally, will you help modify the training config for multi-GPU setups?

@MengzhangLI (Contributor)

Thanks. Training logs can be dropped directly into your reply, like this:

(screenshot)

And I could train some models like DeepLabV3Plus, PSPNet and Swin Transformer using the MMSegmentation default settings. Their results should be better than yours (1-GPU UNet); let's keep in touch.

Best,

@AkideLiu (Author)

Looking forward to the training and evaluation results for different network architectures on your end.

The log of the previous UNet run is attached below.

Could you please help fix the lint issues?

57d9719b0dd1756f994bada9889f2149.txt

@AkideLiu (Author)

I do not quite understand why the lint check failed...

isort....................................................................Failed
- hook id: isort
- files were modified by this hook

Fixing /home/runner/work/mmsegmentation/mmsegmentation/mmseg/datasets/__init__.py

yapf.....................................................................Failed
- hook id: yapf
- files were modified by this hook
Trim Trailing Whitespace.................................................Passed
Check Yaml...............................................................Passed
Fix End of Files.........................................................Failed
- hook id: end-of-file-fixer
- exit code: 1
- files were modified by this hook

Fixing docs/en/dataset_prepare.md

@MengzhangLI (Contributor)

This seems to be caused by an unsuccessful installation of pre-commit. The error you showed is usually caused by local code that does not obey the coding rules defined by pre-commit.

Try to follow: https://github.com/open-mmlab/mmsegmentation/blob/master/.github/CONTRIBUTING.md

If your OS is Ubuntu/Linux, the installation should be easy.

After a successful installation, run the pre-commit run --all-files command, then use git add . to add the modified files.

@MeowZheng requested a review from xiexinch on June 6, 2022.

img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (864, 256)
Collaborator:

Hi @AkideLiu ,
Could you provide some references about the crop_size setting? This doesn't seem to be commonly used.

@AkideLiu (Author), Jun 6, 2022:

I didn't have a reference for this crop size because this dataset is not very commonly used ... I just randomly selected values that are multiples of 8. If you have any better suggestions, I am happy to change this crop size.

@AkideLiu (Author):

This project is ranked first on Papers with Code for this dataset, and it has some reference settings; the following commit modified this crop size: https://github.com/NVIDIA/semantic-segmentation/blob/7726b144c2cc0b8e09c67eabb78f027efdf3f0fa/train.py#L149-L150

Collaborator:

Hi @AkideLiu,
Thanks for your link; I took a quick look at this repo.
I found that the crop_size should be set to 360 according to https://github.com/NVIDIA/semantic-segmentation/blob/2f548ab30ab0d56e91de66a4dea4757a0c64e7e4/scripts/train_kitti_WideResNet38.sh#L16.
And in their paper, the crop_size on KITTI is set to 368; you may check Section 4.1, Implementation Details.

Besides, they test their model in slide mode with crop_size set to 368; see the test script.

@AkideLiu (Author), Jun 7, 2022:

Hi @xiexinch, I have changed the crop size to 368 and changed the test_cfg mode to slide.

The stride is calculated manually: stride = ceil(tile_size[0] * (1 - overlap)) -> ceil(368 * (1 - 1/3)) = 246.

overlap is not specified in the test script, therefore the default (1/3) has been used.

https://github.com/NVIDIA/semantic-segmentation/blob/2f548ab30ab0d56e91de66a4dea4757a0c64e7e4/eval.py#L50-L50

I am not sure about the implementation differences in sliding inference between mmseg and the NVIDIA project; if there are any remaining problems, please point them out.

https://github.com/NVIDIA/semantic-segmentation/blob/2f548ab30ab0d56e91de66a4dea4757a0c64e7e4/eval.py#L103

https://github.com/AkideLiu/mmsegmentation-Kitti/blob/0f7e95014b86d08e240e27f34349ab382aafe9a8/mmseg/models/segmentors/encoder_decoder.py#L155-L155
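
For context, here is a minimal sketch of how the slide-mode test setting described above could be expressed in a derived mmseg config, assuming the 368 crop size and the manually derived stride of 246 from this discussion; the exact values in this PR may differ.

# Sketch of the test_cfg override discussed above: 368x368 sliding windows
# with stride ceil(368 * (1 - 1/3)) = 246 pixels.
model = dict(
    test_cfg=dict(mode='slide', crop_size=(368, 368), stride=(246, 246)))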

docs/en/dataset_prepare.md (outdated, resolved)
mmseg/datasets/kitti_seg.py (outdated, resolved, 3 threads)

dict(type='LoadImageFromFile'),
dict(
    type='MultiScaleFlipAug',
    img_scale=(1232, 368),
Collaborator:

The img_scale may be set to the largest scale, for example:

img_scale=(2048, 1024), # Decides the largest scale for testing, used for the Resize pipeline
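
For illustration, a sketch of a Cityscapes-style test pipeline in which img_scale controls the largest scale used by Resize; the (1232, 368) value is taken from the diff above and is an assumption rather than the final value chosen in this PR.

# Sketch of an mmseg test pipeline; img_scale decides the largest scale
# for the Resize step during testing.
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1232, 368),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]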

configs/unet/fcn_unet_s5-d16_4x4_256x864_160k_kitti.py (outdated, resolved)
@AkideLiu (Author) commented Jun 6, 2022

@xiexinch a new commit responds to the review; if you have any suggestions, please let me know.

@xiexinch (Collaborator) left a comment

Hi @AkideLiu,
I'm searching for baselines on the KITTI dataset, since we'll do some experiments on it.
If you have any suggestions, feel free to let me know.

@@ -0,0 +1,54 @@
data_root = 'data/kitti-seg/'
dataset_type = 'KittiSegDataset'
Collaborator:

Suggested change:
- dataset_type = 'KittiSegDataset'
+ dataset_type = 'KittiDataset'

Collaborator:

You may rename it to kitti.py.

@AkideLiu (Author), Jun 8, 2022:

Hi @xiexinch, the name of the dataset includes 'seg' because the KITTI benchmark contains many different tasks, such as object detection and depth estimation; using 'kitti_seg' clarifies that this dataset is specifically for segmentation.

See more on the official website: http://www.cvlibs.net/datasets/kitti/

I am happy to change it if you think KITTI instead of KITTISEG is more reasonable in this project.

Collaborator:

As we will not add 'depth/flow estimation' or 'detection' tasks in mmseg, I think there will be no misunderstanding about 'KITTI'. Besides, both mmflow and mmdet3d use 'KITTI' instead of 'KITTI flow' or 'KITTI det3d', so 'KITTI' is easy to understand.

ref:
https://github.com/open-mmlab/mmflow/blob/master/mmflow/datasets/kitti2015.py
https://github.com/open-mmlab/mmdetection3d/blob/master/mmdet3d/datasets/kitti_dataset.py

@@ -0,0 +1,8 @@
_base_ = [
    '../_base_/models/deeplabv3plus_r50-d8.py',
    '../_base_/datasets/kitti_seg.py', '../_base_/default_runtime.py',
Collaborator:

Suggested change:
- '../_base_/datasets/kitti_seg.py', '../_base_/default_runtime.py',
+ '../_base_/datasets/kitti.py', '../_base_/default_runtime.py',

@@ -0,0 +1,11 @@
_base_ = [
    '../_base_/models/fcn_unet_s5-d16.py', '../_base_/datasets/kitti_seg.py',
Collaborator:

Suggested change:
- '../_base_/models/fcn_unet_s5-d16.py', '../_base_/datasets/kitti_seg.py',
+ '../_base_/models/fcn_unet_s5-d16.py', '../_base_/datasets/kitti.py',

@@ -0,0 +1,12 @@
# Copyright (c) OpenMMLab. All rights reserved.
Collaborator:

You may rename it to kitti.py.

@AkideLiu (Author) commented Jun 8, 2022

I am also working on this dataset to find an optimal solution, and one suggestion is to use transfer learning with weights pre-trained on Cityscapes.
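
Since KITTI semantic segmentation shares the 19 Cityscapes training classes, this kind of transfer learning can be done with a plain load_from override; a minimal sketch follows, where the base config name and checkpoint path are placeholders rather than files from this PR.

# Hypothetical fine-tuning config: start the KITTI run from a Cityscapes-
# pretrained checkpoint. No decode-head changes are needed because KITTI
# shares the 19 Cityscapes training classes.
_base_ = './deeplabv3plus_r50-d8_368x368_80k_kitti.py'  # placeholder KITTI config
load_from = 'checkpoints/deeplabv3plus_r50-d8_cityscapes.pth'  # placeholder checkpoint path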

@xiexinch (Collaborator) commented Jun 8, 2022

Hi @AkideLiu ,
We need some publication results as the baseline for this dataset. If you find some published papers, please do not hesitate to contact us.

@xiexinch (Collaborator)

Hi @AkideLiu,
I'd like to ask how your training ran. Doesn't the annotation of the dataset need to be converted to trainLabelIds first?

@AkideLiu (Author) commented Jun 14, 2022

Hi @xiexinch, the conversion of the label ids is not required for this dataset. The official format can easily be adapted to the Cityscapes label policy, and the rest of the dataset configuration is identical to Cityscapes. For a quick start: download the zipped data from the official website linked in the PR description, unzip it, run the directory-structure conversion script provided in this PR, and modify the local directory to match the configuration files. After that, you are free to start training.
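
To make the quick start concrete, here is a rough sketch of the kind of reorganization such a script performs, assuming the official data_semantics layout (training/image_2 and training/semantic); the destination folder names and split ratio are illustrative and not the exact behavior of the script in this PR.

# Hypothetical illustration of the directory reorganization: split the
# annotated KITTI training images into train/val splits laid out the way a
# Cityscapes-style dataset config expects.
import os
import shutil

src_root = 'data_semantics/training'  # unzipped official KITTI semantics data
dst_root = 'data/kitti-seg'           # layout referenced by the configs
val_ratio = 0.2                       # hold out 20% of the images for validation

images = sorted(os.listdir(os.path.join(src_root, 'image_2')))
num_val = int(len(images) * val_ratio)

for idx, name in enumerate(images):
    split = 'val' if idx < num_val else 'train'
    for sub, folder in [('image_2', 'images'), ('semantic', 'annotations')]:
        dst_dir = os.path.join(dst_root, folder, split)
        os.makedirs(dst_dir, exist_ok=True)
        shutil.copy(os.path.join(src_root, sub, name),
                    os.path.join(dst_dir, name))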

@AkideLiu (Author)

Hi @MengzhangLI @AkideLiu How about using these results as the baseline? Ref MSeg: A Composite Dataset for Multi-domain Semantic Segmentation (CVPR 2020)

I have briefly gone through this paper; it is quite a good baseline, as it has a clear performance report and the implementation is open-sourced for reference.

@xiexinch (Collaborator) commented Jun 14, 2022

I know what you mean, but your conversion script just splits the dataset. I tried to start training, but the official annotations cannot be used directly for training unless I convert the label ids to train ids.

@AkideLiu (Author)

I do not fully understand this problem; would you explain more about this case?

@xiexinch (Collaborator) commented Jun 14, 2022

The image annotations provided by Cityscapes and KITTI are annotated with label ids. For training, we must convert the label ids to train ids. You can read the code from cityscapesscripts:

https://github.com/mcordts/cityscapesScripts/blob/aeb7b82531f86185ce287705be28f452ba3ddbb8/cityscapesscripts/helpers/labels.py#L64

I'd like to know how your training ran if you didn't do this conversion.
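
For reference, a minimal sketch of such a conversion using the cityscapesscripts label table linked above; the function name and file handling are illustrative, and the actual relabeling solution adopted in this PR may differ.

# Sketch: remap a KITTI/Cityscapes annotation map from label ids to train ids
# using the official cityscapesscripts label table. Unlabeled and ignored
# classes are mapped to 255, mmseg's default ignore index.
import numpy as np
from PIL import Image
from cityscapesscripts.helpers.labels import id2label


def convert_label_id_to_train_id(ann_path, out_path):
    label_ids = np.array(Image.open(ann_path))
    train_ids = np.full_like(label_ids, 255)
    for label_id, label in id2label.items():
        train_id = label.trainId if 0 <= label.trainId < 255 else 255
        train_ids[label_ids == label_id] = train_id
    Image.fromarray(train_ids).save(out_path)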

@AkideLiu (Author)

Hi @xiexinch, I will try to reproduce the training in a fresh environment and provide an update soon.

@AkideLiu (Author) commented Jul 3, 2022

Apologies for the delay in progress; previously I was taking part in a competition that was highly similar to (a subset of) this dataset.
Here are some experiments based on mmseg: https://github.com/UAws/CV-3315-Is-All-You-Need
At this stage, we have completed the competition and will focus on this PR to resolve the unfinished parts.

@AkideLiu (Author)

@xiexinch a solution for converting labels has been provided; could you review this PR?

reference: https://github.com/navganti/kitti_scripts/blob/master/semantics/devkit/kitti_relabel.py

@xiexinch (Collaborator)

Thanks for updating this PR; we'll review it ASAP. :)
