[Feature] Support DDRNet #1722
Conversation
Hi, thanks for your nice PR. We have planned to support DDRNet and it is very helpful for this repo. First, could you please verify that these models run normally with the checkpoints provided by the DDRNet repo? Thanks in advance!
samples_per_gpu=8,
workers_per_gpu=8,

Suggested change:

samples_per_gpu=4,
workers_per_gpu=4,
To keep consistency with the former Cityscapes dataset config, we may set the default to 4 GPUs with 4 samples per GPU.
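A minimal sketch of how the suggested dataloader settings might look in the config (field names follow the mmseg 0.x convention; the dataset entries are placeholders, not the PR's actual config):

```python
# Hypothetical excerpt of a Cityscapes config in the mmseg 0.x style.
# Only the dataloader fields under discussion are shown; the dataset
# dicts are illustrative placeholders.
data = dict(
    samples_per_gpu=4,   # batch size per GPU; 4 GPUs x 4 = global batch 16
    workers_per_gpu=4,   # dataloader worker processes per GPU
    train=dict(type='CityscapesDataset', data_root='data/cityscapes'),
    val=dict(type='CityscapesDataset', data_root='data/cityscapes'),
)
```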
Sorry, I just evaluated the models on the Cityscapes val set. DDRNet-23-slim gains 77.37% mIoU and DDRNet-23 gains 78.66% mIoU. Compared with the val results (77.8%, 79.5%) from the DDRNet repo, I think these results are acceptable, because some basic training tricks such as OHEM are not used. DDRNet-39 gains 79.46% mIoU on the Cityscapes val set.
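For reference, the gaps between the reproduced numbers above and the official val results can be tallied directly (a quick check on the figures quoted in the comment; DDRNet-39 has no official number to compare here):

```python
# Reproduced mIoU (this PR) vs. official DDRNet-repo val results,
# as quoted in the comment above.
results = {
    'DDRNet-23-slim': (77.37, 77.8),
    'DDRNet-23':      (78.66, 79.5),
}
for name, (reproduced, official) in results.items():
    gap = round(official - reproduced, 2)
    print(f'{name}: {gap:.2f} mIoU below official')
```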
Indeed a great job! You successfully transferred pretrained weights and launched DDRNet training in mmseg. ;)
Codecov Report

@@            Coverage Diff             @@
##           master    #1722      +/-   ##
==========================================
- Coverage   90.24%   88.86%    -1.39%
==========================================
  Files         142      143        +1
  Lines        8486     8654      +168
  Branches     1432     1453       +21
==========================================
+ Hits         7658     7690       +32
- Misses        588      724      +136
  Partials      240      240
Hi, @qyyyyq Refactoring the code is necessary, and we may leave more comments in the next few days. You can refer to my re-implementation of STDC as an example. Original: https://github.com/MichaelFan01/STDC-Seg/blob/master/nets/stdcnet.py Best,
Hi, @qyyyyq
What is the status of this PR?
Hi @qyyyyq! We are grateful for your efforts in helping improve the mmsegmentation open-source project during your personal time. Welcome to join the OpenMMLab Special Interest Group (SIG) private channel on Discord, where you can share your experiences and ideas, and build connections with like-minded peers. To join the SIG channel, simply message the moderator, OpenMMLab, on Discord, or briefly share your open-source contributions in the #introductions channel and we will assist you. We look forward to seeing you there! Join us: https://discord.gg/UjgXkPWNqA Thank you again for your contribution ❤
Thanks for your contribution and we appreciate it a lot. The following instructions will make your pull request healthier and help it get feedback more easily. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.

## Motivation

Support DDRNet

Paper: [Deep Dual-resolution Networks for Real-time and Accurate Semantic Segmentation of Road Scenes](https://arxiv.org/pdf/2101.06085)

Official code: https://github.com/ydhongHIT/DDRNet

There is already a PR #1722, but it has been inactive for a long time.

## Current Result

### Cityscapes

#### Inference with converted official weights

| Method | Backbone      | mIoU (official) | mIoU (converted weight) |
| ------ | ------------- | --------------- | ----------------------- |
| DDRNet | DDRNet23-slim | 77.8            | 77.84                   |
| DDRNet | DDRNet23      | 79.5            | 79.53                   |

#### Training with converted pretrained backbone

| Method | Backbone      | Crop Size | Lr schd | Inf time (fps) | Device   | mIoU  | mIoU (ms+flip) | config | download |
| ------ | ------------- | --------- | ------- | -------------- | -------- | ----- | -------------- | ------ | -------- |
| DDRNet | DDRNet23-slim | 1024x1024 | 120000  | 85.85          | RTX 8000 | 77.85 | 79.80          | [config](https://github.com/whu-pzhang/mmsegmentation/blob/ddrnet/configs/ddrnet/ddrnet_23-slim_in1k-pre_2xb6-120k_cityscapes-1024x1024.py) | model \| log |
| DDRNet | DDRNet23      | 1024x1024 | 120000  | 33.41          | RTX 8000 | 79.53 | 80.98          | [config](https://github.com/whu-pzhang/mmsegmentation/blob/ddrnet/configs/ddrnet/ddrnet_23_in1k-pre_2xb6-120k_cityscapes-1024x1024.py) | model \| log |

The converted pretrained backbone weights can be downloaded from:

1. [ddrnet23s_in1k_mmseg.pth](https://drive.google.com/file/d/1Ni4F1PMGGjuld-1S9fzDTmneLfpMuPTG/view?usp=sharing)
2. [ddrnet23_in1k_mmseg.pth](https://drive.google.com/file/d/11rsijC1xOWB6B0LgNQkAG-W6e1OdbCyJ/view?usp=sharing)

## To do

- [x] support inference with converted official weights
- [x] support training on cityscapes dataset

---------

Co-authored-by: xiexinch <xiexinch@outlook.com>
Hi. I'm a beginner and I want to try to submit an implementation of DDRNet
Motivation
Support DDRNet
Paper: Deep Dual-resolution Networks for Real-time and Accurate Semantic Segmentation of Road Scenes
official Code: https://github.com/ydhongHIT/DDRNet
Modification
Added backbone code.
Added config file for cityscapes.
Added tools for converting pretrained models.
Note
1. The training tricks in the paper are not used; only the basic configuration of mmsegmentation is used.
2. Pretrained models (ImageNet), including the official pretrained weights and the converted pretrained weights, can be downloaded from https://drive.google.com/drive/folders/1By_LJCoZGN98i-JsP8xv22VnhIvZnm3w
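The conversion tool mentioned under Modification is not shown in this thread; such converters usually amount to renaming official checkpoint keys to the mmseg naming scheme. A minimal sketch of that key-renaming pass follows — note that the prefix pairs here are illustrative assumptions, not DDRNet's actual layer names:

```python
# Hypothetical key-renaming pass for converting an official checkpoint's
# state dict to mmseg-style names. The prefix pairs below are examples
# only; a real converter would enumerate DDRNet's actual layer names.
PREFIX_MAP = [
    ('conv1.', 'stem.'),        # assumed: official stem -> mmseg stem
    ('final_layer.', 'head.'),  # assumed: official head -> decode head
]

def convert_state_dict(state_dict):
    """Return a copy of state_dict with mapped key prefixes renamed."""
    converted = {}
    for key, value in state_dict.items():
        new_key = key
        for old, new in PREFIX_MAP:
            if new_key.startswith(old):
                new_key = new + new_key[len(old):]
                break
        converted[new_key] = value
    return converted
```

In practice the converted dict would then be re-saved (e.g. with `torch.save({'state_dict': converted}, out_path)`) so the training config can load it as a pretrained backbone.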