
[Fix] bug fix for old MMCV format for mobilenetv3 configs file (changed to MMEngine format) #3674

Closed
wants to merge 152 commits into from

Conversation


@tackhwa tackhwa commented May 20, 2024

The config files for MobileNetV3 still use the old MMCV-style `runner` setting. According to the linked migration guide, `runner` has been replaced by `train_cfg` in the MMEngine format. With the old `runner` setting, even though the iteration count is set to 320000, `train_cfg` is initialized with 160000 iterations, which does not match the training setting described at that link. Training and inference have been tested with the modified configs.
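For reference, a minimal sketch of the change (field names follow the MMEngine migration guide; the interval values below are illustrative, not copied from the actual configs):

```python
# Old MMCV-style schedule (what the mobilenetv3 configs still used):
# runner = dict(type='IterBasedRunner', max_iters=320000)
# checkpoint_config = dict(by_epoch=False, interval=32000)

# MMEngine-style replacement:
train_cfg = dict(type='IterBasedTrainLoop', max_iters=320000, val_interval=32000)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')
default_hooks = dict(
    checkpoint=dict(type='CheckpointHook', by_epoch=False, interval=32000))
```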

  • Before: (screenshot)
  • After (modified configs): (screenshot)
  • mobilenet-v3-d8_lraspp_4xb4-320k_cityscapes-512x1024.py (training): (screenshot)
  • mobilenet-v3-d8-scratch_lraspp_4xb4-320k_cityscapes-512x1024.py (training): (screenshot)
  • mobilenet-v3-d8-s_lraspp_4xb4-320k_cityscapes-512x1024.py (training): (screenshot)
  • mobilenet-v3-d8-scratch-s_lraspp_4xb4-320k_cityscapes-512x1024.py (training): (screenshot)
  • mobilenet-v3-d8_lraspp_4xb4-320k_cityscapes-512x1024.py (inferencing): (screenshot)
  • mobilenet-v3-d8-scratch_lraspp_4xb4-320k_cityscapes-512x1024.py (inferencing): (screenshot)
  • mobilenet-v3-d8-s_lraspp_4xb4-320k_cityscapes-512x1024.py (inferencing): (screenshot)
  • mobilenet-v3-d8-scratch-s_lraspp_4xb4-320k_cityscapes-512x1024.py (inferencing): (screenshot)

csatsurnh and others added 30 commits April 3, 2023 11:11
…lab#2829)

## Motivation

If the module does not actually exist, setting locations will report an
error.

open-mmlab/mmengine#1010

## Modification

mmseg/registry/registry.py
## Motivation

As the title says, lead users to follow our migration document.

## Checklist

- [x] open-mmlab#2801
## Motivation

fix squeeze error when N=1 and C=1

## Modification

fix squeeze error when N=1 and C=1
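A minimal illustration of this class of bug (not the exact mmseg code path): a bare `squeeze()` on an `N=1, C=1` tensor drops both dimensions, whereas squeezing an explicit dim does not.

```python
import torch

x = torch.rand(1, 1, 8, 8)   # N=1, C=1, H=8, W=8
print(x.squeeze().shape)     # torch.Size([8, 8])    -- batch and channel both dropped
print(x.squeeze(1).shape)    # torch.Size([1, 8, 8]) -- only the intended dim removed
```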
## Motivation

Update repo information and URLs in README.

## Modification


## BC-breaking (Optional)


## Use cases (Optional)

---------

Co-authored-by: CSH <40987381+csatsurnh@users.noreply.github.com>
…lab#2951)

Motivation

Typo in docs/en/user_guides/visualization_feature_map.md.

Modification

reature -> feature

Checklist

- [x] Pre-commit or other linting tools are used to fix the potential
lint issues.
- [x] The modification is covered by complete unit tests. If not, please
add more unit test to ensure the correctness.
- [x] If the modification has potential influence on downstream
projects, this PR should be tested with downstream projects, like MMDet
or MMDet3D.
- [x] The documentation has been modified accordingly, like docstring or
example tutorials.

## Motivation

Support DDRNet
Paper: [Deep Dual-resolution Networks for Real-time and Accurate
Semantic Segmentation of Road Scenes](https://arxiv.org/pdf/2101.06085)
official Code: https://github.com/ydhongHIT/DDRNet


There is already a PR
open-mmlab#1722 , but it has been
inactive for a long time.

## Current Result

### Cityscapes

#### inference with converted official weights

| Method | Backbone      | mIoU(official) | mIoU(converted weight) |
| ------ | ------------- | -------------- | ---------------------- |
| DDRNet | DDRNet23-slim | 77.8           | 77.84                  |
| DDRNet | DDRNet23 | 79.5 | 79.53 |

#### training with converted pretrained backbone

| Method | Backbone | Crop Size | Lr schd | Inf time(fps) | Device | mIoU | mIoU(ms+flip) | config | download |
| ------ | ------------- | --------- | ------- | ------------- | -------- | ----- | ------------- | ------ | -------- |
| DDRNet | DDRNet23-slim | 1024x1024 | 120000 | 85.85 | RTX 8000 | 77.85 | 79.80 | [config](https://github.com/whu-pzhang/mmsegmentation/blob/ddrnet/configs/ddrnet/ddrnet_23-slim_in1k-pre_2xb6-120k_cityscapes-1024x1024.py) | model \| log |
| DDRNet | DDRNet23 | 1024x1024 | 120000 | 33.41 | RTX 8000 | 79.53 | 80.98 | [config](https://github.com/whu-pzhang/mmsegmentation/blob/ddrnet/configs/ddrnet/ddrnet_23_in1k-pre_2xb6-120k_cityscapes-1024x1024.py) | model \| log |


The converted pretrained backbone weights download links:

1.
[ddrnet23s_in1k_mmseg.pth](https://drive.google.com/file/d/1Ni4F1PMGGjuld-1S9fzDTmneLfpMuPTG/view?usp=sharing)
2.
[ddrnet23_in1k_mmseg.pth](https://drive.google.com/file/d/11rsijC1xOWB6B0LgNQkAG-W6e1OdbCyJ/view?usp=sharing)

## To do

- [x] support inference with converted official weights
- [x] support training on cityscapes dataset

---------

Co-authored-by: xiexinch <xiexinch@outlook.com>
likyoo and others added 28 commits September 26, 2023 18:42
…or NVIDIA Jetson (open-mmlab#3372)

Fine-tune ONNX Models (MMSegmentation) Inference for NVIDIA Jetson
## Motivation

open-mmlab#3383

## Modification

- Add bpe_simple_vocab_16e6.txt.gz to `MANIFEST.in`
## Motivation

open-mmlab#3384

## Modification

- mmseg/apis/inference.py

## Motivation

Fixes open-mmlab#3412

## Modification

We just need to create the tensor with `torch.stack()` instead of `torch.tensor()`.
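A minimal sketch of the pattern being replaced (illustrative shapes, not the exact mmseg call site):

```python
import torch

per_sample = [torch.rand(3, 4), torch.rand(3, 4)]  # e.g. tensors that may already live on GPU

# torch.tensor() on a list of tensors copies data back through Python (and
# errors/warns for non-scalar tensors); torch.stack() batches them directly
# and keeps device and dtype.
batched = torch.stack(per_sample)   # shape (2, 3, 4)
```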


## Motivation

The current Visualization Hook can only accept instances of
`SegLocalVisualizer`. This makes it impossible to use any other custom
implementation.

## Modification

This PR allows instantiating a different visualizer (following the
mmdetection implementation):

https://github.com/open-mmlab/mmdetection/blob/main/mmdet/engine/hooks/visualization_hook.py#L58
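A hedged config sketch of what this enables (the visualizer type below is a hypothetical user-defined subclass, not something shipped with mmseg):

```python
# With this change, the hook accepts whatever `visualizer` the config
# declares instead of hard-requiring SegLocalVisualizer.
visualizer = dict(
    type='MyCustomVisualizer',                    # hypothetical custom class
    vis_backends=[dict(type='LocalVisBackend')],  # standard mmengine backend
    name='visualizer')
```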
## Motivation

When using `test_cfg` for `data_preprocessor`, `predict_by_feat` resizes
to the original size, not the padded size.

```
data_preprocessor = dict(
    type="SegDataPreProcessor",
    #type="SegDataPreProcessorWithPad",
    mean=[123.675, 116.28, 103.53],
    std=[58.395, 57.12, 57.375],
    bgr_to_rgb=True,
    pad_val=0,
    seg_pad_val=255,
    test_cfg=dict(size=(128, 128)))
```

Refer to:

https://github.com/open-mmlab/mmsegmentation/blob/main/mmseg/models/decode_heads/san_head.py#L589-L592

…t labels (open-mmlab#3466)


## Motivation

It is difficult to visualize without "labels" when using the inferencer.

- While using the `MMSegInferencer`, the visualized prediction contains
labels on the mask, but it is difficult to pass `with_labels=False`
without rewriting the config (which is harder to do when you initialize
the inferencer with a model name rather than a config).
- I thought it would be easier to just pass `with_labels=False` to
`inferencer.__call__()`, since you can also pass `opacity` and other
parameters anyway.

## Modification


- Added `with_labels` to `visualize_kwargs` inside `MMSegInferencer`.
- Modified the `visualize()` function accordingly (see the usage sketch below).
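A usage sketch under the assumption that a Cityscapes model alias and a local image path are available (both are placeholders here):

```python
from mmseg.apis import MMSegInferencer

# `with_labels` is forwarded as a visualize kwarg, alongside `opacity` etc.
inferencer = MMSegInferencer(model='mobilenet-v3-d8_lraspp_4xb4-320k_cityscapes-512x1024')
inferencer('demo/demo.png', out_dir='outputs', opacity=0.6, with_labels=False)
```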


---------

Co-authored-by: xiexinch <xiexinch@outlook.com>
## Motivation

The motivation is to add a hyperspectral dataset, [HSI Drive
2.0](https://ipaccess.ehu.eus/HSI-Drive/), to the dataset registry, which
would be, as far as I know, the first hyperspectral dataset in
mmsegmentation. This dataset was presented in [HSI-Drive v2.0:
More Data for New Challenges in Scene Understanding for Autonomous
Driving](https://ieeexplore.ieee.org/document/10371793), and the initial
v1 was presented in [HSI-Drive: A Dataset for the Research of
Hyperspectral Image Processing Applied to Autonomous Driving
Systems](https://ieeexplore.ieee.org/document/9575298).

## Modification

I have created/modified the following aspects:
- READMEs: `README.md` and `README_zh-CN.md` (sorry if translation is
not accurate).
- Example project: `projects/hsidrive20_dataset` has been created and
filled for users to know how to work with this database.
- Documentation: `docs/en/user_guides/2_dataset_prepare.md` and
`docs/zh_cn/user_guides/2_dataset_prepare.md` (sorry if translation is
not accurate) have been updated for users to know how to download and
configure the dataset.
- Database related files: `mmseg/datasets/__init__.py`,
`mmseg/datasets/hsi_drive.py` and `configs/_base_/datasets/hsi_drive.py`
where the dataset is described and also prepared for
training/validation/test.
- Transforms related files:
`mmsegmentation/mmseg/datasets/transforms/loading.py` to *include
support for loading images from .npy files* such as the hyperspectral
images of this dataset (see the sketch below).
- Training config with well-known neural network:
`configs/unet/unet-s5-d16_fcn_4xb4-160k_hsidrive-192x384.py` for people
to train a standard neural network with this dataset.
- Tests: added necessary files under
`tests/data/pseudo_hsidrive20_dataset`.

**Important:** I have also modified `.pre-commit-config.yaml` to ignore
HSI error in codespell.
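A minimal sketch of the idea behind the `loading.py` change (illustrative only; the file name, shape, and band count below are assumptions, not the actual transform code):

```python
import numpy as np

# Hyperspectral frames are stored as .npy arrays rather than ordinary image
# files, so the loading transform reads them with NumPy instead of an image
# decoder.
np.save('hsidrive_frame.npy', np.random.rand(192, 384, 25).astype(np.float32))
img = np.load('hsidrive_frame.npy')
print(img.shape, img.dtype)   # (192, 384, 25) float32
```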

## BC-breaking (Optional)

No.

## Use cases (Optional)

A train example has been added under `projects/hsidrive20_dataset` and
documentation has been updated as it is explained in Modification
section.

## Checklist

1. Pre-commit or other linting tools are used to fix the potential lint
issues.
2. The modification is covered by complete unit tests. If not, please
add more unit test to ensure the correctness.
3. If the modification has potential influence on downstream projects,
this PR should be tested with downstream projects, like MMDet or
MMDet3D.
4. The documentation has been modified accordingly, like docstring or
example tutorials.

Regarding item 1, I don't know how to solve this problem. Could you help me,
please? It causes 2 checks to fail.

---------

Co-authored-by: xiexinch <xiexinch@outlook.com>
…edules, default_runtime new configs (open-mmlab#3542)

# [New Configs] Add mmseg/configs folder & Support loveda, potsdam,
schedules, default_runtime new configs
- As the title says, the new configs path is mmseg/configs/.
- The config files for the datasets have been tested.
- The purpose of this PR is to let other community members who are migrating
to the new config format reference the new config files for schedules and
the default runtime. Hoping for a quick merge~~~.
- Details of this task can be found at:
https://github.com/AI-Tianlong/mmseg-new-config

![image](https://github.com/AI-Tianlong/mmseg-new-config/assets/50650583/04d40057-ff2c-492c-be44-52c6d34d3676)
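For context, a minimal sketch of what the new-style config files look like (the module names under `mmseg/configs/_base_/` below are illustrative, not an exact listing from this PR):

```python
# New MMEngine "pure Python" config style: base configs are imported as
# modules instead of being referenced through _base_ strings.
from mmengine.config import read_base

with read_base():
    from .._base_.datasets.potsdam import *        # noqa: F401,F403
    from .._base_.schedules.schedule_80k import *  # noqa: F401,F403
    from .._base_.default_runtime import *         # noqa: F401,F403
```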
## Motivation

`inputs` and `input_shape` cannot both be passed to the mmengine API
`get_model_complexity_info`.

open-mmlab/mmengine#1056


## Modification

Set `input_shape` to None.
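A standalone sketch of the constraint (a toy model, not the mmseg tool itself): when concrete `inputs` are supplied, `input_shape` has to stay `None`.

```python
import torch
from torch import nn
from mmengine.analysis import get_model_complexity_info

model = nn.Conv2d(3, 8, kernel_size=3, padding=1)
inputs = torch.rand(1, 3, 64, 64)

# Passing both `inputs` and `input_shape` raises an error, so the fix keeps
# input_shape=None whenever real inputs are given.
info = get_model_complexity_info(model, input_shape=None, inputs=inputs)
print(info['flops_str'], info['params_str'])
```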
## Motivation

Use `MODELS.build` instead of `build_loss`

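A minimal sketch of the replacement pattern (the loss config below is illustrative):

```python
from mmseg.registry import MODELS

# Legacy: loss = build_loss(loss_cfg)
# Registry-based replacement:
loss_cfg = dict(type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)
loss_module = MODELS.build(loss_cfg)
```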
## Motivation

Fix the bug that data augmentation only takes effect on one image in the
change detection task.

## Modification

configs/_base_/datasets/levir_256x256.py
configs/swin/swin-tiny-patch4-window7_upernet_1xb8-20k_levir-256x256.py
mmseg/datasets/transforms/transforms.py
## Motivation

The current `SegVisualizationHook` implements the `_after_iter` method,
which is invoked during the validation and testing pipelines. However,
when in
[test_mode](https://github.com/open-mmlab/mmsegmentation/blob/main/mmseg/engine/hooks/visualization_hook.py#L97),
the implementation attempts to access `runner.iter`. This attribute is
defined in the [`mmengine`
codebase](https://github.com/open-mmlab/mmengine/blob/main/mmengine/runner/runner.py#L538)
and is designed to return `train_loop.iter`. Accessing this property
during testing can be problematic, particularly in scenarios where the
model is being evaluated post-training, without initiating a training
loop. This can lead to a crash if the implementation tries to build a
training dataset for which the annotation file is unavailable at the
time of evaluation. Thus, it is crucial to avoid relying on this
property in test mode.

## Modification

To resolve this issue, the proposal is to replace the `_after_iter`
method with `after_val_iter` and `after_test_iter` methods, modifying
their behavior accordingly. Specifically, when in testing mode, the
implementation should utilize a `test_index` counter instead of
accessing `runner.iter`. This adjustment will circumvent the issue of
accessing `train_loop.iter` during test mode, ensuring the process does
not attempt to access or build a training dataset, thereby preventing
potential crashes due to missing annotation files.
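A rough sketch of the proposed shape (the `_draw` helper is hypothetical; the real hook lives in `mmseg/engine/hooks/visualization_hook.py`):

```python
from mmengine.hooks import Hook


class SketchSegVisHook(Hook):
    """Illustration only: keep a local counter in test mode instead of
    touching ``runner.iter``, which belongs to the train loop."""

    def __init__(self):
        self._test_index = 0

    def after_val_iter(self, runner, batch_idx, data_batch=None, outputs=None):
        self._draw(outputs, step=runner.iter)       # a train loop exists here

    def after_test_iter(self, runner, batch_idx, data_batch=None, outputs=None):
        self._test_index += 1                       # never read runner.iter in test mode
        self._draw(outputs, step=self._test_index)

    def _draw(self, outputs, step):
        pass  # the actual visualization call is out of scope for this sketch
```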

CLAassistant commented May 20, 2024

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
19 out of 20 committers have signed the CLA.

✅ Ben-Louis
✅ xiexinch
✅ ooooo-create
✅ zen0no
✅ XiandongWang
✅ Zoulinx
✅ crazysteeaam
✅ Yang-Changhui
✅ angiecao
✅ zhen6618
✅ likyoo
✅ AI-Tianlong
✅ okotaku
✅ jonGuti13
✅ JimmyMa99
✅ haruishi43
✅ mmeendez8
✅ MengzhangLI
✅ tackhwa
❌ ZhaoQiiii
You have signed the CLA already but the status is still pending? Let us recheck it.

@tackhwa tackhwa closed this May 20, 2024