Merge remote-tracking branch 'upstream/dev-1.x' into AI-Tianlong/add-mapillary-to-core-package
xiexinch committed Mar 15, 2023
2 parents b270993 + 447a398 commit 7751b78
Showing 16 changed files with 1,042 additions and 97 deletions.
2 changes: 1 addition & 1 deletion README_zh-CN.md
MMSegmentation is an open source semantic segmentation toolbox based on PyTorch. It is a part of the OpenMMLab project.

Meanwhile, we provide a Colab tutorial. You can browse the tutorial [here](demo/MMSegmentation_Tutorial.ipynb), or run it directly [on Colab](https://colab.research.google.com/github/open-mmlab/mmsegmentation/blob/1.x/demo/MMSegmentation_Tutorial.ipynb).

If you need to migrate code from the 0.x version to the new version, please refer to the [migration guide](docs/zh_cn/migration).

## Benchmark and model zoo

28 changes: 14 additions & 14 deletions docs/en/advanced_guides/evaluation.md
# Evaluation

The evaluation procedure is executed at [ValLoop](https://github.com/open-mmlab/mmengine/blob/main/mmengine/runner/loops.py#L300) and [TestLoop](https://github.com/open-mmlab/mmengine/blob/main/mmengine/runner/loops.py#L373); users can evaluate model performance during training, or with the test script, through simple settings in the configuration file. The `ValLoop` and `TestLoop` are properties of [Runner](https://github.com/open-mmlab/mmengine/blob/main/mmengine/runner/runner.py#L59); they are built the first time they are called. To build the `ValLoop` successfully, `val_dataloader` and `val_evaluator` must be set when building the `Runner`, since `dataloader` and `evaluator` are required parameters, and the same goes for the `TestLoop`. For more information about the Runner's design, please refer to the [documentation](https://github.com/open-mmlab/mmengine/blob/main/docs/en/design/runner.md) of [MMEngine](https://github.com/open-mmlab/mmengine).
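For example, the evaluation-related fields of a configuration file may look like the following (a minimal sketch; the dataset settings are abbreviated and illustrative):

```python
val_dataloader = dict(
    batch_size=1,
    num_workers=4,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(type='CityscapesDataset', data_root='data/cityscapes'))
val_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU'])
val_cfg = dict(type='ValLoop')
```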

<center>
<img src='../../../resources/test_step.png' />
</center>

## IoUMetric

MMSegmentation implements [IoUMetric](https://github.com/open-mmlab/mmsegmentation/blob/1.x/mmseg/evaluation/metrics/iou_metric.py) and [CityscapesMetric](https://github.com/open-mmlab/mmsegmentation/blob/1.x/mmseg/evaluation/metrics/citys_metric.py) for evaluating the performance of models, based on the [BaseMetric](https://github.com/open-mmlab/mmengine/blob/main/mmengine/evaluator/metric.py) provided by [MMEngine](https://github.com/open-mmlab/mmengine). Please refer to [the documentation](https://mmengine.readthedocs.io/en/latest/tutorials/evaluation.html) for more details about the unified evaluation interface.

Here we briefly describe the arguments and the two main methods of `IoUMetric`.
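In a configuration file, it is typically enabled as follows (a sketch; `iou_metrics` may be any subset of `['mIoU', 'mDice', 'mFscore']`):

```python
val_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU'])
test_evaluator = val_evaluator
```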

#### IoUMetric.compute_metrics

This method computes the metrics from the processed results.

Returns:

- Dict\[str, float\] - The computed metrics. The keys are the names of the metrics, and the values are corresponding results. The key mainly includes **aAcc**, **mIoU**, **mAcc**, **mDice**, **mFscore**, **mPrecision**, **mRecall**.

## CityscapesMetric

`CityscapesMetric` uses the official [CityscapesScripts](https://github.com/mcordts/cityscapesScripts) provided by Cityscapes to evaluate model performance.

### Usage

Before using it, please install the `cityscapesscripts` package first:

```shell
pip install cityscapesscripts
```

Since `IoUMetric` is used as the default evaluator in MMSegmentation, if you would like to use `CityscapesMetric`, you need to customize the config file and overwrite the default evaluator as follows.

```python
val_evaluator = dict(type='CityscapesMetric', output_dir='tmp')
test_evaluator = val_evaluator
```

### Interface

The arguments of the constructor:

- output_dir (str) - The directory for output prediction.
- ignore_index (int) - Index that will be ignored in evaluation. Default: 255.
- format_only (bool) - Only format result for results commit without perform evaluation. It is useful when you want to format the result to a specific format and submit it to the test server. Defaults to False.
- keep_results (bool) - Whether to keep the results. When `format_only` is True, `keep_results` must be True. Defaults to False.
- collect_device (str) - Device name used for collecting results from different ranks during distributed training. Must be 'cpu' or 'gpu'. Defaults to 'cpu'.
- prefix (str, optional) - The prefix that will be added in the metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Defaults to None.
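For example, to generate submission files for the Cityscapes test server without running local evaluation, the metric can be configured as below (a sketch; the output path is illustrative):

```python
test_evaluator = dict(
    type='CityscapesMetric',
    output_dir='work_dirs/format_results',  # where the formatted pngs are written
    format_only=True,   # only format the results, skip evaluation
    keep_results=True)  # must be True when format_only=True
```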

#### CityscapesMetric.process

This method would draw the masks on images and save the painted images to `work_dir`.

Parameters:

- data_batch (dict) - A batch of data from the dataloader.
- data_samples (Sequence\[dict\]) - A batch of outputs from the model.

Returns:

This method has no returns; the annotations' paths are stored in `self.results`, which will be used to compute the metrics when all batches have been processed.

#### CityscapesMetric.compute_metrics

This method would call `cityscapesscripts.evaluation.evalPixelLevelSemanticLabeling` tool to calculate metrics.

24 changes: 7 additions & 17 deletions docs/en/advanced_guides/transforms.md

## Design of Data pipelines

Following typical conventions, we use `Dataset` and `DataLoader` for data loading with multiple workers. `Dataset` returns a dict of data items corresponding to the arguments of models' forward method. Since the data in semantic segmentation may not be the same size, we introduce a new `DataContainer` type in MMCV to help collect and distribute data of different sizes. See [here](https://github.com/open-mmlab/mmcv/blob/master/mmcv/parallel/data_container.py) for more details.

In the 1.x version of MMSegmentation, all data transformations inherit from [`BaseTransform`](https://github.com/open-mmlab/mmcv/blob/2.x/mmcv/transforms/base.py#L6).

The input and output types of transformations are both dict. A simple example is as follows:

```python
>>> from mmseg.datasets.transforms import LoadAnnotations
>>> transforms = LoadAnnotations()
>>> # the paths below are illustrative placeholders
>>> results = dict(
...     img_path='aachen_000000_000019_leftImg8bit.png',
...     seg_map_path='aachen_000000_000019_gtFine_labelTrainIds.png',
...     reduce_zero_label=False,
...     seg_fields=[])
>>> data_dict = transforms(results)
>>> print(data_dict.keys())
dict_keys(['img_path', 'seg_map_path', 'reduce_zero_label', 'seg_fields', 'gt_seg_map'])
```

The data preparation pipeline and the dataset are decomposed. Usually a dataset defines how to process the annotations and a data pipeline defines all the steps to prepare a data dict. A pipeline consists of a sequence of operations. Each operation takes a dict as input and also outputs a dict for the next transform.

The operations are categorized into data loading, pre-processing, formatting and test-time augmentation.

Here is a pipeline example for PSPNet:

```python
crop_size = (512, 1024)
# the pipelines below follow the standard Cityscapes configs (values illustrative)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(
        type='RandomResize',
        scale=(2048, 1024),
        ratio_range=(0.5, 2.0),
        keep_ratio=True),
    dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PhotoMetricDistortion'),
    dict(type='PackSegInputs')
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', scale=(2048, 1024), keep_ratio=True),
    dict(type='LoadAnnotations'),
    dict(type='PackSegInputs')
]
```

For each operation, we list the related dict fields that are `added`/`updated`/`removed`. Before pipelines, the information we can directly obtain from the datasets is `img_path` and `seg_map_path`.

### Data loading


`RandomCrop`: Random crop image & segmentation map.

- update: `img`, `gt_seg_map`, `img_shape`

`RandomFlip`: Flip the image & segmentation map.

- add: `flip`, `flip_direction`
- update: `img`, `gt_seg_map`

`PhotoMetricDistortion`: Apply photometric distortions to the image sequentially; every transformation is applied with a probability of 0.5. The position of random contrast is second or second to last (mode 0 or 1 below, respectively).

```
1. random brightness
2. random contrast (mode 0)
3. convert color from BGR to HSV
4. random saturation
5. random hue
6. convert color from HSV to BGR
7. random contrast (mode 1)
8. randomly swap channels
```
39 changes: 16 additions & 23 deletions docs/en/migration/interface.md
This guide describes the fundamental differences between MMSegmentation 0.x and 1.x.

## New dependencies

MMSegmentation 1.x depends on some new packages. You can prepare a new clean environment and install them according to the [installation tutorial](../get_started.md).

Or install the packages below manually.

1. [MMEngine](https://github.com/open-mmlab/mmengine): MMEngine is the core of the OpenMMLab 2.0 architecture, and we split many components unrelated to computer vision from MMCV into MMEngine.

2. [MMCV](https://github.com/open-mmlab/mmcv): The computer vision package of OpenMMLab. This is not a new dependency, but you need to upgrade it to version **2.0.0rc1** or above.

3. [MMClassification](https://github.com/open-mmlab/mmclassification) (Optional): The image classification toolbox and benchmark of OpenMMLab. This is not a new dependency, but you need to upgrade it to version **1.0.0rc0** or above.

4. [MMDetection](https://github.com/open-mmlab/mmdetection) (Optional): The object detection toolbox and benchmark of OpenMMLab. This is not a new dependency, but you need to upgrade it to version **3.0.0rc0** or above.
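For example, a sketch of one way to install the required versions with [MIM](https://github.com/open-mmlab/mim) (assuming the minimum versions stated above; check each project for the current recommendation):

```shell
pip install -U openmim
mim install "mmengine" "mmcv>=2.0.0rc1"
```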

## Train launch
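As a quick reference, training is still launched through `tools/train.py`; a sketch with an illustrative 1.x-style config name:

```shell
# single-GPU training
python tools/train.py configs/pspnet/pspnet_r50-d8_4xb2-40k_cityscapes-512x1024.py

# distributed training on 4 GPUs
bash tools/dist_train.sh configs/pspnet/pspnet_r50-d8_4xb2-40k_cityscapes-512x1024.py 4
```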

### Model settings

Add the `model.data_preprocessor` field to configure the `DataPreProcessor`, including:

- `bgr_to_rgb` (bool): whether to convert image from BGR to RGB. Defaults to False.

- `rgb_to_bgr` (bool): whether to convert image from RGB to BGR. Defaults to False.

**Note:**
Please refer to the [models documentation](../advanced_guides/models.md) for more details.
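A sketch of a typical `data_preprocessor` (the normalization values are the common ImageNet statistics, shown for illustration):

```python
model = dict(
    data_preprocessor=dict(
        type='SegDataPreProcessor',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        bgr_to_rgb=True,
        pad_val=0,
        seg_pad_val=255))
```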
Changes in **`evaluation`**:

- The **`evaluation`** field is split into `val_evaluator` and `test_evaluator`, and it no longer supports the `interval` and `save_best` arguments. The `interval` is moved to `train_cfg.val_interval`, and the `save_best` is moved to `default_hooks.checkpoint.save_best`. `pre_eval` has been removed.
- `'mIoU'` has been changed to `'IoUMetric'`.
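A minimal sketch of the new-style settings (iteration counts illustrative):

```python
val_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU'])
test_evaluator = val_evaluator
# the original `evaluation.interval` now lives in the training loop config
train_cfg = dict(type='IterBasedTrainLoop', max_iters=40000, val_interval=4000)
# the original `evaluation.save_best` moved to the checkpoint hook
default_hooks = dict(checkpoint=dict(type='CheckpointHook', save_best='mIoU'))
```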


Changes in **`optimizer`** and **`optimizer_config`**:

- Now we use the `optim_wrapper` field to specify all configuration of the optimization process, and `optimizer` is a sub-field of `optim_wrapper` now.
- `paramwise_cfg` is also a sub-field of `optim_wrapper`, instead of `optimizer`.
- `optimizer_config` is removed, and all of its configurations are moved to `optim_wrapper`.
- `grad_clip` is renamed to `clip_grad`.
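A sketch of the new form (hyperparameters illustrative):

```python
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005),
    clip_grad=dict(max_norm=1, norm_type=2))
```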
Changes in **`lr_config`**:

- The `lr_config` field is removed and we use the new `param_scheduler` to replace it.
- The `warmup`-related arguments are removed, since we use a combination of schedulers to implement this functionality.

The new scheduler combination mechanism is very flexible, and you can use it to design many kinds of learning rate / momentum curves. See [the tutorial](TODO) for more details.
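For example, a warm-up plus polynomial decay schedule can be sketched as two schedulers (values illustrative):

```python
param_scheduler = [
    # linear warm-up over the first 1000 iterations
    dict(type='LinearLR', start_factor=1e-6, by_epoch=False, begin=0, end=1000),
    # polynomial decay for the rest of training
    dict(type='PolyLR', power=0.9, eta_min=1e-4, by_epoch=False, begin=1000, end=40000)
]
```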


Changes in **`runner`**:

Most configuration in the original `runner` field is moved to `train_cfg`, `val_cfg` and `test_cfg`, which configure the loop in training, validation and test.

```python
# sketch of the new-style loop settings (iteration counts illustrative);
# `val_interval` replaces the original `evaluation.interval`
train_cfg = dict(type='IterBasedTrainLoop', max_iters=20000, val_interval=2000)
val_cfg = dict(type='ValLoop')  # Use the default validation loop.
test_cfg = dict(type='TestLoop')  # Use the default test loop.
```

In fact, in OpenMMLab 2.0, we introduced `Loop` to control the behaviors in training, validation and test. The functionalities of `Runner` are also changed. You can find more details of [runner tutorial](https://github.com/open-mmlab/mmengine/blob/main/docs/en/design/runner.md) in [MMEngine](https://github.com/open-mmlab/mmengine/).

### Runtime settings

The new runtime settings configure commonly used hooks through the `default_hooks` field; a typical configuration (intervals illustrative) is:

```python
default_hooks = dict(
    timer=dict(type='IterTimerHook'),
    logger=dict(type='LoggerHook', interval=50, log_metric_by_epoch=False),
    param_scheduler=dict(type='ParamSchedulerHook'),
    checkpoint=dict(type='CheckpointHook', by_epoch=False, interval=2000),
    sampler_seed=dict(type='DistSamplerSeedHook'),
    visualization=dict(type='SegVisualizationHook'))
```

In addition, we split the original logger into logger and visualizer. The logger is used to record information and the visualizer is used to show the logger in different backends, like terminal and TensorBoard.
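A sketch of this split (backends illustrative):

```python
default_hooks = dict(logger=dict(type='LoggerHook', interval=50))
visualizer = dict(
    type='SegLocalVisualizer',
    vis_backends=[dict(type='LocalVisBackend'),
                  dict(type='TensorboardVisBackend')],
    name='visualizer')
```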

Changes in **`load_from`** and **`resume_from`**:

- The `resume_from` is removed, and we use `resume` and `load_from` to replace it.
- If `resume=True` and `load_from` is **not None**, resume training from the checkpoint in `load_from`.
- If `resume=True` and `load_from` is **None**, try to resume from the latest checkpoint in the work directory.
- If `resume=False` and `load_from` is **not None**, only load the checkpoint, not resume training.
- If `resume=False` and `load_from` is **None**, do not load nor resume.

Changes in **`dist_params`**: The `dist_params` field is a sub-field of `env_cfg` now, and there are some new configurations in the `env_cfg`.

```python
env_cfg = dict(
    # whether to enable cudnn benchmark
    cudnn_benchmark=False,
    # set multi-process parameters (values illustrative)
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    # set distributed parameters
    dist_cfg=dict(backend='nccl'),
)
```

Changes in **`workflow`**: `workflow` related functionalities are removed.

New field **`visualizer`**: The visualizer is a new design in OpenMMLab 2.0 architecture. We use a visualizer instance in the runner to handle results & log visualization and save to different backends. See the [visualization tutorial](../user_guides/visualization.md) for more details.

New field **`default_scope`**: The start point to search module for all registries. The `default_scope` in MMSegmentation is `mmseg`. See [the registry tutorial](https://github.com/open-mmlab/mmengine/blob/main/docs/en/advanced_tutorials/registry.md) for more details.