[Feature] Support DSDL Dataset
wufan-tb committed May 10, 2023
1 parent 77591b9 commit 9d7876c
Showing 15 changed files with 464 additions and 5 deletions.
105 changes: 105 additions & 0 deletions configs/dsdl/README.md
@@ -0,0 +1,105 @@
# DSDL: Standard Description Language for DataSet

<!-- [ALGORITHM] -->

<!-- [DATASET] -->

## Abstract

<!-- [ABSTRACT] -->

Data is the cornerstone of artificial intelligence. The efficiency of data acquisition, exchange, and application directly impacts the advances in technologies and applications. Over the long history of AI, a vast quantity of datasets have been developed and distributed. However, these datasets are defined in very different forms, which incurs significant overhead when it comes to exchange, integration, and utilization: it is often the case that one needs to develop a new customized tool or script in order to incorporate a new dataset into a workflow.

To overcome such difficulties, we develop the **Data Set Description Language (DSDL)**. For more details, please visit our [official documentation](https://opendatalab.github.io/dsdl-docs/getting_started/overview/); DSDL datasets can be downloaded from our platform [OpenDataLab](https://opendatalab.com/).

<!-- [IMAGE] -->

## Steps

- install dsdl and opendatalab:

```
pip install dsdl
pip install opendatalab
```
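
Optionally, you can sanity-check the install; this assumes the `dsdl` pip package exposes a module of the same name:

```
python -c "import dsdl; print('dsdl imported successfully')"
```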

- install mmseg and pytorch:
please refer to the [installation documentation](https://mmsegmentation.readthedocs.io/en/latest/get_started.html).

- prepare the dsdl dataset (taking VOC2012 as an example)

- download the dsdl dataset (you will need an OpenDataLab account to do so; [register one now](https://opendatalab.com/))

```
cd data
odl login
odl get PASCAL_VOC2012
```

datasets are usually stored compressed on the OpenDataLab platform, so the downloaded VOC2012 dataset should look like this:

```
data/
├── PASCAL_VOC2012
│   ├── dsdl
│   │   ├── dsdl_Det_full.zip
│   │   └── dsdl_SemSeg_full.zip
│   ├── raw
│   │   ├── VOC2012test.tar
│   │   ├── VOCdevkit_18-May-2011.tar
│   │   └── VOCtrainval_11-May-2012.tar
│   └── README.md
└── ...
```

- decompress the dataset

```
cd dsdl
unzip dsdl_SemSeg_full.zip
```

As we do not need the detection dsdl files, we only decompress the semantic segmentation files here. Then extract the raw images:

```
cd ../raw
tar -xvf VOCtrainval_11-May-2012.tar
tar -xvf VOC2012test.tar
cd ../../
```

- change the training config

Here, we open the [voc config file](voc.py) and set the file paths as below:

```
data_root = 'data/PASCAL_VOC2012'
img_prefix = 'raw/VOCdevkit/VOC2012'
train_ann = 'dsdl/dsdl_SemSeg_full/set-train/train.yaml'
val_ann = 'dsdl/dsdl_SemSeg_full/set-val/val.yaml'
```

Since DSDL datasets for a given task share one format and one dataloader, we can simply change these file paths to train a model on a different dataset, as sketched below.
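
For example, to switch to another DSDL-packaged dataset, only these path variables need to change; the values below are hypothetical placeholders for whatever `odl get` downloaded:

```
data_root = 'data/SOME_DSDL_DATASET'  # hypothetical dataset root
img_prefix = 'raw/images'  # hypothetical raw image prefix inside it
train_ann = 'dsdl/dsdl_SemSeg_full/set-train/train.yaml'
val_ann = 'dsdl/dsdl_SemSeg_full/set-val/val.yaml'
```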

- train:

- using single gpu:

```
python tools/train.py {config_file}
```
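
For example, with the VOC config from this folder:

```
python tools/train.py configs/dsdl/voc.py
```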

- using slurm:

```
./tools/slurm_train.sh {partition} {job_name} {config_file} {work_dir} {gpu_nums}
```
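
An illustrative invocation (the partition and job name are cluster-specific placeholders):

```
./tools/slurm_train.sh gpu_partition dsdl_voc configs/dsdl/voc.py work_dirs/dsdl_voc 4
```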

## Test Results

| Datasets | Model | mIoU(%) | Config |
| :--------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :-----: | :-----------------------: |
| voc2012 | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3/deeplabv3_r50-d8_512x512_20k_voc12aug/deeplabv3_r50-d8_512x512_20k_voc12aug_20200617_010906-596905ef.pth) | 76.73 | [config](./voc.py) |
| cityscapes | [model](https://download.openmmlab.com/mmsegmentation/v0.5/deeplabv3/deeplabv3_r50-d8_512x1024_40k_cityscapes/deeplabv3_r50-d8_512x1024_40k_cityscapes_20200605_022449-acadc2f8.pth) | 79.01 | [config](./cityscapes.py) |
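
To reproduce a row, download the linked checkpoint and evaluate it with the standard test script; the local checkpoint path below is just an example:

```
python tools/test.py configs/dsdl/voc.py checkpoints/deeplabv3_r50-d8_voc.pth
```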
70 changes: 70 additions & 0 deletions configs/dsdl/cityscapes.py
@@ -0,0 +1,70 @@
_base_ = [
'../_base_/models/deeplabv3_r50-d8.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_40k.py'
]

crop_size = (512, 1024)
data_preprocessor = dict(size=crop_size)
model = dict(data_preprocessor=data_preprocessor)
# dataset settings
dataset_type = 'DSDLSegDataset'
data_root = 'data/CityScapes'
img_prefix = 'raw/CityScapes'
train_ann = 'dsdl/dsdl_SemSeg_full/set-train/train.yaml'
val_ann = 'dsdl/dsdl_SemSeg_full/set-val/val.yaml'

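# `used_labels` keeps only the 19 standard Cityscapes evaluation classes
# from the full DSDL label set (assumed filtering behavior of DSDLSegDataset)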
used_labels = [
'road', 'sidewalk', 'building', 'wall', 'fence', 'pole', 'traffic_light',
'traffic_sign', 'vegetation', 'terrain', 'sky', 'person', 'rider', 'car',
'truck', 'bus', 'train', 'motorcycle', 'bicycle'
]

train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(
type='RandomResize',
scale=(2048, 1024),
ratio_range=(0.5, 2.0),
keep_ratio=True),
dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
dict(type='RandomFlip', prob=0.5),
dict(type='PhotoMetricDistortion'),
dict(type='PackSegInputs')
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='Resize', scale=(2048, 1024), keep_ratio=True),
    # Load annotations after ``Resize`` because the ground truth
    # does not need the resize transform
dict(type='LoadAnnotations'),
dict(type='PackSegInputs')
]
train_dataloader = dict(
batch_size=2,
num_workers=2,
persistent_workers=True,
sampler=dict(type='InfiniteSampler', shuffle=True),
dataset=dict(
type=dataset_type,
data_root=data_root,
data_prefix=dict(img_path=img_prefix, seg_map_path=img_prefix),
ann_file=train_ann,
used_labels=used_labels,
pipeline=train_pipeline))
val_dataloader = dict(
batch_size=1,
num_workers=4,
persistent_workers=True,
sampler=dict(type='DefaultSampler', shuffle=False),
dataset=dict(
type=dataset_type,
data_root=data_root,
data_prefix=dict(img_path=img_prefix, seg_map_path=img_prefix),
ann_file=val_ann,
used_labels=used_labels,
pipeline=test_pipeline))
test_dataloader = val_dataloader

val_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU'])
test_evaluator = val_evaluator
12 changes: 12 additions & 0 deletions configs/dsdl/metafile.yaml
@@ -0,0 +1,12 @@
Collections:
- Name: ''
License: Apache License 2.0
Metadata:
Training Data: []
Paper:
Title: ''
URL: ''
README: configs/dsdl/README.md
Frameworks:
- PyTorch
Models: []
65 changes: 65 additions & 0 deletions configs/dsdl/voc.py
@@ -0,0 +1,65 @@
_base_ = [
'../_base_/models/deeplabv3_r50-d8.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_20k.py'
]

# dataset settings
dataset_type = 'DSDLSegDataset'
data_root = 'data/PASCAL_VOC2012'
img_prefix = 'raw/VOCdevkit/VOC2012'
train_ann = 'dsdl/dsdl_SemSeg_full/set-train/train.yaml'
val_ann = 'dsdl/dsdl_SemSeg_full/set-val/val.yaml'
crop_size = (512, 512)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(
type='RandomResize',
scale=(2048, 512),
ratio_range=(0.5, 2.0),
keep_ratio=True),
dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
dict(type='RandomFlip', prob=0.5),
dict(type='PhotoMetricDistortion'),
dict(type='PackSegInputs')
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='Resize', scale=(2048, 512), keep_ratio=True),
    # Load annotations after ``Resize`` because the ground truth
    # does not need the resize transform
dict(type='LoadAnnotations'),
dict(type='PackSegInputs')
]
train_dataloader = dict(
batch_size=4,
num_workers=4,
persistent_workers=True,
sampler=dict(type='InfiniteSampler', shuffle=True),
dataset=dict(
type=dataset_type,
data_root=data_root,
data_prefix=dict(img_path=img_prefix, seg_map_path=img_prefix),
ann_file=train_ann,
pipeline=train_pipeline))
val_dataloader = dict(
batch_size=1,
num_workers=4,
persistent_workers=True,
sampler=dict(type='DefaultSampler', shuffle=False),
dataset=dict(
type=dataset_type,
data_root=data_root,
data_prefix=dict(img_path=img_prefix, seg_map_path=img_prefix),
ann_file=val_ann,
pipeline=test_pipeline))
test_dataloader = val_dataloader

val_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU'])
test_evaluator = val_evaluator

data_preprocessor = dict(size=crop_size)
model = dict(
data_preprocessor=data_preprocessor,
decode_head=dict(num_classes=21),
auxiliary_head=dict(num_classes=21))
1 change: 1 addition & 0 deletions docs/en/user_guides/2_dataset_prepare.md
@@ -2,6 +2,7 @@

It is recommended to symlink the dataset root to `$MMSEGMENTATION/data`.
If your folder structure is different, you may need to change the corresponding paths in config files.
For users in China, we also recommend downloading DSDL datasets from our open-source platform [OpenDataLab](https://opendatalab.com/) for a better download and usage experience. Here is an example: [DSDLReadme](../../../configs/dsdl/README.md); welcome to try it.

```none
mmsegmentation
1 change: 1 addition & 0 deletions docs/zh_cn/user_guides/2_dataset_prepare.md
@@ -2,6 +2,7 @@

It is recommended to symlink the dataset root to `$MMSEGMENTATION/data`.
If your folder structure is different, you may need to change the corresponding paths in config files.
For users in China, we also recommend downloading DSDL-standard datasets from the open-source data platform [OpenDataLab](https://opendatalab.com/) for a better download and usage experience. Here is an example of downloading a DSDL dataset and training on it: [DSDLReadme](../../../configs/dsdl/README.md); welcome to try it.

```none
mmsegmentation
5 changes: 5 additions & 0 deletions mmseg/datasets/__init__.py
@@ -9,6 +9,7 @@
from .dataset_wrappers import MultiImageMixDataset
from .decathlon import DecathlonDataset
from .drive import DRIVEDataset
from .dsdl import DSDLSegDataset
from .hrf import HRFDataset
from .isaid import iSAIDDataset
from .isprs import ISPRSDataset
@@ -54,7 +55,11 @@
'BioMedicalGaussianNoise', 'BioMedicalGaussianBlur',
'BioMedicalRandomGamma', 'BioMedical3DPad', 'RandomRotFlip',
'SynapseDataset', 'REFUGEDataset', 'MapillaryDataset_v1',
    'MapillaryDataset_v2', 'Albu', 'LEVIRCDDataset',
    'LoadMultipleRSImageFromFile', 'LoadSingleRSImageFromFile',
    'ConcatCDInput', 'BaseCDDataset', 'DSDLSegDataset'
]
