
[Feature] Add SOD datasets #913

Closed · wants to merge 10 commits into from
Conversation

@acdart (Contributor) commented Sep 27, 2021

Support the DUTS dataset, the most popular dataset in salient object detection.

DUTS contains 10,553 training images and 5,019 test images. All training images are collected from the ImageNet DET training/val sets, while the test images are collected from the ImageNet DET test set and the SUN dataset. Both the training and test sets contain very challenging scenarios for saliency detection, and accurate pixel-level ground truths are manually annotated by 50 subjects.

DUTS is currently the largest saliency detection benchmark with an explicit training/test evaluation protocol. For fair comparison in future research, the training set of DUTS serves as a good candidate for training DNNs, while the test set and other public datasets can be used for evaluation.

Related links:
https://paperswithcode.com/sota/salient-object-detection-on-duts-te
https://github.com/ArcherFMY/sal_eval_toolbox

@MengzhangLI (Contributor)

Hi, thanks for your nice PR; we will review it as soon as possible.

Salient object detection is an important branch of semantic segmentation. Could you also join us in supporting other representative datasets, such as MSRA, THUR15K, ECSSD, and so on?

Best,

@Junjun2016 added the WIP (Work in process) label on Sep 27, 2021
@codecov bot commented Sep 27, 2021

Codecov Report

Merging #913 (5581198) into master (36eb2d8) will decrease coverage by 0.05%.
The diff coverage is 81.81%.

❗ Current head 5581198 differs from pull request most recent head b6526af. Consider uploading reports for the commit b6526af to get more accurate results

@@            Coverage Diff             @@
##           master     #913      +/-   ##
==========================================
- Coverage   89.62%   89.56%   -0.06%     
==========================================
  Files         113      117       +4     
  Lines        6263     6307      +44     
  Branches      989      993       +4     
==========================================
+ Hits         5613     5649      +36     
- Misses        452      460       +8     
  Partials      198      198              
Flag        Coverage Δ
unittests   89.56% <81.81%> (-0.06%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files                  Coverage Δ
mmseg/datasets/dut_omron.py     80.00% <80.00%> (ø)
mmseg/datasets/duts.py          80.00% <80.00%> (ø)
mmseg/datasets/ecssd.py         80.00% <80.00%> (ø)
mmseg/datasets/hku_is.py        80.00% <80.00%> (ø)
mmseg/datasets/__init__.py      100.00% <100.00%> (ø)

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 36eb2d8...b6526af. Read the comment docs.

@acdart changed the title from [Feature] Add DUTS dataset to [Feature] Add SOD datasets on Sep 27, 2021
@Junjun2016 (Collaborator) commented Sep 28, 2021

Paper: PiCANet
Code: link1, link2.

@Junjun2016 (Collaborator)

Paper: U^2Net.
Code: link1.

@Junjun2016 (Collaborator)

U^2Net results: (screenshot attached)

@Junjun2016 (Collaborator) commented Sep 28, 2021

Paper: BASNet.
Code: link1.


@acdart (Contributor, Author) commented Sep 28, 2021

About metrics in SOD:

  1. In SOD, metrics are always computed per image and then averaged, i.e. `mean(calc_metric(pred[i], gt[i]))`, which differs from MMSegmentation (see the sketch below).
  2. We need to add support for some popular metrics: MAE, max F-measure, mean F-measure, S-measure, E-measure.
  3. Score-map output is needed by some metrics such as max F-measure.
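
For concreteness, here is a minimal, hypothetical sketch of the sample-averaged computation described in item 1, using MAE and a fixed-threshold F-measure (an illustration only, not the code in this PR):

```python
import numpy as np


def mae(pred, gt):
    """Mean absolute error between a saliency map and its ground truth (both in [0, 1])."""
    return np.abs(pred.astype(np.float64) - gt.astype(np.float64)).mean()


def f_measure(pred, gt, beta2=0.3, thresh=0.5):
    """F-measure at a fixed threshold; beta^2 = 0.3 is the usual SOD convention."""
    binary = pred >= thresh
    positive = gt > 0.5
    tp = np.logical_and(binary, positive).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / (positive.sum() + 1e-8)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)


def sample_average(metric, preds, gts):
    """mean(calc_metric(pred[i], gt[i])): compute the metric per image, then average."""
    return float(np.mean([metric(p, g) for p, g in zip(preds, gts)]))
```

For max F-measure, the F-measure is evaluated over a sweep of thresholds and the maximum is kept, which is why score maps (rather than hard predictions) are needed.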

@Junjun2016 (Collaborator)

> About metrics in SOD:
>
>   1. In SOD, metrics are always computed per image and then averaged, i.e. `mean(calc_metric(pred[i], gt[i]))`, which differs from MMSegmentation.
>   2. We need to add support for some popular metrics: MAE, max F-measure, mean F-measure, S-measure, E-measure.
>   3. Score-map output is needed by some metrics such as max F-measure.

  • Add a `return_logit` argument to the forward of the segmentor, the same as `return_loss`; pass the argument via kwargs, so we only need to modify `simple_test`, `aug_test`, and the eval hooks (a short sketch follows this list).

  • Modify the calculation logic of `pre_eval_to_metrics` to support sample-averaged metrics.

  • Integrate this PR to support reducing the background class or not for BCE, especially for the one-channel case.
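
To illustrate the first bullet, a standalone, hypothetical sketch of what `return_logit` would change (the names and shapes here are assumptions, not the PR's actual API):

```python
import torch


def simple_test_output(seg_logit: torch.Tensor, return_logit: bool = False):
    """seg_logit: (N, C, H, W) network output (after softmax/sigmoid).

    If return_logit is True, keep the continuous score map, which is what
    threshold-sweeping metrics such as max F-measure need; otherwise return
    the hard label map, as mmseg does today.
    """
    if return_logit:
        return seg_logit
    return seg_logit.argmax(dim=1)


scores = torch.rand(1, 2, 4, 4)
print(simple_test_output(scores).shape)                     # torch.Size([1, 4, 4])
print(simple_test_output(scores, return_logit=True).shape)  # torch.Size([1, 2, 4, 4])
```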

@Junjun2016 (Collaborator)

Hi @shinya7y
Could you please help us review this PR together?

@shinya7y

Sure. I will only check superficial issues because I'm not familiar with SOD and mmseg.


In salient object detection (SOD), HKU-IS is used for evaluation.

First, download [HKU-IS.rar](https://sites.google.com/site/ligb86/mdfsaliency/).



### DUTS

First, download [DUTS-TR.zip](http://saliencydetection.net/duts/download/DUTS-TR.zip) and [DUTS-TE.zip](http://saliencydetection.net/duts/download/DUTS-TE.zip) .


There are three occurrences of `) .` in this file; please change them to `).`

@@ -99,7 +101,8 @@ def single_gpu_test(model,
         if pre_eval:
             # TODO: adapt samples_per_gpu > 1.
             # only samples_per_gpu=1 valid now
-            result = dataset.pre_eval(result, indices=batch_indices)
+            result = dataset.pre_eval(
+                result, return_logit, indices=batch_indices)

Use `return_logit=return_logit` (pass it as a keyword argument).

@@ -215,7 +223,8 @@ def multi_gpu_test(model,
         if pre_eval:
             # TODO: adapt samples_per_gpu > 1.
             # only samples_per_gpu=1 valid now
-            result = dataset.pre_eval(result, indices=batch_indices)
+            result = dataset.pre_eval(
+                result, return_logit, indices=batch_indices)

Use `return_logit=return_logit` (pass it as a keyword argument).

@@ -216,7 +216,7 @@ def whole_inference(self, img, img_meta, rescale):

         return seg_logit

-    def inference(self, img, img_meta, rescale):
+    def inference(self, img, img_meta, rescale, return_logit):


def inference(self, img, img_meta, rescale, return_logit=False):

Collaborator:

Add a docstring for the new argument.
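
One possible wording (a suggestion only, not text from the PR):

```python
def inference(self, img, img_meta, rescale, return_logit=False):
    """Inference with slide/whole style.

    Args:
        img (Tensor): Input image of shape (N, 3, H, W).
        img_meta (list[dict]): Image info dicts.
        rescale (bool): Whether to rescale the result back to the original shape.
        return_logit (bool): Whether to return the raw seg logit (score map)
            instead of the post-processed prediction. Score maps are required
            by threshold-sweeping metrics such as max F-measure.
            Default: False.
    """
```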

"""Simple test with single image."""
seg_logit = self.inference(img, img_meta, rescale)
seg_pred = seg_logit.argmax(dim=1)
seg_logit = self.inference(img, img_meta, rescale, return_logit)

Pass `return_logit=return_logit` in all three calls to `self.inference`.


 __all__ = [
     'EvalHook', 'DistEvalHook', 'mean_dice', 'mean_iou', 'mean_fscore',
     'eval_metrics', 'get_classes', 'get_palette', 'pre_eval_to_metrics',
-    'intersect_and_union'
+    'intersect_and_union', 'calc_sod_metrics', 'eval_sod_metrics', 'pre_eval_to_sod_metrics'

The code style differs from the project conventions in many files.
Please refer to https://github.com/open-mmlab/mmsegmentation/blob/master/.github/CONTRIBUTING.md

print('Making directories...')
mmcv.mkdir_or_exist(out_dir)
mmcv.mkdir_or_exist(osp.join(out_dir, 'images'))
mmcv.mkdir_or_exist(osp.join(out_dir, 'images', 'validation'))


Many mkdir_or_exist calls in dataset converters seem redundant.

if isinstance(pred_label, str):
    pred_label = torch.from_numpy(np.load(pred_label))
else:
    pred_label = torch.from_numpy((pred_label))

Are the double parentheses around `pred_label` needed?


assert len(os.listdir(image_dir)) == DUT_OMRON_LEN \
and len(os.listdir(mask_dir)) == \
DUT_OMRON_LEN, 'len(DUT-OMRON) != {}'.format(DUT_OMRON_LEN)


assert len(os.listdir(image_dir)) == DUT_OMRON_LEN \
       and len(os.listdir(mask_dir)) == DUT_OMRON_LEN, \
       f'len(DUT-OMRON) != {DUT_OMRON_LEN}'

Similar modifications of the other converters will improve readability.

@@ -30,9 +30,11 @@ def __init__(self,
by_epoch=False,
efficient_test=False,
pre_eval=False,
return_logit=False,

Add a docstring for this argument.

@@ -0,0 +1,67 @@
# Copyright (c) OpenMMLab. All rights reserved.

Do this for the other scripts too.

@Junjun2016 (Collaborator)

Hi @acdart
The CI has been fixed; please merge the master branch.


```shell
python tools/convert_datasets/hku_is.py /path/to/HKU-IS.rar
```
Collaborator:

Could you update the Chinese documentation accordingly?

        img_dir='images/training',
        ann_dir='annotations/training',
        pipeline=train_pipeline),
    val=dict(
Collaborator:

Can we use a concat dataset (#833) and do the evaluation separately? (A possible config sketch follows.)
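
A rough sketch of what that could look like, assuming an mmdet-style `ConcatDataset` config with `separate_eval` (the dataset types and paths below are placeholders, and the exact form supported by #833 may differ):

```python
val=dict(
    type='ConcatDataset',
    separate_eval=True,  # report metrics per sub-dataset instead of jointly
    datasets=[
        dict(
            type='DUTSDataset',
            data_root='data/DUTS',
            img_dir='images/validation',
            ann_dir='annotations/validation',
            pipeline=test_pipeline),
        dict(
            type='ECSSDDataset',
            data_root='data/ECSSD',
            img_dir='images',
            ann_dir='annotations',
            pipeline=test_pipeline),
    ])
```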

@@ -38,6 +38,7 @@ def single_gpu_test(model,
efficient_test=False,
opacity=0.5,
pre_eval=False,
return_logit=False,
Collaborator:

Add a docstring for this new argument.

if return_logit:
    output = seg_logit
else:
    if seg_logit.shape[1] >= 2:
@Junjun2016 (Collaborator) commented Nov 2, 2021

It would be better to add some comments for the different cases (different shapes); see the sketch below.
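
For example, the comments could read roughly as follows (assuming the two branches correspond to a multi-class softmax head versus a single-channel sigmoid/BCE head; adjust to the actual intent):

```python
if return_logit:
    # Keep the continuous score map; threshold-sweeping SOD metrics such as
    # max F-measure need it.
    output = seg_logit
else:
    if seg_logit.shape[1] >= 2:
        # Multi-class (softmax) head: the channel dimension holds per-class scores.
        ...
    else:
        # Single-channel (sigmoid/BCE) head: a single saliency score per pixel.
        ...
```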

@RockeyCoss (Contributor)

Please merge the master branch into your branch, thank you.

@MengzhangLI (Contributor)

Hi @acdart, sorry to bother you.

Do you plan to continue working on this PR? We can devote more time or people next month if you need help.

Looking forward to your reply.

Best,

@Junjun2016 (Collaborator)

Add the class names and palettes for the new SOD datasets to mmseg/core/evaluation/class_names.py (a sketch follows).
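
A hedged sketch of what that could look like, following the existing `*_classes()` / `*_palette()` convention in class_names.py (the exact class names and colors for the SOD datasets are assumptions here):

```python
def duts_classes():
    """DUTS class names for external use."""
    return ['background', 'foreground']


def duts_palette():
    """DUTS palette for external use."""
    return [[0, 0, 0], [255, 255, 255]]
```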

wjkim81 pushed a commit to wjkim81/mmsegmentation that referenced this pull request Dec 3, 2023
sibozhang pushed a commit to sibozhang/mmsegmentation that referenced this pull request Mar 22, 2024
@acdart closed this by deleting the head repository on May 12, 2024
Labels: WIP (Work in process)
6 participants