
added evaluation script for PPHumanSeg model #130

Merged · 9 commits · Feb 22, 2023

Conversation

labeeb-7z (Contributor)

This PR is meant to add an evaluation script for the PP Human segmentation model present in the zoo, as mentioned in issue #119.

I referred to the val script used by PaddleSeg, which can be found here and here, and implemented the evaluation script for PPHumanSeg in the same manner as the other evaluation scripts in the zoo.
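(For context, here is a minimal sketch of what such a segmentation evaluation loop boils down to; `model.infer` and the dataset iteration are hypothetical names for illustration, not the exact code in this PR.)

```python
import numpy as np
from tqdm import tqdm

def evaluate(model, dataset, num_classes=2):
    # Accumulate per-class pixel counts over the whole validation set.
    intersect_all = np.zeros(num_classes)
    pred_all = np.zeros(num_classes)
    label_all = np.zeros(num_classes)

    for image, label in tqdm(dataset):
        pred = model.infer(image)  # per-pixel class ids, same shape as label
        for c in range(num_classes):
            pred_c = pred == c
            label_c = label == c
            intersect_all[c] += np.logical_and(pred_c, label_c).sum()
            pred_all[c] += pred_c.sum()
            label_all[c] += label_c.sum()

    # IoU per class, then averaged over classes: mIoU.
    union = pred_all + label_all - intersect_all
    class_iou = intersect_all / np.maximum(union, 1)
    return class_iou.mean()
```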

This is the output I'm getting after running the evaluation script:

[Screenshot from 2023-02-10 09-55-30]

Let me know if this PR can be improved further. Thanks!

@fengyuentau self-assigned this Feb 10, 2023
@fengyuentau self-requested a review February 10, 2023 08:39
@fengyuentau added the "evaluation" label (adding tools for evaluation or bugs of eval scripts) Feb 10, 2023
@fengyuentau (Member) left a comment


Thank you for the contribution! Please take a look at the comments below.

Review comments (resolved) on:
- tools/eval/README.md
- tools/eval/datasets/pp_humanseg.py
- tools/eval/eval.py
@WanliZhong (Member) commented Feb 10, 2023

@labeeb-7z Thanks for your contribution! Could you add the evaluation results of the fp32 and int8 models to the PPHumanSeg README? You can add the information like the example below:


Results of accuracy evaluation with tools/eval.

| Models           | Accuracy |
| ---------------- | -------- |
| PPHumanSeg       | 0.9023   |
| PPHumanSeg quant | 0.xxxx   |

\*: 'quant' stands for 'quantized'.

@labeeb-7z (Contributor, Author)

@WanliZhong here's the output I got for the quantized model:
[image]

@WanliZhong (Member)

> @WanliZhong here's the output I got for the quantized model: [image]

There is too much loss of accuracy in the quantized model and we need to consider re-quantizing the model. Thank you for the evaluation script! 👍

@fengyuentau (Member) left a comment


Some minor comments below. I will check the implementation of the evaluation metric later.

Review comments (resolved) on:
- models/human_segmentation_pphumanseg/README.md
- tools/eval/eval.py
@labeeb-7z (Contributor, Author) commented Feb 11, 2023

@fengyuentau For more reference on the evaluation metric, you can have a look at the metrics module of PaddleSeg, which is used by the PaddleSeg evaluation script I linked in the PR description.
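(For anyone following along, my understanding is that the metric computed there reduces to the standard per-class IoU averaged over classes, with ignored pixels excluded:)

```math
\mathrm{IoU}_c = \frac{|P_c \cap G_c|}{|P_c| + |G_c| - |P_c \cap G_c|},
\qquad
\mathrm{mIoU} = \frac{1}{C}\sum_{c=1}^{C}\mathrm{IoU}_c
```

where $P_c$ and $G_c$ are the sets of pixels predicted as, and labelled as, class $c$.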

@fengyuentau (Member) left a comment


I ran the official evaluation script and the mIoU of the model should be 0.8602. There are many differences between your implementation and the official evaluation script; below are some of the ones I found.

I am not saying you have to follow the official implementation completely, just make sure yours is correct. If you cannot ensure correctness on your own, following the official implementation will save a lot of time.

Review comments (resolved) on:
- tools/eval/datasets/minisupervisely.py
@labeeb-7z (Contributor, Author)

I've moved the *_all variables outside the loop (apologies for that). As for the ignore index and adding 1, I've tried to follow the PaddleSeg implementation. And for the one_hot vectors, again, I have implemented the one_hot function of PaddleSeg and followed their evaluation script.

Let me know of any other changes. The updated mIoU is:
[image]
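(A rough numpy sketch of the one-hot/ignore-index trick being described, under the assumption that ignored pixels are mapped to a dropped channel 0; this is an illustration, not the exact PaddleSeg or PR code.)

```python
import numpy as np

def one_hot_areas(pred, label, num_classes, ignore_index=255):
    """Per-class intersect/pred/label pixel counts, skipping ignored pixels."""
    pred = pred.reshape(-1).astype(np.int64)
    label = label.reshape(-1).astype(np.int64)

    # Shift class ids by +1 so ignored pixels can occupy channel 0,
    # which is dropped after one-hot encoding.
    keep = label != ignore_index
    pred = np.where(keep, pred + 1, 0)
    label = np.where(keep, label + 1, 0)

    eye = np.eye(num_classes + 1)
    pred_one_hot = eye[pred][:, 1:]    # shape (N, num_classes)
    label_one_hot = eye[label][:, 1:]

    intersect_area = (pred_one_hot * label_one_hot).sum(axis=0)
    pred_area = pred_one_hot.sum(axis=0)
    label_area = label_one_hot.sum(axis=0)
    return intersect_area, pred_area, label_area
```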

@fengyuentau (Member) left a comment


Thank you for the update! Let's try to reproduce the exact mIoU number.

Comment on lines 55 to 56:

```python
pbar.set_description(
    "Evaluating {} with {} val set".format(model.name, self.name))
```
Member:

These two lines should be placed outside the for loop since the description does not need to be updated every single iteration.
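Something along these lines (a hypothetical sketch of the suggested rearrangement, assuming `pbar` wraps the dataset with tqdm; `dataset`, `img`, and `gt` are placeholder names):

```python
from tqdm import tqdm

pbar = tqdm(dataset)
# Set the description once; it does not change between iterations.
pbar.set_description(
    "Evaluating {} with {} val set".format(model.name, self.name))
for img, gt in pbar:
    ...  # run inference and accumulate metrics
```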

Contributor (Author):

This was carried over from the eval script of icdar.
I've changed it for this script; should I also change it for icdar?

Member:

> I've changed it for this script; should I also change it for icdar?

Not in this pull request.

Review comment (resolved) on tools/eval/datasets/minisupervisely.py
@fengyuentau (Member) left a comment


Overall looks good 👍 We can merge this pull request after the comments below are resolved.

Review comments (resolved) on:
- models/human_segmentation_pphumanseg/README.md
- models/human_segmentation_pphumanseg/pphumanseg.py
- tools/eval/README.md
- tools/eval/datasets/minisupervisely.py
@labeeb-7z (Contributor, Author)

@fengyuentau I've made the suggested changes, thanks!

@fengyuentau (Member) left a comment


Thank you for the contribution! 👍

@fengyuentau fengyuentau merged commit 578fa8e into opencv:master Feb 22, 2023
fengyuentau pushed a commit that referenced this pull request Jun 8, 2023
* added evaluation script for PPHumanSeg

* added quantized model, renamed dataset

* minor spacing changes

* moved _all variables outside loop and updated accuracy

* removed printing for class accuracy and IoU

* added 2 transforms

* evaluation done on same size tensor as input size with mIoU 0.9085

* final changes

* added mIoU and reference
@WanliZhong WanliZhong added this to the 4.9.0 (first release) milestone Dec 28, 2023