Add scripts to evaluate models in the zoo on different datasets #69

Merged
merged 1 commit into opencv:master from add_eval_new on Jul 8, 2022

Conversation

@fengyuentau (Member) commented on Jul 6, 2022

  • Evaluation framework
  • Initial dataset for evaluation: ImageNet
  • Report mobilenet accuracy
  • Report ppresnet accuracy

Note: the accuracy of the quantized PP-ResNet is very low, which seems to be caused by bad quantization.
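
For context, a minimal sketch of what such an evaluation entry point could look like. The registry, CLI flags, class layout, folder convention, and model filenames here are illustrative assumptions, not the merged code:

```python
# eval.py -- hypothetical sketch of an evaluation entry point.
# Registry contents, CLI flags, and the folder layout are assumptions.
import argparse
import os

import cv2 as cv
import numpy as np

MODELS = {
    "mobilenet": "image_classification_mobilenetv1_2022apr.onnx",
    "ppresnet": "image_classification_ppresnet50_2022jan.onnx",
}


class ImageNet:
    """Minimal top-1 accuracy evaluator over an ImageNet-style folder layout."""

    def __init__(self, root):
        # Assumes root/<integer_label>/<image files>; real ImageNet tooling
        # maps synset folders to class indices instead.
        self.samples = []
        for label in sorted(os.listdir(root)):
            for fname in os.listdir(os.path.join(root, label)):
                self.samples.append((os.path.join(root, label, fname), int(label)))

    def eval(self, model_name, model_path):
        # Print the model and dataset being evaluated up front.
        print("Evaluating {} on ImageNet ({} images)".format(model_name, len(self.samples)))
        net = cv.dnn.readNet(model_path)
        correct = 0
        for path, label in self.samples:
            # NOTE: real preprocessing also needs mean/std normalization;
            # this sketch keeps only resize + scale + BGR->RGB swap.
            blob = cv.dnn.blobFromImage(cv.imread(path), scalefactor=1.0 / 255,
                                        size=(224, 224), swapRB=True)
            net.setInput(blob)
            correct += int(np.argmax(net.forward()) == label)
        return correct / len(self.samples)


def main(args):
    dataset = ImageNet(args.data_root)
    acc = dataset.eval(args.model, MODELS[args.model])
    print("Top-1 accuracy of {}: {:.4f}".format(args.model, acc))


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Evaluate zoo models on a dataset")
    parser.add_argument("--model", choices=sorted(MODELS), default="mobilenet")
    parser.add_argument("--data_root", required=True, help="path to ImageNet-style val set")
    main(parser.parse_args())
```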

@fengyuentau self-assigned this on Jul 6, 2022
@fengyuentau added the `feature` (New feature or request) label on Jul 6, 2022
@fengyuentau (Member, Author) commented:

cc @WanliZhong

@zihaomu (Member) left a comment:

Thank you! 👍

Inline review comment on the evaluation script:

```python
)

def main(args):
    # Instantiate model
```
Member commented:

At the beginning of the script, we could print the model name and the test data home name.
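
For instance, a minimal sketch of that banner, assuming the parsed arguments carry `model` and `data_root` fields (the attribute names are guesses, not the merged code):

```python
# Hypothetical banner printed at the top of main(); attribute names are assumptions.
def main(args):
    print("Model: {}".format(args.model))
    print("Test data home: {}".format(args.data_root))
    # ... the rest of the evaluation would follow here ...
```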

@fengyuentau (Member Author) replied:

Such information is printed in `dataset.eval()`, like the following:

[screenshot: console output from `dataset.eval()` showing the model name and dataset name]

@fengyuentau (Member Author) commented:

The low accuracy of the quantized PP-ResNet will be fixed in another pull request. Merging as is.
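
For illustration, a usage sketch continuing the hypothetical `eval.py` above, comparing the float and quantized models side by side; both filenames are assumptions, not the zoo's actual file names:

```python
# Continues the hypothetical eval.py sketch above, reusing its ImageNet class.
# Both model filenames below are assumptions for illustration only.
dataset = ImageNet(root="/path/to/imagenet/val")
for name, path in [
    ("ppresnet-fp32", "image_classification_ppresnet50_2022jan.onnx"),
    ("ppresnet-int8", "image_classification_ppresnet50_2022jan-int8.onnx"),
]:
    print("{}: top-1 accuracy = {:.4f}".format(name, dataset.eval(name, path)))
```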

@fengyuentau merged commit 580ef45 into opencv:master on Jul 8, 2022
@fengyuentau deleted the add_eval_new branch on July 8, 2022 06:45
@fengyuentau added the `evaluation` (adding tools for evaluation or bugs of eval scripts) label and removed the `feature` (New feature or request) label on Jul 13, 2022
@fengyuentau added this to the 4.9.0 (first release) milestone on Dec 28, 2023