
feat: add straight-eval arg in evaluate script #793

Merged: charlesmindee merged 5 commits into main from eval on Jan 19, 2022

Conversation

charlesmindee (Collaborator) commented:

This PR adds a straight-eval argument to the evaluate script so that the metrics can be computed with straight bounding boxes while the detection predictor works with polygons (i.e. assume_straight_pages can be False); a minimal sketch of the idea is given below.
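As a rough illustration of what such a flag enables (a minimal sketch; the flag wiring and the helper name below are hypothetical, not the actual code from scripts/evaluate.py), polygon predictions of shape (N, 4, 2) can be collapsed into axis-aligned boxes before being passed to the localization metric:

```python
import argparse

import numpy as np

parser = argparse.ArgumentParser()
# Hypothetical wiring of the flag discussed in this PR
parser.add_argument("--straight-eval", action="store_true",
                    help="evaluate with straight boxes even if the predictor outputs polygons")


def polygons_to_straight_boxes(polys: np.ndarray) -> np.ndarray:
    """Collapse (N, 4, 2) polygons into (N, 4) straight boxes (xmin, ymin, xmax, ymax)."""
    xmin, ymin = polys[..., 0].min(axis=-1), polys[..., 1].min(axis=-1)
    xmax, ymax = polys[..., 0].max(axis=-1), polys[..., 1].max(axis=-1)
    return np.stack([xmin, ymin, xmax, ymax], axis=-1)


# In the evaluation loop, predictions would be flattened whenever the flag is set,
# so the metric only ever compares straight geometries:
#   args = parser.parse_args()
#   if args.straight_eval:
#       pred_boxes = polygons_to_straight_boxes(pred_polys)
```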
I also fixed a bug in the predictor: pages are now cast to numpy before the extraction function is called (when iterating over a dataset we otherwise get TensorFlow/PyTorch tensors); see the sketch after this comment.

Any feedback is welcome!
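For the second point, here is a minimal sketch of the kind of cast involved, using a generic helper (as_numpy is a name introduced here for illustration, not the actual predictor code): dataset iterators yield framework tensors, while the extraction step works on numpy arrays.

```python
import numpy as np


def as_numpy(page):
    """Return a numpy array whether `page` is an ndarray, a torch.Tensor or a tf.Tensor."""
    if isinstance(page, np.ndarray):
        return page
    if hasattr(page, "cpu"):    # torch.Tensor (possibly on GPU); tf eager tensors also take this path
        return page.cpu().numpy()
    if hasattr(page, "numpy"):  # tf.Tensor in eager mode
        return page.numpy()
    return np.asarray(page)


# Pages coming from a DataLoader / tf.data pipeline are cast before extraction:
#   pages = [as_numpy(page) for page in pages]
```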

@charlesmindee charlesmindee self-assigned this Jan 10, 2022
@charlesmindee charlesmindee added the ext: scripts, framework: pytorch, framework: tensorflow, module: models, topic: text detection and type: enhancement labels Jan 10, 2022
@charlesmindee charlesmindee added this to the 0.5.1 milestone Jan 10, 2022

codecov bot commented Jan 10, 2022

Codecov Report

Merging #793 (7cbfc60) into main (fd850e5) will decrease coverage by 0.05%.
The diff coverage is 50.00%.

@@            Coverage Diff             @@
##             main     #793      +/-   ##
==========================================
- Coverage   96.06%   96.00%   -0.06%     
==========================================
  Files         130      131       +1     
  Lines        4901     4937      +36     
==========================================
+ Hits         4708     4740      +32     
- Misses        193      197       +4     
Flag        Coverage Δ
unittests   96.00% <50.00%> (-0.06%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files                             Coverage Δ
doctr/models/predictor/pytorch.py          97.14% <0.00%> (-0.36%) ⬇️
doctr/models/predictor/tensorflow.py       100.00% <100.00%> (ø)
doctr/models/detection/linknet/base.py     93.97% <0.00%> (-1.21%) ⬇️
doctr/models/zoo.py                        100.00% <0.00%> (ø)
doctr/datasets/ocr.py                      92.30% <0.00%> (ø)
doctr/datasets/ic13.py                     96.15% <0.00%> (ø)
doctr/datasets/loader.py                   100.00% <0.00%> (ø)
doctr/datasets/__init__.py                 100.00% <0.00%> (ø)
doctr/datasets/detection.py                95.65% <0.00%> (ø)
doctr/datasets/recognition.py              80.95% <0.00%> (ø)
... and 7 more

Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update fd850e5...7cbfc60.

fg-mindee (Contributor) commented:

The CI job for the evaluate script is failing :/

fg-mindee (Contributor) left a comment:

Looks good! I only added a small question as a comment

doctr/models/predictor/tensorflow.py (review thread, outdated, resolved)
fg-mindee previously approved these changes Jan 18, 2022
fg-mindee (Contributor) left a comment:

Looks good!
I think some of the code in the evaluation script could be refactored, but that can be done in another PR! Either way, we'll need to harmonize the task-specific evaluation scripts (in references) with this one.

scripts/evaluate.py (review thread, outdated, resolved)
fg-mindee (Contributor) left a comment:

Cheers!

@charlesmindee charlesmindee merged commit a4f22ba into main Jan 19, 2022
@charlesmindee charlesmindee deleted the eval branch January 19, 2022 08:26