Add Slack notification support for doc tests #16253
Conversation
The documentation is not available anymore as the PR was closed or merged.
Thanks a lot for working on this!
docs/source/quicktour.mdx (outdated):

<tf.Tensor: shape=(2, 5), dtype=float32, numpy=
array([[0.00205667, 0.00176607, 0.01154934, 0.21209256, 0.7725354 ],
       [0.20841935, 0.1826222 , 0.19692972, 0.17550416, 0.2365246 ]],
      dtype=float32)>
This is super flaky and won't pass on any other machine or with a new TF release. We need to test at a lower precision or ignore the output.
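The two options the reviewer mentions can be sketched with Python's standard `doctest` module. This is a minimal illustration of the general technique, not the mechanism this PR actually uses: either round floats before printing so the expected output is stable, or use the `ELLIPSIS` directive to ignore the trailing digits.

```python
import doctest

# Hedged sketch: two standard ways to make float-heavy doctests robust.
# The probabilities below are sample values, not real model output.
source = '''
>>> probs = [0.00205667, 0.21209256, 0.7725354]
>>> [round(p, 4) for p in probs]
[0.0021, 0.2121, 0.7725]
>>> print("loss = 0.7725354")  # doctest: +ELLIPSIS
loss = 0.77...
'''

parser = doctest.DocTestParser()
test = parser.get_doctest(source, {}, "precision_demo", None, 0)
runner = doctest.DocTestRunner(verbose=False)
results = runner.run(test)
print(results.failed)  # 0 if both examples pass despite the truncated float
```

With `ELLIPSIS`, `0.77...` matches any continuation of the digits, so the example keeps passing when the exact float changes across machines or library versions.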
Yeah, true! I still don't understand why my local pytest or python -m pytest doesn't pick up the precision flag here... Will adapt!
I tried to fight with it too on a branch with little success. I'd advocate to not lose sleep on it and just ignore the output since we can't get a result that passes both locally for me and on the runner :-)
n_blanks_left = 25
n_blanks_right = 15
category_failures = {k: len(v["failed"]) for k, v in doc_test_results.items() if isinstance(v, dict)}
category_failures_report = [f"| {category}".ljust(n_blanks_left, " ") + f"| {num}".ljust(n_blanks_right, " ") + "|" for category, num in category_failures.items()]
This needs some black 😅
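After black formatting, the snippet above would be reflowed roughly as follows. The `doc_test_results` value here is an assumed example payload (the real one comes from the CI run); the formatting itself is what black produces for long dict and list comprehensions.

```python
# Hedged sketch of the snippet above after black formatting; behavior is unchanged.
# `doc_test_results` is a hypothetical example payload, not real CI output.
doc_test_results = {
    "quicktour": {"failed": ["docs/source/quicktour.mdx"]},
    "modeling": {"failed": []},
    "commit_sha": "abc123",  # non-dict entries are skipped by the isinstance check
}

n_blanks_left = 25
n_blanks_right = 15

category_failures = {
    k: len(v["failed"]) for k, v in doc_test_results.items() if isinstance(v, dict)
}
category_failures_report = [
    f"| {category}".ljust(n_blanks_left, " ")
    + f"| {num}".ljust(n_blanks_right, " ")
    + "|"
    for category, num in category_failures.items()
]
print("\n".join(category_failures_report))
```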
LGTM!
I will look at this later today. I'm still learning this kind of thing. If it is urgent, don't hesitate to go ahead though.
I will continue the review process, but here is what I have so far.
utils/documentation_tests.txt (outdated):

@@ -31,6 +31,6 @@ src/transformers/models/blenderbot/modeling_blenderbot.py
src/transformers/models/blenderbot_small/modeling_blenderbot_small.py
src/transformers/models/plbart/modeling_plbart.py
src/transformers/generation_utils.py
src/transformers/models/resnet/modeling_resnet.py
I kind of get the idea. If I understand it correctly, here are my 2 cents:
- maybe move this line before src/transformers/generation_utils.py?
- maybe in a future PR, we can have a script to sort the lines by some order (say, alphabetical)
Alphabetic sort is a very good idea!
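The suggested sorting script could be sketched as below. The function name and the idea of rewriting the file in place are assumptions for illustration; this is not part of the PR.

```python
# Hedged sketch of a helper that sorts the file list in
# utils/documentation_tests.txt alphabetically, dropping blank lines.
from pathlib import Path


def sort_doc_test_list(path: str) -> list[str]:
    """Rewrite the file at `path` with its non-empty lines sorted alphabetically."""
    lines = [line for line in Path(path).read_text().splitlines() if line.strip()]
    ordered = sorted(lines)
    Path(path).write_text("\n".join(ordered) + "\n")
    return ordered
```

Running this in CI (or as a pre-commit check) would keep the list deterministic and make misplaced entries like the one flagged above easy to spot in review.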
.github/workflows/doctests.yml (outdated):

- "docs/**"
types: [assigned, opened, synchronize, reopened]
# schedule:
#   - cron: "0 0 * * *"
Guess this is temporary (to check this PR works), and will be changed back to schedule at the end?
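Restoring the daily trigger would presumably mean uncommenting the schedule block; a standard GitHub Actions cron trigger of that shape looks like this (a generic sketch, not this PR's final diff):

```yaml
on:
  schedule:
    - cron: "0 0 * * *"  # runs daily at midnight UTC
```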
As mentioned offline, I think it would be clearer if, instead of being split into categories, we could directly see the files that failed (which would also help with regard to ownership).
Other than that, this looks good to me!
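The per-file suggestion could be sketched as follows. The `doc_test_results` payload is a hypothetical example mirroring the dict-of-categories structure shown earlier, not the PR's actual data.

```python
# Hedged sketch: list each failed file directly instead of per-category counts.
# `doc_test_results` below is an assumed example payload, not real CI output.
doc_test_results = {
    "quicktour": {"failed": ["docs/source/quicktour.mdx"]},
    "modeling": {"failed": ["src/transformers/models/resnet/modeling_resnet.py"]},
    "commit_sha": "abc123",  # non-dict entries are skipped below
}

failed_files = sorted(
    f
    for v in doc_test_results.values()
    if isinstance(v, dict)
    for f in v["failed"]
)
per_file_report = "\n".join(f"* {f}" for f in failed_files)
print(per_file_report)
```

A flat, sorted file list like this also maps naturally onto code ownership, since each path identifies the model or module whose docstrings broke.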
LGTM (without looking at the notification script in detail). Just 2 questions (due to my lack of expertise):

with:
  repository: 'huggingface/transformers'
  path: transformers

(just wondering why we don't need this)
Very good question! And I don't really know the answer :D I just noticed that it didn't work very well with

with:
  repository: 'huggingface/transformers'
  path: transformers

and I also think it's not necessary here, so I removed it. Maybe @LysandreJik can give you a better answer.
* up * up * up * fix * yeh * ups * Empty test commit * correct quicktour * correct * correct * up * up * uP * uP * up * up * uP * up * up * up * up * up * up * up * up * up * up * Update src/transformers/models/van/modeling_van.py * finish * apply suggestions * remove folder * revert to daily testing
What does this PR do?
This PR enables Slack reports for the daily doc tests.