extend the test files #23043
Conversation
The documentation is not available anymore as the PR was closed or merged.
Force-pushed from 67cafdb to 46934d2.
I need to check if the new test files exist before adding them to the list. Will update later.
I have already given my thoughts a thousand times on how such failures should be fixed, but since I'm being ignored again...
This is not my preferred solution, but if we go for this, I'd like the added tests to be logged somewhere, so we can inspect the results. Otherwise we can't debug if there is a failure of the test fetcher, as the generated config is not exactly readable.
@sgugger I don't mean to ignore your suggestion, but the only discussion I remember is this Slack discussion, where you replied to my previous messages with:
Therefore I assume the approach in this PR (current version) is fine. I might have forgotten something you mentioned earlier somewhere else; in that case, please share the link and I am happy to take a look at your suggestion. Otherwise, feel free to share what you think is best. Thank you.
If we keep this approach, I am happy to save the files of these modified versions.
Note that I am not really in favor of duplicating the tests. We have these tests in both the PT/TF and PT/Flax test files, which is already a kind of duplication. And if we copy each version to the other framework again, that is duplication of duplication.
Let's go with logging the modified test files somehow, so we can inspect the result of the code you add.
@@ -45,7 +45,7 @@
)
from ...utils import ModelOutput, add_start_docstrings, add_start_docstrings_to_model_forward, logging
from .configuration_big_bird import BigBirdConfig

a = 1
Will revert before merge: just added to trigger the tests and check everything works as expected.
Looks like it didn't touch any new file though.
Force-pushed from c4f1227 to d666101.
Yeah, I am bad at getting the correct path. Now it works (if I trigger the test via a change in a file), see:
Thanks! One last ask to make it easier to navigate and we should be good to go.
.circleci/config.yml
Outdated
- run: |
    if [ -f filtered_test_list_torch_and_tf.txt ]; then
      mv filtered_test_list_torch_and_tf.txt test_preparation/filtered_test_list_torch_and_tf.txt
    else
      touch test_preparation/filtered_test_list_torch_and_tf.txt
    fi
- run: |
    if [ -f filtered_test_list_torch_and_flax.txt ]; then
      mv filtered_test_list_torch_and_flax.txt test_preparation/filtered_test_list_torch_and_flax.txt
    else
      touch test_preparation/filtered_test_list_torch_and_flax.txt
    fi
Let's put everything in one report for cross tests (named filtered_crossed_test_list for instance)
.circleci/create_circleci_config.py
Outdated
with open(f_path, "w") as fp:
    fp.write(" ".join(job.tests_to_run))
You can append to f_path in your for loop, with a title being the job name, so that everything ends up in the same file.
@sgugger Could you check the last commit and see if it is OK? See the new run page and here. One thing to note is that I put everything regarding cross tests into a single file, as well as into the cross test jobs.
extended_tests_to_run = sorted(extended_tests_to_run)
for job in jobs:
    if job.job_name in ["tests_torch_and_tf", "tests_torch_and_flax"]:
        job.tests_to_run = extended_tests_to_run
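How extended_tests_to_run itself might be built is sketched below. The filename convention (test_modeling_x.py mapping to test_modeling_tf_x.py and test_modeling_flax_x.py) matches the repo's test layout, but the helper name and the existence check are assumptions; the check corresponds to the "check if the new test files exist before adding them" note earlier in the thread:

```python
import os

def cross_test_counterparts(test_file, exists=os.path.isfile):
    """Hypothetical helper: derive the TF/Flax counterpart test files for a
    PyTorch modeling test file, keeping only the ones that actually exist."""
    dirname, basename = os.path.split(test_file)
    prefix = "test_modeling_"
    # Only plain PT modeling test files have counterparts; TF/Flax files
    # are already framework-specific.
    if not basename.startswith(prefix) or basename.startswith(
        (prefix + "tf_", prefix + "flax_")
    ):
        return []
    stem = basename[len(prefix):]
    candidates = [
        os.path.join(dirname, f"{prefix}{fw}_{stem}") for fw in ("tf", "flax")
    ]
    # Existence check, so the generated config never lists missing files.
    return [c for c in candidates if exists(c)]
```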
one single list for all cross test jobs
Thanks!
* fix --------- Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
What does this PR do?
Extend the test files in the cross test CI jobs.
In #21023, the Flax BigBird modeling file was changed and the Flax test file skips the PT/Flax tests. However, the test fetcher is not designed to take the corresponding PyTorch test file into account, and we had a test failure on main in the nightly run. This PR extends the test files for the torch_and_tf and torch_and_flax jobs to avoid such situations. The effect can be seen in this run:
https://app.circleci.com/pipelines/github/huggingface/transformers/63246/workflows/84722f3a-1259-4226-973d-267c74ca9aee/jobs/780372