Updated workflows to store pytest runtimes as test artifacts #2448
Conversation
Codecov Report

@@           Coverage Diff           @@
##            main   #2448   +/-   ##
======================================
  Coverage    99.7%   99.7%
======================================
  Files         283     283
  Lines       25500   25500
======================================
  Hits        25400   25400
  Misses        100     100

Continue to review the full report at Codecov.
@@ -3,6 +3,7 @@ python_files = evalml/tests/*
 filterwarnings =
     ignore::DeprecationWarning
     ignore::PendingDeprecationWarning
+junit_duration_report = call
This flag is set so that we don't report setup/teardown time, just the call time: https://docs.pytest.org/en/6.2.x/reference.html#confval-junit_duration_report
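For reference, with this setting each test's call duration ends up in the `time` attribute of its `<testcase>` element in the junit XML. A minimal sketch (not part of this PR; the report path is just the default one used in these Makefile targets) of pulling those durations back out with the standard library:

```python
# Sketch only: read per-test call durations from a pytest junit XML report.
import xml.etree.ElementTree as ET


def read_call_durations(junit_path):
    """Return {test id: call duration in seconds} from a pytest junit XML report."""
    root = ET.parse(junit_path).getroot()
    durations = {}
    # pytest nests <testcase> elements under one or more <testsuite> elements.
    for case in root.iter("testcase"):
        test_id = f'{case.get("classname")}::{case.get("name")}'
        # With junit_duration_report = call, "time" covers the call phase only,
        # not setup/teardown.
        durations[test_id] = float(case.get("time", 0.0))
    return durations


if __name__ == "__main__":
    durations = read_call_durations("test-reports/junit.xml")
    # Print the ten slowest tests.
    for test_id, seconds in sorted(durations.items(), key=lambda kv: kv[1], reverse=True)[:10]:
        print(f"{seconds:8.2f}s  {test_id}")
```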
Makefile (Outdated)

.PHONY: git-test-other-core
git-test-other-core:
-	pytest evalml/tests --ignore evalml/tests/automl_tests/ --ignore evalml/tests/tuner_tests/ --ignore evalml/tests/model_understanding_tests/ -n 2 --durations 100 --timeout 300 --doctest-modules --cov=evalml --junitxml=test-reports/junit.xml --doctest-continue-on-failure --has-minimal-dependencies
+	pytest evalml/tests --ignore evalml/tests/automl_tests/ --ignore evalml/tests/tuner_tests/ --ignore evalml/tests/model_understanding_tests/ -n 2 --durations 100 --timeout 300 --doctest-modules --cov=evalml --junitxml=test-reports/git-test-other-core-junit.xml --doctest-continue-on-failure --has-minimal-dependencies
Maybe we don't need it for core tests? 🤷♀️
Why not though?
Hehe, I guess theoretically there shouldn't be much of a difference in runtime for core tests that isn't already captured in our non-core tests, unless catboost / xgboost / other dependencies are doing something fishy. But touché, why not, when it doesn't cost us much? 😛
LGTM! Thanks for fixing my copypasta lol
Super cool @angela97lin! Curious what you think the next step is. Maybe you can share some of the scripts you wrote a while ago to parse the XMLs into more human-readable formats?
Makefile (Outdated)

.PHONY: git-test-automl-core
git-test-automl-core:
-	pytest evalml/tests/automl_tests evalml/tests/tuner_tests -n 2 --ignore=evalml/tests/automl_tests/dask_tests --durations 100 --timeout 300 --doctest-modules --cov=evalml --junitxml=test-reports/junit.xml --doctest-continue-on-failure --has-minimal-dependencies
+	pytest evalml/tests/automl_tests evalml/tests/tuner_tests -n 2 --ignore=evalml/tests/automl_tests/dask_tests --durations 100 --timeout 300 --doctest-modules --cov=evalml --junitxml=test-reports/git-test-automl-core-junit.xml --doctest-continue-on-failure --has-minimal-dependencies
Thoughts on setting durations to 0 so that we record every test? Before I knew we could do something cool like this, I limited durations to 100 because I didn't want to flood the console lol
I like it! 😁
@freddyaboulton For sure! I filed #2453 with some of my initial scrappy code, but it's definitely worth having the scripts in the repo so they're accessible to the whole team whenever we want to debug unit test runtimes. I also think it could be super cool if, during a job, we could calculate the difference between the runtime on that branch and the current artifact on main, and raise a warning / flag if the runtime significantly increased, but that might be asking for a little bit more 😝
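A rough sketch of what that branch-vs-main check could look like, assuming the two junit XML artifacts have already been downloaded; the file names, script name, and 10% threshold below are all made up for illustration and are not part of this PR:

```python
# Sketch only: flag a branch whose total pytest runtime grew noticeably vs. main,
# based on two downloaded junit XML artifacts (hypothetical paths).
import sys
import xml.etree.ElementTree as ET


def total_runtime(junit_path):
    """Sum the per-test call durations recorded in a pytest junit XML report."""
    root = ET.parse(junit_path).getroot()
    return sum(float(case.get("time", 0.0)) for case in root.iter("testcase"))


def main(branch_xml, main_xml, threshold=0.10):
    branch_total = total_runtime(branch_xml)
    main_total = total_runtime(main_xml)
    # Relative increase of the branch over main (guard against an empty report).
    increase = (branch_total - main_total) / main_total if main_total else 0.0
    print(f"main: {main_total:.1f}s, branch: {branch_total:.1f}s ({increase:+.1%})")
    if increase > threshold:
        print("WARNING: unit test runtime increased significantly on this branch")
        return 1
    return 0


if __name__ == "__main__":
    # e.g. python compare_runtimes.py branch-junit.xml main-junit.xml
    sys.exit(main(sys.argv[1], sys.argv[2]))
```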
Closes #2396.
Example of artifact: https://github.com/alteryx/evalml/actions/runs/980439245