Only test certain modules #1152
base: main
Conversation
Some modules, like tpch/{spark,polars,duckdb}, don't benefit from being tested regularly, and others, like runtime, we don't care about. Originally I was going to add a `pytest.mark.skip`, but then I realized that would make it hard to explicitly run certain tests. Instead, I think we want to make CI more specific about which tests it cares about.
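For illustration, one nice property of path-based selection over a skip marker is that the excluded suites stay runnable on demand by pointing pytest at them directly. A minimal sketch, assuming the tests/ layout implied by the diff below (the exact file name is a guess based on tpch/{spark,polars,duckdb}):

```bash
# Not in CI's default path list, but still runnable explicitly when wanted.
# tests/tpch/test_polars.py is an assumed file name, by analogy with
# tests/tpch/test_dask.py from the diff below.
python -m pytest tests/tpch/test_polars.py
```

With `pytest.mark.skip`, running these suites at all would instead require editing the code or overriding the skip, which is the drawback mentioned above.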
@@ -10,4 +10,4 @@ then
   EXTRA_OPTIONS="$EXTRA_OPTIONS --benchmark"
 fi

-python -m pytest $EXTRA_OPTIONS $@
+python -m pytest $EXTRA_OPTIONS $@ tests/{benchmarks,stability,workflows,tpch/test_dask.py}
Won't this always ignore the scheduled runs of `--tpch-non-dask` then?
benchmarks/.github/workflows/tests.yml, line 110 in 158b07c:
    - name: Determine if all TPC-H benchmarks should be run
I don't have any desire to run non-Dask TPC-H benchmarks on a schedule. Is there some motivation for this, or is it just something we're doing because historically we tend to run things on a daily basis?
If so, maybe that makes sense for projects included in git-tip, because the code used in those benchmarks changes. For these projects, however, the software is pinned. I see no reason to run them on any schedule except manually.
Open to disagreement though if people have other thoughts.
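For what "manually" could look like in practice: GitHub Actions workflows can be triggered on demand with the gh CLI, assuming the non-Dask TPC-H job were put behind a workflow_dispatch trigger. The workflow file name below comes from the reference above; the dispatch trigger and the input name are assumptions for illustration only, not this repository's actual configuration:

```bash
# Hypothetical manual trigger; the 'tpch_non_dask' input and the presence
# of a workflow_dispatch trigger in tests.yml are assumptions.
gh workflow run tests.yml --ref main --field tpch_non_dask=true
```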
This was only introduced by #1083, sparked by this comment from @fjetter: #1044 (comment)
> ...I want this to run somewhat regularly (every commit, once a day, etc.)
Understood. I disagree with that comment.
@fjetter any objection to not running these benchmarks on a regular basis?