CI: Split tests in two groups by backend #2194
Conversation
@jreback this solves the disk space problem by splitting the tests into two groups of backends. All green, except the conda-build job, which is a separate issue (see #2195). I couldn't get this working with spark and omnisci, so those two are not being tested here (I'll create issues to add them back if we merge this).
yeah create issues for those (can run as separate backends entirely) will look soon
ci/azure/linux.yml
BACKENDS: "clickhouse impala kudu-master kudu-tserver mysql omniscidb parquet postgres sqlite"
# TODO: omniscidb should be in BACKENDS_1, but it's not compatible with Python 3.8, so it needs to be set individually in the py36 and py37 builds.
# It is not being added because the conda solver takes forever to resolve when pymapd is present, so it's not being tested for now.
BACKENDS_1: "mysql parquet postgres sqlite"
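The group variable above drives which backends a given CI job exercises. A minimal sketch of how a job could consume it, assuming each backend name maps to a pytest marker (the marker-based `pytest` invocation and the loop itself are assumptions for illustration, not the actual pipeline script):

```shell
# Hypothetical sketch: run the test suite once per backend in this group.
BACKENDS_1="mysql parquet postgres sqlite"

for backend in $BACKENDS_1; do
    echo "testing backend: $backend"
    # pytest -m "$backend"   # assumed marker-based test selection per backend
done
```

Splitting by group means each job only needs the docker images for its own backends, which is what reduces the disk footprint.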
is there some way you can actually put the names rather than backends_1, e.g. mysql-parquet-postgres-sqlite (more informative in the displays / code)?
i merged the feedstock conda job so in theory this should be all green if you rebase
I changed the
kk happy to merge this and can proceed?
thanks @datapythonista
Testing whether splitting tests in this naive way solves the disk space problem. I guess docker images for the unused backends should not be downloaded, and storage consumption should be much less.