MAINT/TST: Add GitHub actions tests for single workflow cross-platform testing #1576
Conversation
It looks like this is only running on one thread, so it's no faster, but this is how it would work, and indeed the same thing on my laptop runs several times faster.
In that case, I'd leave it as it is now. I see that you are also experimenting with Actions. Any reasonable success with that? That would allow us to test on Mac.
So far: yes and no. Let me test a bit more and see what I can do.
Well, I've got macOS working great. Still struggling with env caching on Windows: https://github.com/adriangb/geopandas/actions/runs/218669355 Once that's working, you should be able to test all platforms using a single provider and workflow file, and all runs will finish in < 5 min.
@martinfleis can you take a look at this? You won't be able to see the runs here, but you can see them at https://github.com/adriangb/geopandas/actions. Windows still takes a long time (~12 min) and I'm not sure if there's a way around that, but failures happen in this order:
Most of the time, tests will fail on Ubuntu, so the PR author will get feedback within 5 min. The linting also now runs the actual pre-commit instead of running hardcoded commands.
I quite like this (though I'm not sure about the willingness of others to switch from Travis/AppVeyor to GHA). A few questions:
macOS is not running the PyGEOS tests; is that intentional? I think it should.
DEV is not using master versions of pandas, shapely, and matplotlib as it currently does on Travis (right?). Can we add it as it is now (and allow a failure)? It is quite important to catch changes in pandas master. This is also the slowest test on Travis, and I assume it will be the slowest here as well (does the cache help with pip installs from git?)
Lines 45 to 47 in ca7c95e
- if [ "$DEV" ]; then pip install git+https://github.com/pydata/pandas.git; fi
- if [ "$DEV" ]; then pip install git+https://github.com/matplotlib/matplotlib.git; fi
- if [ "$DEV" ]; then pip install git+https://github.com/Toblerity/Shapely.git; fi
Does a failure in one job stop all the others?
.github/workflows/tests.yaml
Outdated
# Increase this value to reset cache if etc/example-environment.yml has not changed
CACHE_NUMBER: 0
How does this work in practice? When does it fetch new versions?
It's just there so that you can force a cache refresh (e.g. if some package did something funky with their PyPI uploads). You would edit this YAML and bump that number.
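The bump works because the number is part of the cache key, so changing it invalidates the cache even when the environment file is unchanged. A minimal sketch of such a step, with the key layout and cache path assumed rather than copied from this PR's workflow:

```yaml
# Illustrative only: a cache step whose key embeds CACHE_NUMBER.
# Bumping CACHE_NUMBER changes the key and forces a fresh environment,
# even if etc/example-environment.yml has not changed.
- uses: actions/cache@v2
  with:
    path: ~/conda_pkgs_dir
    key: ${{ runner.os }}-conda-${{ env.CACHE_NUMBER }}-${{ hashFiles('etc/example-environment.yml') }}
```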
Added
I can add that. I think the cache won't help much with pip+git installs because I think pip always re-installs.
Yep
So I was able to get QGIS to install on Windows via chocolatey (which comes pre-installed on GitHub runners), but I'm at a loss as to how to get the Any input, @brendan-ward? This would be nice because it would also add testing with pygeos on Windows.
You are installing QGIS to make pygeos work? What about the conda-forge version? I think that we can use pygeos from conda-forge for most of our environments at this point and keep the git version in the dev one. That should resolve the Windows issue, I guess.
So I think I have everything working now. This isn't any faster than Travis, but at least you get to test macOS and Windows on the same provider. I think it could be a bit faster if we are able to get mamba + caching working (https://github.com/goanpeca/setup-miniconda/issues/49), but I think this is as far as I'm going to take this for now, unless you decide to go all in on GitHub Actions @martinfleis
Thanks a lot for this, I see that it took a while. Let me try to sum up the pros and cons of this change:
Pros:
- all tests on a single provider
- testing macOS (which is a big plus)
- quick and easy linting
- much quicker response in the PR in case of a fail
Cons:
- no colours in the output (harder to spot fails/skips)
- it is not immediately clear that dev failed (but it is in annotations)
I have also added some comments, but hold off on any changes until we know the willingness of others (mainly @jorisvandenbossche) to switch CI. I do like Travis (I do not like AppVeyor), but I feel that this might actually be better, especially since everything is in one place. It also makes maintenance easier, because I can restart Travis builds but not AppVeyor ones (I think only Joris has rights there; with GHA we would not have to think about that).
.github/workflows/tests.yaml
Outdated
run: |
  if ${{ matrix.dev }} != true
  then
    pytest -v --cov=geopandas --cov-report xml -r s --numprocesses=auto || (echo "RUN FAILED, RUNNING SEQUENTIALLY FOR DEBUGGING" && pytest -v -r s && exit 1)
What exactly happens here? You run the tests once, and if they fail you run them again with a verbose traceback? So if some of the tests fail, it will run them all twice?
My intention was to put -x on the first run, which would make it skip to the second run when it hits the first failure. This was mainly to satisfy any concerns that the output from pytest-xdist would not be clean enough. But realistically, looking at the outputs, I think it's identical, and it would be cleaner to do the single parallelized run.
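The fallback in that workflow line is plain shell `||` semantics; a stand-in demo, where the two functions are placeholders for the parallel and sequential pytest invocations (the real workflow appends `&& exit 1` so the job still fails after the debug rerun):

```shell
# Stand-in demo of the `parallel || (echo ... && sequential && exit 1)` idiom:
# if the parallel run fails, rerun sequentially for readable output.
run_parallel() { return 1; }                   # placeholder: pretend the parallel run failed
run_sequential() { echo "sequential rerun"; }  # placeholder: sequential pytest run

result=$(run_parallel || { echo "RUN FAILED, RUNNING SEQUENTIALLY FOR DEBUGGING"; run_sequential; })
```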
I got colors working! Apparently it's just that pytest fails to auto-detect GHA as supporting ANSI colors, but adding the --color=yes flag fixes it. I re-arranged some things and implemented your suggestions. I also tested that the PostGIS tests will indeed fail if any of the tests get skipped. Runtimes are now:
All postgis tests are in
A couple of new changes:
@adriangb thanks a lot for exploring this! Personally, I am also quite happy with Travis (and, like Martin, less so with AppVeyor ;)), so I don't necessarily feel an urge to move (they have served us well for a long time). But I suppose it is also partly "old habits" (and maybe for people not used to it, Travis might be less welcoming compared to GitHub Actions), and having support for the different OSes on a single service is of course a clear plus. Now, some general comments:
We can make the build fail or pass, but there's no "allowed failure" like Travis has. Other builds will not stop/fail because of that build, and you can mark that check as not required for merging. So effectively you have to choose to (1) mark it as passed and get green checks everywhere or (2) mark it as failed and get red checks everywhere. I figure they're going to fix this at some point: actions/runner#2347
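GHA's closest approximation today is `continue-on-error`, which keeps one matrix entry from failing the whole workflow but still shows a green check rather than a distinct "allowed failure" state. A sketch, with the job and matrix layout assumed for illustration:

```yaml
# Illustrative sketch: `continue-on-error` keeps a dev-build failure from
# failing the workflow, but the check shows green rather than an
# "allowed failure" state as on Travis.
jobs:
  test:
    runs-on: ubuntu-latest
    continue-on-error: ${{ matrix.dev }}
    strategy:
      matrix:
        dev: [false, true]
```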
I separated it so that we can specifically check whether the tests are skipped, which rules out the situation (which I faced while developing this) where the postgis setup failed, the tests were therefore skipped, and that was marked as a pass even though we didn't test what we wanted to.
I'll have to check on that. If it isn't, I can add it easily, since it should be the same as
They're not, but that is not a huge issue. On Travis they're not either; you have to find them in the log, so an indication of "hey, there's a failure" is enough. If we're going to GHA, then we should have everything in one place. I am not a fan of options 2 and 3. If we leave dev on Travis, it still shows ✔️ on the PR and you have to manually open it to see.
I'd be personally fine with the (forthcoming, assuming fix in conda action) situation where the only annotation is the existing one marking the failure. If it could list those failed tests, even better, but not critical.
env:
  USE_PYGEOS: 0
run: |
  pytest -v -r s -n auto --color=yes --cov=geopandas --cov-append --cov-report term-missing --cov-report xml geopandas/
Above you mentioned at some point that it doesn't actually run faster on CI with pytest-xdist? If that's the case, I would just leave out the -n auto (we can still recommend it for local usage in the contributing docs, where it makes an actual difference).
Here it does help. But not too much since most of the time is spent in conda anyway.
> Here it does help. But not too much since most of the time is spent in conda anyway.
Do you know how much? (it halves the time spent in the tests?)
Just asking, because if it is not much, I would maybe rather leave it out to avoid additional complexity (or strange issues from running tests in parallel). In which case we can of course still recommend it for local use in the contributing guide
Yes, assuming that the other warnings will go away, then those annotations are indeed quite OK. We could also just require a green build for now and see how that goes (after fixing the current failures -> #1588), e.g. more quickly skip a test (or mark one as xfail) on the dev build if we can't directly fix it.
@jorisvandenbossche so I think I've resolved all of the issues :). Here's a sample run: https://github.com/adriangb/geopandas/actions/runs/227850192
It looks really nice now. Even the annotations are clean at the moment and show only those failures in dev CI. |
Thanks a lot for the update!
- uses: actions/checkout@v2
- name: Setup Conda
  uses: s-weigand/setup-conda@v1.0.4
Is there a reason you switched to a different action?
(the one from goanpeca has now moved to a conda repo, so it seems like an "officially" supported one: https://github.com/conda-incubator/setup-miniconda)
Only because of those warnings. This one seemed to do the job without any warnings (and is easier than setting up conda manually). I figure we can always switch back whenever those warnings are fixed (or if GitHub fixes the UI for marking as allowed failure).
This one also uses the pre-installed conda, which makes it faster; the incubator one downloads the latest Miniconda3. TL;DR: I'd stick with this one for CI, but bear in mind that future issues may be due to an outdated version of conda.
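For comparison, the incubator action discussed above would be used roughly like this; the version tag and input values are assumptions, not taken from this PR:

```yaml
# Hypothetical usage of the conda-incubator action mentioned above; it
# downloads the latest Miniconda3 rather than using the runner's
# pre-installed conda, which is why it can be slower on CI.
- uses: conda-incubator/setup-miniconda@v2
  with:
    environment-file: etc/example-environment.yml
    auto-update-conda: true
```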
I just fixed the conflicts. I also just realized that I set this up as a flip-the-switch type of thing, but I guess you should be able to have Travis and GHA running at the same time for a bit, just to make sure everything is okay. I guess you could even direct all CI efforts like #1617 towards GHA to really stress test it, and let Travis live on as (1) a backup and (2) until it starts failing. The difference to this PR is minimal; you would just not delete the Travis files, I think.
@adriangb thanks for the update, and sorry for forgetting about this one. Let's move forward with it! Good point that we could have all of them in parallel for a little while. But it might not be as simple as adding back the travis/appveyor files, as you also updated some other things that were used in there (although that is maybe limited). In any case, I am also fine with just doing the switch, I think.
I am fine keeping GHA only.
@adriangb thanks a lot for your work on this!
Happy to help! It looks like the tests themselves passed. A couple of follow-up items:
coverage:
  status:
    project:
      default:
        target: 95%    # the required coverage value
        threshold: 1%  # the leniency in hitting the target
Thanks a lot @adriangb! I think that the next step will be to switch to
@jorisvandenbossche I guess you are the only one with proper access rights for this. @adriangb, unless you are working on the codecov stuff (resulting in a PR), can you open an issue for that?
I removed the webhook for AppVeyor now; I suppose that should be sufficient.