
Set up GitHub Actions for testing and linting #194

Merged
merged 33 commits on Jul 27, 2020
Conversation

leouieda
Member

@leouieda leouieda commented Jul 24, 2020

Use GitHub Actions as a replacement for the Azure and TravisCI non-deploy builds. The CI runs faster and was easier to configure. Not using the fatiando/continuous-integration scripts yet, but we will migrate as soon as those are ready.

Reminders:

  • Run make format and make check to make sure the code follows the style guide.
  • Add tests for new features or tests that would have caught the bug that you're fixing.
  • Add new public functions/methods/classes to doc/api/index.rst and the base __init__.py file for the package.
  • Write detailed docstrings for all functions/classes/methods. It often helps to design better code if you write the docstrings first.
  • If adding new functionality, add an example to the docstring, gallery, and/or tutorials.
  • Add your full name, affiliation, and ORCID (optional) to the AUTHORS.md file (if you haven't already) in case you'd like to be listed as an author on the Zenodo archive of the next release.

@leouieda leouieda changed the title from "WIP Set up GitHub Actions for testing and linting" to "Set up GitHub Actions for testing and linting" on Jul 24, 2020
@leouieda
Member Author

I'd appreciate a review by anyone who knows a bit about Actions. I'm pretty happy with the setup. Right now, the only part that could benefit from being a script in our continuous-integration repo is installing the conda packages. I suspect the deploy to GitHub Pages will also need to be adapted. I would be OK with keeping this as is and updating it when we update the scripts repo.

@leouieda
Member Author

Not implemented here:

  • Deploy to test.pypi.org when merging on master. This is a nice way to test our deploy so we don't get surprises when tagging a release. Need to change the versioneer config in setup.cfg from pep440 to pep440-pre, otherwise PyPI refuses the upload. Should be done from a single job. Can be done with the PyPI action: https://github.com/marketplace/actions/pypi-publish
  • Deploy to pypi.org on tags. Pretty much the same as above but only runs on tags.
  • Deploy to gh-pages. The actions that exist don't really do what we want most of the time. We could adapt the TravisCI script without much trouble. We don't even need to set up a token; the checkout action should allow us to push to the same repository.

All of these should also only be run in this repository, not forks.
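A deploy job covering the first two items could look roughly like this. Treat it as a sketch: the repository name, secret names, and build command are assumptions, not settled choices.

```yaml
# Hypothetical deploy job; repository name and secret names are assumptions.
deploy:
  runs-on: ubuntu-latest
  # Only deploy from this repository, never from forks
  if: github.repository == 'fatiando/pooch'
  steps:
    - uses: actions/checkout@v2
    - uses: actions/setup-python@v2
      with:
        python-version: 3.8
    - name: Build source and wheel distributions
      run: python setup.py sdist bdist_wheel
    - name: Publish to test.pypi.org on pushes to master
      if: github.ref == 'refs/heads/master'
      uses: pypa/gh-action-pypi-publish@release/v1
      with:
        password: ${{ secrets.TEST_PYPI_TOKEN }}
        repository_url: https://test.pypi.org/legacy/
    - name: Publish to pypi.org on tags
      if: startsWith(github.ref, 'refs/tags/')
      uses: pypa/gh-action-pypi-publish@release/v1
      with:
        password: ${{ secrets.PYPI_TOKEN }}
```

The `if: github.repository == ...` guard is what keeps all of this from running in forks.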

Member

@hugovk hugovk left a comment

Looking good!

branches:
- master
tags:
- v*
Member

I usually find branch restrictions hinder contributions.

For example, if I push a feature branch, the CI won't run.

I won't get early feedback on failures until I create a PR.

Member Author

Yeah, that's a bit of a compromise. Otherwise, it feels wasteful to have the CI run twice when you have a PR open (one for the PR and one for the branch). And we also encourage people to open PRs early to get the conversation started and let us know what they're working on.
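For reference, the trigger configuration that implements this compromise is something like the following sketch, matching the branches and tags quoted above:

```yaml
on:
  push:
    # Only run push builds on master and version tags; feature branches
    # get their feedback from the pull_request trigger instead.
    branches:
      - master
    tags:
      - "v*"
  pull_request:
```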

.github/workflows/continuous-integration.yml (outdated; resolved)
fail-fast: false
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
python: [3.6, 3.7, 3.8]
Member

Optional: it's also possible to test against the latest 3.9 beta, something the Python release manager strongly encourages maintainers of third-party Python projects to do.

Suggested change
python: [3.6, 3.7, 3.8]
python: [3.6, 3.7, 3.8, 3.9-dev]

Contributor

The only caveat here is that you have to be able to build all your dependencies. Probably not a big deal for Pooch, though.

Member Author

Is it available on conda-forge? Or should we use pip and the system Python for this one?

Member

Ah, I was referring to system (GHA) Python, not sure about Conda.

Member Author

Yeah, it might need to be a separate job using the system Python. Let's leave that for a follow-up PR.
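Such a separate job could test the pre-release interpreter with the system (setup-python) Python instead of conda. A sketch, where the job name, requirements files, and test command are all assumptions:

```yaml
test-prerelease:
  runs-on: ubuntu-latest
  # A failing pre-release build shouldn't fail the whole workflow
  continue-on-error: true
  steps:
    - uses: actions/checkout@v2
    - uses: actions/setup-python@v2
      with:
        python-version: 3.9-dev
    - name: Install dependencies with pip instead of conda
      run: |
        python -m pip install --upgrade pip
        pip install -r requirements.txt
        pip install .
    - name: Run the tests
      run: pytest
```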

Contributor

@dopplershift dopplershift left a comment

Just a couple thoughts I had while looking it over.

I'm amused since I was beating on this very issue over on MetPy. Looks very similar, other than the conda stuff (which I'm working on today; thanks for confirming the approach I was considering).


# Check code style and linting. Run separately so this finishes fast and
# reports problems right away.
style_checks:
Contributor

You may want to consider a workflow like this one that matplotlib uses for style checks. You end up with comments inline with the source. I have not yet set it up myself, but it's on the short list.

Member Author

That would be useful for the flake8 run for sure! I'll have a look at that. It would be better to have the style checks as a separate workflow. It kind of sucks that we can't rerun only the failing builds on Actions.

coverage xml

- name: Build the documentation
run: make -C doc clean all
Contributor

Add a step to upload the built docs as an artifact for the build?

Member Author

I thought of doing that for the dist and docs and then having another workflow do the deploying. But the problem is I don't want to deploy if any of the checks fail and couldn't find a way to make one workflow depend on another. But it might be useful to have the docs at least for offline checking I guess.
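For the within-workflow case, uploading the built docs as an artifact would look roughly like this (the doc/_build/html output path is an assumption):

```yaml
- name: Build the documentation
  run: make -C doc clean all
# Keep the rendered HTML so it can be downloaded for offline checking
# or picked up by a later deploy job in the same workflow.
- name: Upload the documentation as an artifact
  uses: actions/upload-artifact@v2
  with:
    name: docs
    path: doc/_build/html
```

Cross-workflow dependencies are the hard part; within one workflow, a deploy job with `needs:` pointing at the build jobs gets the ordering, and it is skipped automatically if any of those jobs fail.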

@dopplershift
Contributor

One other thing I thought I'd throw out there: with your requirements set up like this, you could move to pinning the versions in the files and have dependabot manage updating them. This solves the problem of CI breaking due to new versions from upstream.
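For what it's worth, pointing dependabot at pinned requirements files takes only a small config file. A sketch, with an arbitrary schedule:

```yaml
# .github/dependabot.yml (hypothetical)
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
```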

@leouieda
Member Author

@dopplershift do you mean pinning to specific versions of each package? We also use the requirements file to set dependencies in setup.py so that might not be ideal. But that could be separated if it makes life easier.

@leouieda
Member Author

@dopplershift @hugovk thanks for the reviews and suggestions! I made some changes here based on them:

  • Move the style checks to a separate workflow since they don't really reuse anything from the test workflow. Run each tool in a separate job so that we can immediately see which one failed. Not using reviewdog yet but will look into that in the future.
  • Add caching for conda packages. It speeds up the build a bit but I suspect it will be even more for the other Fatiando tools with heavier dependencies.
  • Upload the docs as artifacts so we can deploy from a separate job (and check them if jobs are failing)
  • Read all dependencies from requirements*.txt files so we can hash them for the conda caching. Also makes it easier to include optional dependencies in the future (include them in requirements-optional.txt instead of having to mess with the CI config).
  • Fix some minor mistakes from before.
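The conda caching step keyed on the requirements files could look roughly like this; the cache path and key prefix are assumptions:

```yaml
- name: Cache the conda packages
  uses: actions/cache@v2
  with:
    path: ~/conda_pkgs_dir
    # Invalidate the cache whenever any requirements file changes
    key: conda-${{ runner.os }}-${{ hashFiles('requirements*.txt') }}
```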

@dopplershift thanks for the pointer to the matplotlib and metpy configuration 👍🏽 Shamelessly copied some things over here.

I looked at what you're doing in MetPy with dependabot. It's nice to have PRs opened automatically to test new releases. Right now, we're using the requirements files in the repo root for multiple purposes:

  1. Communicate our dependencies
  2. Install things on CI
  3. Populate setup.py (actually not in this repo but it's just because I forgot)

What I like about that is that adding a new dependency means changing a single file. For the dependabot workflow to work, we would have to keep duplicated versions of this information. I'll have to think a bit about that but it might be worth it.

@leouieda leouieda closed this Jul 27, 2020
@leouieda leouieda reopened this Jul 27, 2020
@leouieda
Member Author

Merging this in now. See #195 for notes on follow-up improvements.

@leouieda leouieda merged commit 21ae4ec into master Jul 27, 2020
@leouieda leouieda deleted the actions branch July 27, 2020 11:58