use github action to deploy to test.pypi.org and pypi.org #1038
Conversation
LGTM, some questions below
```yaml
with:
  user: __token__
  password: ${{ secrets.test_pypi_password }}
  repository_url: https://test.pypi.org/legacy/
```
The Test PyPI server is new to me. The PyPI guides I'm reading all suggest using it, but don't say why -- will it fail for a malformed distribution or something? Should it be followed with a quick pip install from the test server to verify that the installation works?
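One way to answer the verification question is a quick post-upload smoke test. A sketch, assuming the package name `pvlib` and the standard index URLs; the `--extra-index-url` fallback is needed because dependencies generally aren't mirrored on Test PyPI:

```shell
# Install the just-uploaded package from Test PyPI; pull dependencies
# from the real index, since they usually aren't on the test server.
pip install --index-url https://test.pypi.org/simple/ \
            --extra-index-url https://pypi.org/simple/ \
            pvlib

# Sanity check: the installed distribution imports and reports its version.
python -c "import pvlib; print(pvlib.__version__)"
```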
This link raises a good point about race conditions when the actions for multiple PRs run in parallel. I'm not saying one way or the other that we should use `skip_existing` here, just something to be aware of if we get mysterious failures someday.
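For reference, the option under discussion is an input of the publish action. A sketch of how it would be set (the `skip_existing` spelling is an assumption based on the action's inputs at the time; check the action's README for the version in use):

```yaml
- name: Publish distribution to Test PyPI
  uses: pypa/gh-action-pypi-publish@master
  with:
    user: __token__
    password: ${{ secrets.test_pypi_password }}
    repository_url: https://test.pypi.org/legacy/
    # Treat "file already exists" responses from the index as success
    # instead of failing the job, e.g. when two builds race to upload
    # the same version.
    skip_existing: true
```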
I think the best reason to use Test PyPI is to ensure that your distribution workflow (manual or automated) works as you expect it to.
It's hard for me to imagine a situation in which we'd run into the race condition and not want to be alerted to a problem for manual fixes. The action only works on pushes to `pvlib/pvlib-python`; it doesn't run on PRs.
Oh, I mistakenly thought that `push` included pushes to PRs. I'm not 100% satisfied with the documentation on this, but it seems that you are correct that it has to be a push to `pvlib/pvlib-python` specifically: https://developer.github.com/v3/activity/event_types/#pushevent
To be clear, the action does run on pushes to forks, but it stops before doing anything meaningful. See the screenshot below for a recent history of the actions on my fork.
Unfortunately there does not seem to be a way for us to disable actions on forks, but fork owners may choose to disable actions themselves at https://github.com/{username}/pvlib-python/settings/actions
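For context, the repository guard mentioned in the PR description looks roughly like this at the job level (a sketch; a step-level `if:` works the same way):

```yaml
jobs:
  deploy:
    # Skip the job entirely on forks: only the canonical repository has
    # the PyPI secrets, so uploads from forks would fail anyway.
    if: github.repository == 'pvlib/pvlib-python'
    runs-on: ubuntu-latest
```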
```yaml
run: python setup.py sdist bdist_wheel

- name: Publish distribution to Test PyPI
  uses: pypa/gh-action-pypi-publish@master
```
Should this be tagged to a version of the action instead of tracking the master branch?
I was wondering the same thing when I copied it from the PyPA guide. That guide also used master for the other actions that we're now pinning (in part copying metpy and other examples). Either way is fine with me.
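Pinning would look something like this (a sketch; the `release/v1` ref is an assumption based on the action's tagging scheme at the time, and a full commit SHA pins even more strictly):

```yaml
- name: Publish distribution to Test PyPI
  # Pin to a released tag (or a commit SHA) instead of tracking master,
  # so upstream changes can't silently alter the deploy step.
  uses: pypa/gh-action-pypi-publish@release/v1
```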
```yaml
# fetch all commits and tags so versioneer works
- uses: actions/checkout@v2
  with:
    fetch-depth: 0
```
Fetching all of history seems like overkill; is it worth copying metpy's approach? Or maybe fetching is cheap enough that we shouldn't worry about it.
It doesn't take more than a few seconds for this to run so I think we should keep it simple for now.
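For the record, the reason the full clone matters: versioneer derives the version string from `git describe`, which needs the tags and the commits between them. A sketch of the alternative being discussed (fetching after a shallow checkout, roughly metpy's approach):

```shell
# After a shallow actions/checkout, pull in the rest of the history and
# the tags that versioneer's "git describe" relies on.
git fetch --prune --unshallow --tags
git describe --tags   # e.g. v0.8.0-12-g1234abc
```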
Thanks for the feedback @kanderso-nrel. Let's see how this works...
Fails because PyPI won't accept PEP 440 local versions. https://github.com/pvlib/pvlib-python/runs/1058055206?check_suite_focus=true A follow-up PR to restrict deployment to tags is incoming.
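The failure comes down to PEP 440 local version identifiers: versioneer labels commits that aren't exactly on a tag with a local segment, e.g. `0.8.0+2.g1234abc`, and PyPI rejects uploads whose version carries one. A minimal stdlib sketch of the distinction (the helper name is made up for illustration):

```python
def has_local_segment(version: str) -> bool:
    """Return True if a PEP 440 version string carries a local segment.

    Local segments follow a "+", e.g. "0.8.0+2.g1234abc" as produced by
    versioneer for untagged commits. PyPI and Test PyPI refuse uploads
    whose version includes such a segment, which is why only tagged
    releases can be deployed.
    """
    return "+" in version

print(has_local_segment("0.8.0"))             # tagged release -> False
print(has_local_segment("0.8.0+2.g1234abc"))  # untagged commit -> True
```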
- `docs/sphinx/source/api.rst` updated for API changes.
- `docs/sphinx/source/whatsnew` updated for all changes. Includes link to the GitHub Issue with :issue:`num` or this Pull Request with :pull:`num`. Includes contributor name and/or GitHub username (link with :ghuser:`user`).

I started by trying to add this to the Azure CI so that we didn't have to worry about yet another CI service, but the Azure documentation on this is just terrible. This workflow is largely modeled on
A nice change here is that every commit will be pushed to test.pypi.org, so we can ensure that the distribution is always built as expected. This also gives users another way to get prerelease versions of pvlib at https://test.pypi.org/project/pvlib/

By default, GitHub would also run this action on pushes to forks. None of those upload attempts would succeed, because forks don't have the secret I added to https://github.com/pvlib/pvlib-python/settings/secrets. But I don't want to unnecessarily clutter users' notifications with GitHub Action failures, so I added a check: `if github.repository == 'pvlib/pvlib-python'`.

See this build to confirm that the action builds a wheel and sdist with the right version. See this build to confirm that the `if github.repository` condition works (or at least prevents running on my fork).

I removed the ability for the Travis configuration to deploy releases. I think this makes the Travis configuration obsolete, but I want to wait to delete it until we know this new workflow works.