Add script to test pull request #1685
Conversation
And for added brownie points, it runs under Python 2 or 3.
I would remove 3.1 from the tests, because I don't think we should officially be supporting it.
We're still testing it on ShiningPanda, and I fixed a bug with it the other week. I'm planning to drop it from the tests after 0.13. This skips any versions that you don't have installed, so if you don't have 3.1, it doesn't cause any problems.
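The skip-if-missing behaviour described above can be sketched roughly like this, assuming interpreters are named `pythonX.Y` on the PATH (the real script also ran under Python 2; `shutil.which` is a Python 3 convenience used here for brevity):

```python
import shutil

def available_pythons(versions=("2.6", "2.7", "3.1", "3.2")):
    """Return (version, path) pairs for the interpreters actually installed.

    Versions without a matching pythonX.Y executable on the PATH are
    reported as unavailable and skipped, rather than treated as failures.
    """
    found = []
    for v in versions:
        exe = shutil.which("python" + v)
        if exe is None:
            print("unavailable: " + v)
        else:
            found.append((v, exe))
    return found
```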
okay, fair enough. I was only thinking that I don't want the "unavailable: 3.1" message to give the impression that 3.1 is supported, because I don't think it should be.
'Supported' is a bit of a fuzzy boundary when we don't offer paid support - people are welcome to ask us on the mailing list, but there's no guarantee that we can or will solve their problems. But it's still effectively supported, because we're making sure that it works.
And that's precisely what I think we should stop doing. 3.1 is buggy and not maintained, and if we run across a bug in 3.1, I do not think we should work around it. But as you said, it makes sense to stick with it until 0.13, since it requires ~no work at this point. After that release, I think we should halt testing on 3.1, and make it clear that our answer for errors in 3.1 is to upgrade to 3.2.
Nice, this script will be very useful. Any chance of including the tail of the report (specifically packages [un]available) for each version, even if it succeeds? And maybe some platform info? Or even a pip freeze. For instance, it wouldn't be particularly informative to test #1687 in the absence of pymongo.
Yep, that should be simple to add.
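A rough sketch of what that per-version report tail could look like; the function name and exact formatting here are illustrative, not the script's actual output:

```python
import platform
import subprocess
import sys

def environment_report(python_exe=sys.executable):
    """Collect platform info and installed packages for one interpreter.

    Including `pip freeze` output makes it obvious when a test
    dependency such as pymongo is missing from that environment.
    """
    lines = ["Platform: " + platform.platform(),
             "Python: " + platform.python_version()]
    try:
        freeze = subprocess.check_output(
            [python_exe, "-m", "pip", "freeze"], text=True)
        lines.append("Installed packages:\n" + freeze)
    except (OSError, subprocess.CalledProcessError):
        lines.append("pip freeze unavailable")
    return "\n".join(lines)
```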
OK, so now the output looks like this:
Unfortunately the indentation doesn't work in Markdown: "Test results for commit e076c69 merged into master"
Now it can post the results as a comment on the PR, like this. |
Slick! What more do you need to add, then, before this can be merged?
Possibly some command line options, such as "don't post results to github". And it would be good, but not essential, to be able to test an unmergeable branch (i.e. check it out, rather than merging it). It could probably also use some cleanup, the code's a bit rough and ready at the moment. I won't be able to do any more until tomorrow evening UK time, so you're welcome to carry on with it in the meantime. I should have mentioned in the last comment: it now requires requests and keyring to be installed.
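The command-line options under discussion might be sketched with `argparse` like this; the flag name `--no-post` is hypothetical, not necessarily what the script ended up using:

```python
import argparse

def build_parser():
    """Options discussed for the test-PR script. Flag names here are
    illustrative, not the ones the final script used."""
    parser = argparse.ArgumentParser(description="Test a pull request")
    parser.add_argument("number", type=int, help="pull request number")
    parser.add_argument("--no-post", action="store_true",
                        help="run the tests but don't post results to GitHub")
    return parser

# Example invocation for PR #1685 with posting disabled:
args = build_parser().parse_args(["1685", "--no-post"])
```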
OK, it can now handle pull requests that don't merge cleanly - it runs the tests without attempting to merge them. Tested with #1690 again.
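The merge-or-checkout decision can be sketched as a pure function that plans the git commands to run; the function name and return shape are illustrative, not the script's actual code:

```python
def git_test_plan(branch, mergeable, base="master"):
    """Return the git commands to run before testing a pull request.

    If the PR merges cleanly we test the result of merging it into
    `base`; otherwise (as with #1690) we skip the merge and test the
    branch as-is by checking it out directly.
    """
    if mergeable:
        return [["git", "checkout", base],
                ["git", "merge", "--no-edit", branch]]
    return [["git", "checkout", branch]]
```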
Awesome. I think it's a good idea for the gist upload to be optional, and off by default. That would also make keyring a dependency only when uploading.
The gist upload works without keyring, as it's done anonymously - it's posting the comment on the pull request that requires keyring. I can add the option to disable either or both of those, although I'm reluctant to add too much complexity.
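For reference, the gist upload amounts to a JSON POST body for GitHub's `/gists` endpoint (anonymous gist creation was possible at the time of this discussion). This payload builder is a sketch, not the script's actual code:

```python
import json

def gist_payload(description, filename, content, public=True):
    """Build the JSON body for creating a gist via GitHub's API
    (POST https://api.github.com/gists).

    Posting anonymously needs no credentials; it's commenting on the
    pull request that requires keyring-stored auth.
    """
    return json.dumps({
        "description": description,
        "public": public,
        "files": {filename: {"content": content}},
    })
```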
I'm sorry - I meant that the whole reporting-the-result should be optional (comment + gist). I don't think there is a need to separate the two. But I think the script is still useful if it only runs locally, without publishing a report (i.e. so I can use it to test my own PRs and fix them, without cluttering the comment area with test runs). Related: since you are using auth to post a comment, why not use the same auth for the gist?
At present, just because I wrote the gist-posting code before I added auth. But I'll adapt it to post gists using your user ID.
OK, I've added a
Thanks! I would switch it around, so that it doesn't upload/comment unless you ask it to, rather than uploading unless you ask it not to.
I considered that, but if you ran it and forgot the flag, you'd have to run all the tests again to make it post the results. This way, forgetting the flag doesn't lose anything, and accidentally posting a comment isn't a big deal. We could store the results so you can post them afterwards without re-running the tests, but that's extra complexity in what's supposed to be a relatively simple tool.
Fair enough - I'd rather accidentally fail to post than accidentally post, but I'll defer to you, as the author.
Well, I'm open to discussion, especially if there's a good way of accommodating both use cases. But failing to post is annoying if the only simple way to post is another 10 minute test run (or a load of copying and pasting to post manually). I'll add a little extra complexity so we can pickle the results and have another script that posts them without running the tests again. Then I'm happy for the default to be not to post.
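The save-now, post-later idea can be sketched with `pickle`; the file name and result format here are assumptions, not what the script actually used:

```python
import pickle

def save_results(results, path="pr_test_results.pkl"):
    """Persist test results so a separate posting script can publish
    them later without re-running a 10-minute test suite."""
    with open(path, "wb") as f:
        pickle.dump(results, f)

def load_results(path="pr_test_results.pkl"):
    """Load previously saved results for posting as a gist/comment."""
    with open(path, "rb") as f:
        return pickle.load(f)
```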
If it's easy enough for run & post to be separate steps, that would be great, but if it starts to look too complex, don't worry about it. It's already a great tool!
OK, now the default is just to run the tests and save the results. To post the results, there's a |
Awesome, thanks! Anything more to do, or is it ready for merge?
I think I'll merge this now so people can start using it. We can always make more changes to it later.
Add script to test pull request
Run this with a pull request number, and it will:
Still to do: