
linting as a test (with PR annotations) #5294

Closed
FirefoxMetzger opened this issue Mar 27, 2021 · 4 comments

Comments

@FirefoxMetzger
Contributor

FirefoxMetzger commented Mar 27, 2021

A while ago I learned that it is now possible to have CI comment directly on PR lines via GH Actions, and I've finally found some free time to dig into it. Once you understand how it's supposed to work, it really isn't complicated. As a result, I've set up linting as a test that can fail CI. This is an alternative to a self-updating pep8speaks comment, which I always found hard to keep track of, especially in PRs with longer conversations.

Another QoL feature is that it can place annotations directly on individual lines of a PR, which is awesome because the feedback sits right next to the code it refers to. Here is an example of what this looks like (it is running over at imageio):

[Screenshot: inline PR line annotation produced by the linting action]

My question is whether scikit-image is interested in something similar.

Edit: Codecov can also do this for coverage.

[Screenshot: Codecov coverage annotation on a PR]
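
For reference, the linting setup itself can be as small as this (a simplified sketch, not the exact imageio workflow; the workflow and step names are placeholders). It relies on two things I'm sure of: flake8's `--format` option accepts `%(path)s`/`%(row)d`/`%(col)d`/`%(code)s`/`%(text)s` placeholders, and lines printed as `::error file=...,line=...::...` are GitHub "workflow commands" that the runner turns into annotations:

```yaml
# .github/workflows/lint.yml -- minimal sketch, not the exact imageio setup
name: Lint

on: [pull_request]

jobs:
  flake8:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: "3.9"
      - run: pip install flake8
      # Each lint violation is printed as a "::error" workflow command, which
      # GitHub renders as a PR line annotation; flake8 exiting non-zero also
      # makes the step (and hence the CI check) fail.
      - run: >-
          flake8 .
          --format='::error file=%(path)s,line=%(row)d,col=%(col)d::%(code)s %(text)s'
```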

@hmaarrfk
Member

As a more advanced user, I appreciate these comments.

As a novice contributor, I felt those comments were hostile, and it felt strange to receive a "no" from a bot before hearing back from a human.

Is there a way to manually trigger the comments?

@FirefoxMetzger
Contributor Author

@hmaarrfk I know it is possible to set up actions that can (or must) be triggered manually, so you could put the linting in a separate action that only members/teams/etc. can trigger. I know that over at TensorFlow they do this because their CI/CD is so big that running it in full on every PR would be too expensive. That said, I've never looked into setting this up myself.
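
(A minimal sketch of the manual-trigger mechanism, not TensorFlow's actual setup: the `workflow_dispatch` event gives a workflow a "Run workflow" button in the Actions tab, which only users with write access to the repository can press.)

```yaml
# Minimal sketch: a lint workflow that only runs when triggered by hand.
name: Lint (manual)

on:
  workflow_dispatch:  # adds a "Run workflow" button; requires write access

jobs:
  flake8:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: "3.9"
      - run: pip install flake8
      - run: flake8 .
```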

As a novice contributor, i felt those comments were hostile and it made me feel strange to receive a "no" from a bot before hearing back from a human.

I'm not sure I follow this part. How is it different from failing CI tests? (Assuming that getting a "no" from tests is not hostile, of course.)

@FirefoxMetzger
Contributor Author

Oh, in the spirit of full disclosure: I just learned that annotations are limited to 10 errors and 10 warnings per step, and 50 annotations per job. [Source] So I can understand if the annotation feature would still be considered too immature to be viable.

I still think that failing CI based on code style can be valuable.

@scikit-image scikit-image locked and limited conversation to collaborators Oct 18, 2021
@grlee77 grlee77 reopened this Feb 20, 2022
@scikit-image scikit-image unlocked this conversation Feb 20, 2022
@stefanv
Member

stefanv commented Feb 22, 2023

I think this is a neat idea! I hope we can revisit it in the future. For now, we've added pre-commit, which provides local feedback before CI fails and hopefully significantly reduces the pain for developers. Thanks for looking into this @FirefoxMetzger!
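
(A minimal sketch of how that local feedback is wired up, not our exact configuration; the pinned `rev` is just an example.)

```yaml
# .pre-commit-config.yaml -- minimal sketch of a flake8 pre-commit hook
repos:
  - repo: https://github.com/pycqa/flake8
    rev: 6.0.0  # example pin; pick a current release
    hooks:
      - id: flake8
```

After `pip install pre-commit` and `pre-commit install`, the hook runs on every `git commit` and rejects the commit if linting fails.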

@stefanv stefanv closed this as completed Feb 22, 2023