
Automate purcell #4154

Open
purcell opened this issue Aug 16, 2016 · 18 comments
Comments

@purcell
Member

purcell commented Aug 16, 2016

I'm getting burnt out on MELPA, or rather my time for it is increasingly limited. I hope that both authors and the general level of package quality on MELPA have benefited from there being someone available to provide code reviews and - occasionally - turn away packages that aren't sufficiently novel or widely useful.

So my question is: if I scale back my time commitment, how can we continue to keep MELPA awesome? Here are some of my thoughts:

There are some basic packaging issues (malformed headers, redundant or deprecated :files specs, etc.) which don't get caught by our Travis build. We should fail faster and with clearer information, so that we can simply refer contributors to the failing build output. This might involve adding a "pedantic" setting to package-build, to escalate some of what are currently debug messages about such issues. package-build-current-recipe should also fail noisily in such cases, to head off submission of not-really-working recipes.

Some packaging-specific issues (e.g. cl-lib deps, long description lines) can already be detected by the code in flycheck-package. Further work is needed for packaging-related issues such as using setq-local or advice-add but not declaring a corresponding dependency such as (emacs "24.4").
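A check along those lines could start out very simply. Here's a rough grep-based sketch (not how flycheck-package actually works, and the symbol-to-version table is illustrative and incomplete):

```shell
# Rough sketch: flag symbols that imply a minimum Emacs version when the file
# doesn't declare a matching (emacs "...") dependency. The symbol:version
# pairs below are illustrative; a real check would need a fuller table and
# proper parsing rather than grep.
check_min_emacs() {
  file="$1"
  for pair in "setq-local:24.3" "advice-add:24.4"; do
    sym="${pair%%:*}"; ver="${pair##*:}"
    if grep -q "$sym" "$file" && ! grep -q "(emacs \"$ver" "$file"; then
      echo "$file: uses $sym but does not declare (emacs \"$ver\")"
    fi
  done
}
```

A real implementation would of course walk the parsed forms rather than grepping, to avoid matching comments and strings.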

Other packaging issues span files, e.g. including company or helm backends in a package which doesn't declare a package dependency on the corresponding package. Those checks can't be built into flycheck-package. I imagine a separate package linter which flycheck-package could use.
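As a toy sketch of what such a cross-file check might look like (again, grep-based only for illustration; the function and warning text are made up):

```shell
# Toy sketch of a cross-file check: if any file in the package mentions a
# company- or helm- prefixed symbol, the main file should declare that package
# as a dependency in its Package-Requires header. A real linter would parse
# the code instead of grepping it.
check_cross_deps() {
  dir="$1" main="$2"
  for dep in company helm; do
    if grep -rq "${dep}-" "$dir" && ! grep -q "($dep " "$main"; then
      echo "package uses $dep but $main does not depend on it"
    fi
  done
}
```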

There are a ton of other stylistic and code quality issues I routinely check for when reviewing upstream code, e.g. inconsistent symbol prefixes, crappy docstrings, missing autoloads, empty commentaries etc. There needs to be a great linter for these problems too, probably combined with the package linter mentioned above.

Additionally, even after recipes get merged, code or packaging issues can arise later, so it'd be great to provide the same detailed feedback about packages on an ongoing basis: it could be generated each time we build them.

Other issues such as whether a package adds to the ecosystem are a bit more subjective: we need to be asking authors, "how is this different to packages X & Y?", and "could you work with Z to add this functionality directly there?" I have the advantage of having reviewed thousands of packages at this point, so I can make mental connections, but it's a useful thing which we should continue to do.

It probably makes sense for me to write an exhaustive checklist of all the stuff I do when reviewing packages, but I'm also keen to get input from others.

@swsnr
Contributor

swsnr commented Aug 17, 2016

@purcell I don't think it's likely that we can get a good Emacs Lisp linter quickly, before you burn out entirely 😁 Linting is hard, and it gets harder the "softer" the quality criteria are. No Emacs Lisp linter will ever be able to check for "good" docstrings.

So what about more reviewers for MELPA? Many more? I'm pushing Flycheck in the same direction, for the same reasons: we're now using protected branches and LGTM to distribute and formalize the approval process for pull requests. We can liberally hand out push access to the repo without fearing breakage in master, and easily scale to multiple maintainers and pull-request approvers. Onboarding improves too: since no maintainer has sole responsibility, new maintainers can safely review and learn from seasoned ones. In Flycheck this has quite deeply changed how I review pull requests and how "safe" I feel delegating my work. I think @fommil has a similar workflow for Ensime, with good experiences as well.

I guess there are quite a lot of seasoned Emacs folks who'd be willing to occasionally review pull requests for MELPA (I for my part would), so you could slowly turn MELPA into a community-governed project while still keeping all admin privileges in your hands.

Maybe that's the quicker way to get work off your shoulders, and it's quite fast to set up. The biggest part would be to compile a review checklist for all those new reviewers.

@purcell
Member Author

purcell commented Aug 18, 2016

Yes, so maybe if I write up a standards guide, and/or a reviewer's handbook, then we can start to move to a thumbs-up powered system with consensus across reviewers. (Hmm... it'd be great if there were a service which let you add an interactive checklist directly to issues/PRs...)

@swsnr
Contributor

swsnr commented Aug 18, 2016

@purcell Yep, I missed that one too. There's a mention bot from Facebook which sort of does that, but it's not that straightforward. It's easy to write such a bot, though; GitHub's hooks are simple.

That said, you can still start adding maintainers even without a guide: they could initially review together with you and learn what to look for.

@fommil
Contributor

fommil commented Aug 18, 2016

You mean like

  • github
  • has

@swsnr
Contributor

swsnr commented Aug 18, 2016

@fommil Yup, it'd be awesome to automatically post review checklists as comments. You can sort of abuse facebook/mention-bot for this, but it's not nice. Maybe if I find time on a long winter's night in three months or so I'll write a bot, but that's another item on my growing and already huge list of things I'd do if I had the time 😉

@fommil
Contributor

fommil commented Aug 18, 2016

You could use the template? Most people don't read it or change it. (Cynicism kicking in)

@swsnr
Contributor

swsnr commented Aug 18, 2016

@fommil Maybe, but that's not particularly nice either, and in any case, this topic is just a small technicality in this discussion. We can continue it elsewhere, but I'd not like to distract from the actual topic too much.

@fommil
Contributor

fommil commented Aug 18, 2016

Did you see the (controversial) doc I wrote on Medium about how we organise everything in Ensime? Some parts are relevant. I definitely recommend lgtm.co and using GitHub checks in general. It might be worth looking into forking LGTM to support whatever workflow you want.

@swsnr
Contributor

swsnr commented Aug 18, 2016

I've read it. As I said above we're using LGTM already.

@alphapapa
Contributor

@purcell Steve, let me say thanks for your work on MELPA. Until I submitted a package of my own, I had no idea how much work you put in behind the scenes. I think your curation work is surely underrated and underrecognized, probably because it happens out of view of MELPA users. I don't think MELPA would be the treasure that it is without your dedication.

At this time, I can't commit to a certain amount of work per week for MELPA, but when I come across new Emacs packages that look interesting, I try to check them out and give some feedback in the vein you usually do (e.g. suggesting let* when I see nested lets, asking for better descriptions and examples, asking how packages compare with existing ones, etc.). I'd be glad to continue doing that on an irregular basis if it helped reduce your workload, although I don't have nearly the expertise you have.

I can't imagine using Emacs without MELPA. For your sake and ours, we must keep things running smoothly. :)

@lassik
Contributor

lassik commented Feb 1, 2019

Just wanted to stop by and give another thanks to @purcell. You are making it incredibly easy and fun to contribute to MELPA, and are doing a great job with the constructive feedback. How do you feel about the workload and motivation nowadays? Does it feel sustainable, and have you maintainers come up with any new ideas to help long-term?

@purcell
Member Author

purcell commented Feb 1, 2019

Thanks @lassik. Notwithstanding the fact that I was just bedridden for a week with flu, I'm still quite over-committed, so I'm resigned to just checking in on MELPA whenever I can, and batch-processing all the open PRs then: this is on a cadence that currently varies from 1-4 weeks, so the 4 week end of that range is a bit crappy for contributors. In the meantime, I've been grateful to @alphapapa, @riscy and others for jumping in and providing earlier feedback to contributors, which often greatly reduces my effort later.

Long-term, it'd be good to build more smarts into the Travis build: the GitHub Checks API should allow us to run package-lint on submissions, for example, to confirm GPL licensing, and to show a list of included files, all of which would make reviews faster and provide quicker feedback to submitters.
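As one illustrative shape this could take, expressed as a GitHub Actions workflow (the workflow, step names, and install details are made up here, not an existing MELPA workflow; package-lint-batch-and-exit is package-lint's batch entry point):

```yaml
# Illustrative sketch only: lint a submission's files on each pull request.
name: review-checks
on: pull_request
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run package-lint on the package files
        # Assumes Emacs and package-lint are already installed on the runner.
        run: |
          emacs -Q --batch --eval "(require 'package-lint)" \
            -f package-lint-batch-and-exit *.el
```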

@riscy
Member

riscy commented Mar 2, 2020

I've been working on a bundle of automated checking scripts (which you can find here) that I'm calling "melpazoid", though maybe that name is too goofy.

The idea is mainly to be able to do something like this:

RECIPE='(shx :repo "riscy/shx-for-emacs" :fetcher github)' make
MELPA_PR_URL=https://github.com/melpa/melpa/pull/6718 make

so that a user or reviewer can run some checks on the package.

These checks include the (dangerous) task of byte-compiling the files. Byte-compiling is done inside a Docker container (not running as root!), and before the Docker container runs, its network access is switched off. I'm by no means a security expert, but that felt like a good measure.
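For reference, a sketch of the same sandboxing idea (the image name and UID are hypothetical, and one way to get the no-network effect is Docker's --network=none; this helper only builds the command string rather than executing it):

```shell
# Sketch: byte-compile inside a container with no network access, running as a
# non-root user. "melpazoid-sandbox" and the UID are made up for illustration;
# this function constructs the docker invocation without running it.
sandbox_cmd() {
  echo "docker run --rm --network=none --user 1000:1000 melpazoid-sandbox" \
    "emacs -Q --batch -f batch-byte-compile $1"
}
```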

Today I experimented with adding it to MELPA (you can see the branch here). You can see the Travis output here (normally the red lines would cause a build failure, but on this branch the checks can't fail because of the || true). The Travis build becomes heavy due to the reliance on Docker, but it also means nobody has to maintain a "travis" version and a "local" version of melpazoid.

There are a number of additional checks that need to be handled, as written up by @purcell in #6714, but maybe this is a good direction.

I thought I'd mention these points here to get some feedback!

@riscy
Member

riscy commented Mar 2, 2020

Also for a nontrivial example, here is the build against magit which has many files:

https://travis-ci.org/melpa/melpa/builds/657104500

@purcell
Member Author

purcell commented Mar 2, 2020

Very nice! I'm quite keen to switch us over to GitHub Actions, which generally runs a bit faster and can show more granular check output in the PR itself. Ideally we'd have easy visibility, directly in the PR, of which checks have passed and failed.

First step would be to switch the existing build over, I think, then we could add extra build steps. That's easy apart from needing a non-Travis version of https://github.com/melpa/melpa/blob/master/travis-changed-files.

@purcell
Member Author

purcell commented Mar 2, 2020

I don't know how users could re-trigger builds, though, in the case where they've changed upstream code to pass the checks but the recipe was already fine. Admins can still do it.

@purcell
Member Author

purcell commented Mar 2, 2020

Ultimately I'd quite like to have the checks re-executed as part of the package build itself, and results shown on the package pages. This would not necessarily be to abort package builds, but more to make public the check status for each package. If done only for rebuilt packages, it's computationally cheap enough.

@riscy
Member

riscy commented Mar 2, 2020

This would not necessarily be to abort package builds, but more to make public the check status for each package.

This would be awesome! :)
