
Add midstream bindist proposal #61

Merged
3 commits merged on Apr 23, 2024

Conversation

@hasufell (Contributor) commented Dec 1, 2023

Comment on lines 180 to 181
This might require a full time developer for at least half a year,
as well as help from volunteers.
@ffaf1 Dec 1, 2023

What would the costs be once the initial setup is working? Will paid devs still be needed?

@hasufell (Contributor Author)

It is a continuous effort, so ultimately yes.


Good, then this should be in the budget section.

Comment on lines +111 to +112
Meanwhile, cabal upstream still has not finished their backport due to issues
with hackage dependencies: https://github.com/haskell/cabal/pull/9457


FWIW there are no plans for a 3.6 or 3.8 patch release

https://mail.haskell.org/pipermail/cabal-devel/2023-November/010578.html

I am grateful for your patch release 3.6.2.0-p1.

@hasufell (Contributor Author)

Yes, this highlights different priorities.

GHCup does not need to care about hackage dependencies etc and can just build an ad-hoc release with a backport, whether or not that exists on hackage.

@michaelpj left a comment

I agree with many of the problems listed in this proposal, and I think it would be great if the HF funded some work to make things better. The variant proposal that I would like to make is that:

  1. We invest the money in a person or team who works on the upstream bindists (and the automation surrounding them)
  2. We work together to come up with policies for distribution-related concerns such as platform support that work for everyone, backed up with support from the manpower we got from 1

I think this has most of the advantages of the current proposal, and has additional advantages in:

  1. Not duplicating work building bindists and doing automation work
  2. Benefiting other packagers who use upstream bindists

The main drawback of my proposal is its second plank: we would have to work together to agree on what we are going to do. The current proposal allows the "installation experience" team (which looks to me like the GHCup team, so ultimately @hasufell) to just decide. I think we should be able to work together and get a better outcome, especially if we have someone who is going to do the implementation work, which I think would be a great way of taking the sting out of these discussions. Certainly it seems a shame to let social obstacles cause us to spend a significant sum of money in an inefficient way.


However, using upstream bindists directly is extremely rare in the Linux world of distribution. Most distributions
build, package, test and curate binary packages themselves, not only because they have custom formats, but for
reasons of control, trust and quality.


I would also add: for reasons of relationship. Most packagers for linux distributions do not have close relationships with the upstream projects. They don't generally have the option of working with upstream to get better bindists.

This is very different for ghcup, which is tightly embedded in the Haskell ecosystem and has a small set of tools that are distributed. In this setting it seems much more plausible to me to work with upstream to improve their bindists, rather than having two sets.

@hasufell (Contributor Author)

Most packagers for linux distributions do not have close relationships with the upstream projects.

I'm not sure. I've sent hundreds of build system patches to upstream projects during my Gentoo days and even got full commit rights to some of them.

And yet, we never used their bindists (because Gentoo is a source distro). So we focussed on what was actually relevant: improving their build systems, so we have an easier time packaging.

## Background

Historically, installers like GHCup and stack have used upstream bindists for mainly one reason: it's easy to do
so and doesn't require further efforts.


I think this is a pretty key point. ghcup has historically been a volunteer-run project, and so building all the bindists for all the tools would have been an unreasonable amount of work. Historically therefore the effort has been to push work upstream, with upstream projects building bindists for ghcup and contributing to ghcup-metadata.


* https://github.com/haskell/ghcup-metadata/pull/127#issuecomment-1766020410

These issues are frequent and so far the GHCup developers used to single handedly fix all those missing bindists manually
@michaelpj Dec 1, 2023

Bracketing the quality issue for now, it seems to me that there is an issue of decision-making and control. That is, who decides which platforms tool X supports?

Specifically, it seems like we sometimes have a situation like:

  • Tool X does not want to support platform Y
  • GHCup wants tool X to support platform Y

If upstream gets to decide what platforms it supports, it makes no sense to talk about GHCup "fixing" "missing" bindists. Upstream doesn't support it, you can be annoyed about that but it's their decision. The current situation where GHCup decides to add platforms that upstream does not want to support is risky: upstream will not be testing on those platforms nor likely to be responsive to issues on those platforms.

I'm sure it is not unheard of for packagers to add platform support, particularly if adding support doesn't require much modification to what upstream provides. But again, it seems strange to me in the context of GHCup, where there is a much closer relationship between the packager and upstream.

There certainly is an issue that Haskell tooling as a whole is inconsistent and wavering on what platforms it supports. I do think it would be good to have a shared policy on platform support that was changed more transparently. And again, since we are in a situation where our distributor (GHCup) has a close relationship with upstream, GHCup is certainly a stakeholder in platform support discussions.

@hasufell (Contributor Author)

If upstream gets to decide what platforms it supports, it makes no sense to talk about GHCup "fixing" "missing" bindists. Upstream doesn't support it, you can be annoyed about that but it's their decision. The current situation where GHCup decides to add platforms that upstream does not want to support is risky: upstream will not be testing on those platforms nor likely to be responsive to issues on those platforms.

Yes, this proposal includes the testing of bindists (including unofficial ones). The idea would be to:

  • run the tests in CI and upload the results somewhere for end users to see (whether there are test failures or not)
  • have a mechanism to report these results to upstream developers
  • communicate test results to end-users in a compact way
  • but more importantly: have a way to run the entire test suites for all tools on the end-user's system (which doesn't even work correctly for GHC... this is where the test suite must pass); see the sketch below
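A minimal sketch of what that last point could look like from the end-user's side, assuming a ghcup build that ships the `test` subcommand referenced later in this thread (the GHC version is just a placeholder):

```sh
# Hedged sketch: run the packaged test suite on the end-user's machine.
# Assumes a ghcup version providing the `test` subcommand ("ghcup test",
# mentioned later in this thread); 9.4.8 is a placeholder version.
ghcup install ghc 9.4.8
ghcup test ghc 9.4.8   # fetches the matching testsuite and runs it locally
```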

There certainly is an issue that Haskell tooling as a whole is inconsistent and wavering on what platforms it supports. I do think it would be good to have a shared policy on platform support that was changed more transparently. And again, since we are in a situation where our distributor (GHCup) has a close relationship with upstream, GHCup is certainly a stakeholder in platform support discussions.

So far, GHC developers have not asked the community about any of their platform decisions (dropping FreeBSD or armv7), nor were GHCup developers consulted. There was no call for help.

And yet, the unix package still supports these platforms, because there have even been OpenBSD users opening bug reports to make unix buildable again. There are clearly different perceptions and goals here, and I've been communicating these for the past 2 years with zero improvement in platform support. So no, I do not believe that any other approach works, other than doing it downstream/midstream.

I have much more anecdotal evidence that there are widely diverging goals and perceptions, but I don't think it would add more clarity to the proposal.

* https://github.com/haskell/cabal/issues/7950

This shows that bindists, for current and historical versions, need continuous maintenance. However, upstream developers
so far have very rarely engaged in this type of maintenance work, pushing it down to GHCup. As an example, here are all the


If we're going to invest in this work, why not put it into fixing the upstream bindists, given that we're working within the Haskell community? Reading this with my outsider hat on, this sounds crazy: someone in the community is fixing the bindists but not pushing them upstream?

Additionally, fixing upstream bindists has a much bigger impact than just on GHCup, because upstream's bindists are used by other packagers, e.g. linux distributions, nixpkgs, etc. GHCup is not the only packager to end up in the situation of accumulating patches that somehow don't quite get upstreamed; see e.g. haskell.nix's patch list. Directing additional effort to getting this stuff properly pulled upstream benefits everyone distributing GHC and other tools.

@hasufell (Contributor Author)

Someone in the community is fixing the bindists but not pushing them upstream?

I tried to get i386 alpine bindists into GHC: https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5213

After 2 years, it still hasn't happened yet (the issue got stuck while Ben was debugging some issues with statically linked GHCs, which I did not request at all).


Instead with the proposed framework, I believe that upstream developers would actually benefit from this, because:

  • they don't have to build releases at all anymore
  • if they still build releases, they can focus on smaller coverage (e.g. not build for every esoteric linux distro)
  • test results for "unsupported" or tier3 platforms can flow back to upstream developers, allowing ad-hoc fixes even if there's no continuous GHC CI support for said platform


Additionally, fixing upstream bindists has a much bigger impact than just on GHCup, because upstream's bindists are used by other packagers, e.g. linux distributions, nixpkgs, etc.

FWIW I think all the packagers you mention build their own binary distributions; they don't repackage the upstream bindists.

@hasufell (Contributor Author)

Additionally, fixing upstream bindists has a much bigger impact than just on GHCup, because upstream's bindists are used by other packagers, e.g. linux distributions, nixpkgs, etc.

Yeah, I totally missed that. It's mostly incorrect: Linux distributions either don't use upstream bindists at all or use them only for bootstrapping.


Yes I think you're right. Nixpkgs at least only uses them for bootstrapping. That's not an unimportant use-case, though, and you certainly have a problem if your bootstrapping bindists don't work.

@hasufell (Contributor Author)

That's not an unimportant use-case, though, and you certainly have a problem if your bootstrapping bindists don't work.

But for them it really doesn't matter where the bootstrap bindists come from and given that GHCup supports more platforms and architectures, they can just use those.


At that point it seems like we've removed all reason for upstream to bother producing bindists at all. Maybe we should say that - it would remove some repeat labour. But if nothing else it's probably good for upstream to at least notice some packaging problems quickly.

@hasufell (Contributor Author)

Maybe, but that's up to upstream. We don't want to tell them what to do.

GHC and other tools have in the past dropped support for certain platforms either entirely or requested
the community to step up and do the work (e.g. on GHC CI).

E.g. [GHC ARMv7 support was dropped silently without any call for help](https://gitlab.haskell.org/ghc/ghc/-/issues/21177#note_470440). Similarly, FreeBSD support just ceased to exist when the GHC FreeBSD CI stopped working. Later the community asked for a [revival](https://gitlab.haskell.org/groups/ghc/-/epics/5), but nothing significant has happened so far.


The linked issue notes three (maybe four?) people who are interested, none of whom have cared enough to follow up except for @hasufell. My perception has been that the person who mostly wants FreeBSD support is @hasufell, and this doesn't convince me that there's much community demand.

@hasufell (Contributor Author) Dec 1, 2023

This is a chicken and egg problem: is the demand low because the tooling is totally busted and end-users just use a Linux VM?

Are Windows users at 15% because the tooling experience is great, or because they switch to WSL and don't bother?

Why don't we just drop Windows?

Again: I do not subscribe to this approach. We clearly have different goals/perceptions here.


What approach do you think we should use for deciding what platforms to support?

@hasufell (Contributor Author)

The point is to have any approach at all.

My approach would be very conservative. Look, we still support i386, but suddenly decided to drop armv7, despite clear evidence that there are still a lot of devices out there running armv7?

Similarly, FreeBSD is by far the most popular BSD. It was dropped not because there are inherent issues with supporting it code-wise, but due to CI shenanigans.

So we could start with:

  • don't suddenly drop platforms, ever
  • if a platform is difficult to support, evaluate what precisely about it is the issue
    • if it's CI -> there is a full-time devops engineer working on GHC
    • if it's resources -> ask HF
    • if it's expertise (that's actually more of a problem on Windows) -> ask for help, find more devs
  • if none of the above worked, communicate the intent to drop the platform to the community and ask for volunteer help
  • if no one steps up, drop the platform in ~3 major releases' time (1.5 years)


### GHC nightlies

As a special case, I want to point out that GHC nightlies have been frequently broken beyond repair:
@michaelpj Dec 1, 2023

Were GHC nightlies advertised as anything other than a provisional feature? If not, I don't see the problem with them having issues or low availability to begin with.

@hasufell (Contributor Author)

I'm not sure, but afaik nightlies were funded (not sure whether by HF or as a courtesy of WT), but the project as a whole can be considered failed, since availability and platform coverage were not properly designed.

The GHCup developers spent a non-trivial amount of time preparing GHCup for this feature (implementation), discussing the user interface design, helping with testing and advertising it to the public, just to see it all botched in the end.

This did not reinforce my motivation to work on such joint projects in the future, when upstream (GHC) can just call the entire thing "off". So doing this midstream seems much more appealing.


I'm not saying the nightlies went well, I'm just not sure how it supports your argument.

@hasufell (Contributor Author)

It supports my argument insofar as it indicates that GHCup and some upstream projects have very different ideas of:

  • what is a PoC deliverable
  • what is "best effort"/provisional

This has become even more clear with the last HLS release.


## Technical Content

We propose here to create a joint project of "installation experience" developers to get funding and maintain


Joint between whom? I don't see any others listed, except I guess stack can install GHC. If there isn't going to be any work improving upstream, then it sounds like this is primarily a GHCup project.

@hasufell (Contributor Author)

Yes, GHCup, stack and possibly the stable-haskell project.

It is great to know that e.g. the test suite passes on GHC CI, but that may have little value in different environments.

Additionally, issues with tests can flow back to upstream developers and we may develop workflows and processes to streamline
this type of feedback. Early release candidates can assist with this workflow.


💯

* https://github.com/haskell/cabal/issues/9461
* https://github.com/haskell/haskell-language-server/issues/3878

GHCup will also need to implement revisions to make updated bindists feasible: https://github.com/haskell/ghcup-hs/issues/361


👍

* https://gitlab.haskell.org/ghc/ghc/-/issues/22727

The idea is that bindists should be primarily tested **on the user's system**, because that is where they're going to run.
It is great to know that e.g. the test suite passes on GHC CI, but that may have little value in different environments.


👍

@hasufell (Contributor Author) commented Dec 1, 2023

I agree with many of the problems listed in this proposal, and I think it would be great if the HF funded some work to
make things better. The variant proposal that I would like to make is that:

We invest the money in a person or team who works on the upstream bindists (and the automation surrounding them)
We work together to come up with policies for distribution-related concerns such as platform support that work for everyone, backed up with support from the manpower we got from 1

@michaelpj I'm aware that this is an alternative. This is what I have basically been doing the past 2-3 years. Being in touch with every upstream dev team, doing their CIs (partly), meetings, etc.

This is far more work than building your own CI!

Just as an example: the recent HLS 2.5.0.0 discussion cost me several days of headache and more energy than just doing the entire release myself would have.

Human interaction is expensive. Aligning goals is difficult. Distributors and upstreams may not have the same goals and priorities (see my backporting example) and the recent HLS discussion is a very strong indicator that this is the case right now.

The entire point of this proposal is indeed for GHCup (and other bindist consumers) to be:

  • more independent of upstream
  • able to have different priorities (platform support, architecture support, manual release testing, ...)
  • able to do security backports swiftly (for possibly discontinued versions)

If you feel you want to raise a proposal to improve upstream release CIs, I think you're well within your rights to do that. But that is not the intention of this proposal.

@michaelpj

This is what I have basically been doing the past 2-3 years. Being in touch with every upstream dev team, doing their CIs (partly), meetings, etc.

This is far more work than building your own CI!

The point of this proposal is to ask for a paid person to do some work, no? I agree it's unreasonable to ask you to do all this, but if we're going to hire someone then we have a lot more options.

The entire point of this proposal is indeed for GHCup (and other bindist consumers) to be ...

I argued some of this inline, but I think basically I'm unconvinced:

  • Independence isn't valuable in and of itself
  • I don't see why we should need to have different priorities, pretty much everything you've listed is also a thing that upstream cares about. Usually they just don't want to do the work, which is again mitigated by having a paid person.
  • I don't see why GHCup needs to do security releases, or do anything other than marking insecure releases as insecure, as many other distributors do

It seems to me that you want GHCup to be more independent of upstream because it's been hard to agree with upstream. I would rather we solved that problem, and I think if we had someone who was paid to do the work in this space (rather than a variety of people all of whom would really rather not do the work), then that would make things significantly easier.

If you feel you want to raise a proposal to improve upstream release CIs, I think you're well within your rights to do that. But that is not the intention of this proposal.

I could make another proposal (and maybe I will), but I think you've articulated a lot of the problems well and I do think that what I propose would solve the same technical problems.

@hasufell (Contributor Author) commented Dec 1, 2023

The point of this proposal is to ask for a paid person to do some work, no? I agree it's unreasonable to ask you to do all this, but if we're going to hire someone then we have a lot more options.

You certainly do, but then I won't be involved in those other options. What I'm proposing here is happening anyway, but it may be a short venture/experiment. That remains to be seen. Without funding (at least infrastructure) it may not last long.

Independence isn't valuable in and of itself

That depends on how collaborative the projects you're working with are and how closely their visions align. The other alternative would be to give full control of all release CIs to GHCup developers. But that is unlikely to happen. And I'm not sure that's really better.

Independence is also good for having clear boundaries and allowing diverging goals. We have tried the other way and it utterly failed, in my opinion (the discussions I had with the Cabal team about switching the under-maintained GitLab CI back to GitHub are another such instance and have been dragging on for a year or more now).

I think you are vastly underestimating the amount of time people have invested in trying exactly what you're proposing, and failing utterly because of constantly diverging ideas, goals, philosophies, understandings of "provisional", etc.

I have no appetite or interest to revive these attempts and I will make sure to protect the boundaries of the GHCup project.

I don't see why we should need to have different priorities, pretty much everything you've listed is also a thing that upstream cares about.

I think my inline comments show that this is not the case.

I don't see why GHCup needs to do security releases

It doesn't have to, if:

  • upstream acknowledges that a certain version/branch is still important for distributors
  • upstream backports security patches swiftly
  • security vulnerabilities are disclosed properly after all relevant stakeholders were informed

None of that was the case in this instance, causing time pressure to deliver a backport.

It seems to me that you want GHCup to be more independent of upstream because it's been hard to agree with upstream. I would rather we solved that problem, and I think if we had someone who was paid to do the work in this space (rather than a variety of people all of whom would really rather not do the work), then that would make things significantly easier.

I feel you're contradicting yourself. First you say we share all goals, then you say it's hard to agree. Something does not add up. And I think I've provided sufficient anecdotal evidence that the goals are indeed not the same.

So I don't see this going forward.

I could make another proposal (and maybe I will), but I think you've articulated a lot of the problems well and I do think that what I propose would solve the same technical problems.

Yes, please open another proposal. I'm also wondering who will drive it and has sufficient understanding of all the upstream projects, as well as GHCup, stack etc.

You're asking for a project manager. The last time I tried that, I got mostly confused replies, because it touched too many things at the same time and it was unclear what was the motivation or the goal: #48

I don't think we're in a position where this can work. Right now, I'd rather solve real world problems, instead of weaving visions about how good communication in the Haskell community could look like.

@gbaz (Collaborator) commented Dec 1, 2023

I think this proposal is identifying a real issue, but I'm not sure it's all the way to a solution.

One complexity is that ghcup isn't just "a distribution" but rather sort of "a melange of distributions". Typically a software project would provide testing on some platforms, and mainly source-only releases (with perhaps a few binary reference releases, or binary releases specifically on OS X and Windows), and then it would be up to packagers for Debian, etc. to manage things through distro-specific channels. The plethora of Linux distributions, combined with the slowness of their update process against the rapid pace of GHC and cabal releases, means that the fully traditional model does not work -- users need newer versions of software than come with their distribution, and they also need the flexibility to pick amongst different versions of GHC to match the project they are working on.

This induces the need for something like ghcup: to provide a more flexible and frequently updated collection of releases than would come through a single distro channel (and which is architecturally far superior to the now-defunct Haskell Platform in this regard, because it is a single tool across multiple releases rather than an installer-per-release model).

But as the proposal notes, on top of that we then get the pressure to not just package upstream releases, but also to do the work that would happen through specific distro release channels: handling backports, ensuring builds are compatible with appropriate static libraries per platform (tinfo and the like), and so forth.

And the proposal is also quite right that this work can't all get pushed upstream, where release managers are necessarily more focused on ensuring bugs are fixed, deadlines are hit for cutoff, CI is run, etc. And machinery for CI testing is, despite our ardent desires, subject to different uses and constraints than machinery for producing builds.

All of which motivates some form of midstream bindists as described here, and as something which has existed for some time, provided both by ghcup maintainers and also by the stack team.

However, "midstream bindists" as such is not a clear proposal, nor is it clear what six months of resources would be used to build that does not now exist.

On top of that, we would need a policy for which platforms these bindists are supported on, and with what constraints. For example, static alpine/386 builds are broken, I believe, for issues which are not easily fixable outside of some serious work by ghchq. A downstream packager can't resolve that, any more than it could substitute if ghchq just dropped Windows support. So to the extent it isn't just builds/packaging but platform support, we necessarily need to rely on what the upstream developers are able to support, and that needs to be explicit in the policies.

Further, to the extent it is just builds/packaging, we need some criteria on what is and isn't worth supporting and within what window. For example, in some of the tickets linked, a big bone of contention is FreeBSD. So if this project is supported, what are the total platforms it's intended to cover, and in what configurations? And are those the platforms that the Haskell Foundation as such thinks are the core/correct ones?

And finally, if we have a list of criteria and platforms for which we intend to support midstream bindists more formally -- then what is the proposed outcome here? An engineer is paid six months to do what? Create specific CI infrastructure that does not now exist? What's the architecture, what needs to be done on top of the infra that already exists for midstream bindists? Will this CI cost continuing money? Where does that money come from? After that time, do we expect this infra will be stable, or will it need constant attention, and if so, where does the funding for that constant attention come from? All of which is to say I think the motivation is pretty good, but the technical content section really needs some fleshing out and detail.

Meanwhile, under budget, it suggests "The Haskell Foundation could start with supporting CI infrastructure and see how far volunteer efforts go." I imagine this support of infrastructure would need to be ongoing, not time delimited. I would like to see the proposal start there -- enumerating which machines would need to be supported indefinitely, at what yearly cost. I think that a proposal specifically for that as a first step would be the clearest, easiest to sort through and discuss, and have the greatest chance of success.

@hasufell (Contributor Author) commented Dec 2, 2023

@gbaz thanks for the thorough review... I'll just drop my thoughts here and amend the proposal text later, after I've gathered more feedback


However, "midstream bindists" as such is not a clear proposal, nor is it clear what six months of resources would be used to build that does not now exist.

Yes exactly. I've been doing the "closing the gap" work for free. That's why people are rightly confused why I'm bringing this up at all.

Chocolatey btw has also done this, because GHC bindists have been largely insufficient on Windows and needed manual fixes here and there. That's why releases on Chocolatey have also not been "prompt".

So what six months of this proposal would achieve is very clear:

  • funding and actually supporting the people/projects that do that work, instead of burning them out and requiring them to coordinate all upstream releases
  • ensuring that good bindist work continues at all: this is not a given, it may stop at some point and ghcup may degrade to just an installer with no vision
  • a stable process and maintenance of providing bindists (new and existing ones) and the ability to react to new requirements
    • say: a new distro doesn't work well with existing bindists... this requires ghcup to recognize the new distro and execute CI runs for historical versions to produce such a new bindist, then create metadata "revisions" (TBD) for those GHC versions
    • introduction of totally new platforms: e.g. NetBSD

You're investing in a maintenance model, not simply an end product.

On top of that, we would need a policy for which platforms these bindists are supported for

Any that the project has the capacity for. The test suites will be run, the results uploaded for visibility and shared with GHC HQ.
If a bindist doesn't pass the dogfood test (we can build ghcup with said bindist) or is otherwise unreliable,
then it may be dropped from the default channel. But given that the GHC test suite is incredibly flaky, a few
failing tests are hardly evidence that the binary distribution is broken (try to run the packaged test suite against an official GHC bindist on your local machine).
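A minimal sketch of the "dogfood test" described above, assuming only stock ghcup and cabal commands; the GHC version is a placeholder:

```sh
# Dogfood test sketch: a bindist counts as basically sane if it can build
# ghcup itself. The version number is a placeholder.
ghcup install ghc 9.4.8
ghcup set ghc 9.4.8
git clone https://github.com/haskell/ghcup-hs
cd ghcup-hs
cabal update
cabal build        # compile ghcup with the freshly installed bindist
```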

We may provide mechanisms that allow users to more easily distinguish bindists where all tests pass from those where that's not the case.

There's a possible design space that can be explored and needs funding.

GHCup itself will not run forks of GHC to keep a platform alive that doesn't work correctly anymore. But it may package such forks/patchsets (e.g. as a separate distribution channel).

We do not want to mislead users. That is one of the main policies. (We already provide experimental bindists in the prerelease and cross channels; see the sketch below.)
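For illustration, a hedged sketch of how a separate channel is consumed; the URL is a placeholder, not a real channel location:

```sh
# Sketch: opting into an extra (e.g. prerelease or cross) metadata channel.
# The URL below is a placeholder for the channel's YAML metadata file.
ghcup config add-release-channel https://example.org/ghcup-prereleases.yaml
ghcup list   # bindists from the extra channel now show up next to the defaults
```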

static alpine/386 builds are broken

And GHCup has never requested or built such bindists. This is exactly the problem that I'm trying to solve here: instead of waiting 2 years for GHC to merge my PR which was about dynamic i386 alpine bindists, it ended up as a follow-up PR that tried to add static i386 alpine bindists, which I have zero stakes in.

Further, to the extent it is just builds/packaging, we need some criteria on what is and isn't worth supporting and within what window. For example, in some of the tickets linked, a big bone of contention is FreeBSD. So if this project is supported, what are the total platforms it's intended to cover, and in what configurations? And are those the platforms that the Haskell Foundation as such thinks are the core/correct ones?

I think the proposal text already lists a couple of those that have completely vanished from upstream (or were never there):

  • i386 alpine linux
  • armv7 linux
  • x86_64 FreeBSD

However, I think I need to be more explicit here: I do not want the Haskell Foundation to dictate or decide on what specific platforms are to be supported. If that is the case, then I'll withdraw this proposal. I want this project to have autonomy and execute the vision described in it.

what is the proposed outcome here?

As the proposal text says: reliable, continuously maintained bindists, readily available.

What clarification do you desire?

An engineer is paid six months to do what?

  • implement revisions for GHCup (this is hard)
  • have a central GitHub repository with private runners for MacOS aarch64 and Linux aarch64 that can build every haskell toolchain tool
  • expand support for uncommon platforms (especially for historical GHC versions)
  • curate existing bindists (e.g. update old bindists that are linked against ancient ncurses and are starting to fail to install on newer ubuntu versions)
  • contribute to upstream build systems to make them more generic and design and implement ergonomic interfaces to build release binaries in a "sanctioned" way
  • contribute ideas, requirements and patches to upstream to support running test suites ad-hoc on the end-users system (see ghcup test)
  • design a process to report test failures back to upstream
  • implement nightlies for the entire haskell toolchain with permanent storage

This is more than 6 months of work.

What's the architecture

That's up to the discretion of the implementors. I'm not sure there's much value in discussing GitHub Actions code. I also don't have any concrete design in mind. I have the goals in mind.

Will this CI cost continuing money?

Yes.

Where does that money come from?

The Haskell Foundation.

After that time, do we expect this infra will be stable, or will it need constant attention, and if so, where does the funding for that constant attention come from?

It will require continuous work, unless upstream would provide a stable interface to building release binaries, which is unlikely to happen, even if we contribute to the idea of a common interface to build release binaries. Changes to github runners will require attention. New distro versions will require attention. And so forth.

The funding also comes from the Haskell Foundation. I'm a bit confused why you keep asking. This is the entire point of the tech-proposals, is it not? We're not asking for permission, we're asking for support. It is up to the Haskell Foundation to decide whether this is something valuable to support.

That is also why my suggestion is to start small and "just" support us with infrastructure.

This had already been discussed previously in a smaller context. You're probably aware that there are already private GitHub runners in the Haskell org that are used by GHCup and HLS release CI, as well as packages like unix, bytestring etc. We did ask for support to expand this. There was no formal agreement back then, but it appeared pretty uncontroversial.

The initial support for this proposal would overlap with that.

Meanwhile, under budget, it suggests "The Haskell Foundation could start with supporting CI infrastructure and see how far volunteer efforts go." I imagine this support of infrastructure would need to be ongoing, not time delimited. I would like to see the proposal start there -- enumerating which machines would need to be supported indefinitely, at what yearly cost.

That we can do.

@gbaz (Collaborator) commented Dec 2, 2023

However, I think I need to be more explicit here: I do not want the Haskell Foundation to dictate or decide on what specific platforms are to be supported. If that is the case, then I'll withdraw this proposal. I want this project to have autonomy and execute the vision described in it.

Ok, I think that's fine. But there should be both an initial target set of arches and OSes explicitly listed, and a "vision" sufficient to act as a guideline for which arches and OSes would be considered, with what priority. I.e. if you don't want a full explicit list, then the vision/criteria/priority function for the autonomous choices should be much clearer.

The funding also comes from the Haskell Foundation. I'm a bit confused why you keep asking. This is the entire point of the tech-proposals, is it not? We're not asking for permission, we're asking for support. It is up to the Haskell Foundation to decide whether this is something valuable to support.

In terms of why I'm pushing for this, what the HF needs to approve is indeed giving support, not permission. But that support needs to be clearly and explicitly enumerated. That means saying that you are asking for indefinite HF support for servers, estimated at a specific monthly cost, so that can be approved in a budget.

Similarly, for the "second part" of this proposal (assuming it is restructured to start small and only estimate the server cost for now), we will need more clarity on the amount asked and the intention for the "one-time" engineering cost. You are asking for up to a specific dollar amount for an engineer. So we need that estimated dollar amount so it can be approved. And we need a sense of whether that is a realistic amount -- do you have an engineer in mind? If not, by what criteria do we imagine that's a reasonable amount? And if not, also, how are we getting that engineer? Are you asking for the resources of HF to recruit this engineer as well? My hope would be that if you can bring together a volunteer working group (would you ask for nonmonetary HF support in recruiting and kickstarting that, btw?) then nailing down who might proceed to do what work at what expense could emerge. Part of this would probably be nailing down a candidate architecture and a more concrete estimate of the work as well. I certainly recognize why you would say "I also don't have any concrete design in mind. I have the goals in mind." -- but as we move towards a plan to act on those goals, we'll need that design built out, to ensure that it's a feasible thing to fund.

@gbaz (Collaborator) commented Dec 2, 2023

With regards to "armv7 linux" specifically, you've written that "GHCup itself will not run forks of GHC to keep a platform alive that doesn't work correctly anymore. But it may package such forks/patchsets (e.g. as a separate distribution channel)."

However, I believe that GHC has actually dropped support for armv7? (cf: https://gitlab.haskell.org/ghc/ghc/-/issues/21177) Apologies if this is not true -- I agree it is way too hard to keep up on what is and isn't supported. I think the platform tiers list is supposed to cover this, but my sense is that it is kept roughly accurate for tier 1 and not very accurate for tier 2 (https://gitlab.haskell.org/ghc/ghc/-/wikis/platforms/), and perhaps we need to help ghchq come up with better coordination and centralization for tier 2 support. I would be very interested in thoughts from @chreekat @bgamari and @mpickering in this regard -- I think the HF could really end up helping here in playing a coordinating role between core ghchq maintainers and volunteer community maintainers from various platforms.

@angerman commented Dec 3, 2023

FWIW, we can build armv7 Haskell code with at least 9.6 fairly OK. We will need to support 32-bit (and we don't really do that well across the ecosystem) due to WASM and JavaScript anyway. There is still a large set of armv7 handsets in use, so there is some demand for that platform, even if only cross-compiled.

@gbaz (Collaborator) commented Dec 4, 2023

The development of this discussion seems to indicate to me that it would generally reduce friction and frustration to have clear policies and centralized communication and decision-making over tier-1 and tier-2 platform support across ghc, cabal, and hls, at least. And further, as part of this, it would help if we could distinguish between tier-2-no-promise-of-binary-release and tier-2-no-expectation-it-works.

In one of the side discussions, the policy proposed is largely about not dropping binary releases for platforms unexpectedly. I don't think this is the right approach. In particular, I don't think dropping binary releases for a less used platform should be our primary concern -- users can continue to use the last released version which they were already using. So of course we should try to support the platforms we feasibly can, but if we can't, then it won't necessarily be disruptive if a few platforms stall for a while. Communicating about platform support should be an important goal, as should be not whipsawing around unstably between supported platforms.

However, we should have positive criteria by which we try to determine which platforms it is most important to support, perhaps coupled to some measure of overall usage and user-demand. These positive criteria may give a list of platforms too wide to support. However, they should at least give a sense of what desired goals would be, and also indicate which platforms fall short and would be the most reasonable to consider to drop support for over time.

Both of the above points seem somewhat out of scope for this proposal, but I think they get at addressing some of the underlying frictions which motivate it.

@hasufell (Contributor Author) commented Dec 5, 2023

In one of the side discussions, the policy proposed is largely about not dropping binary releases for platforms unexpectedly. I don't think this is the right approach. In particular, I don't think dropping binary releases for a less used platform should be our primary concern -- users can continue to use the last released version which they were already using. So of course we should try to support the platforms we feasibly can, but if we can't, then it won't necessarily be disruptive if a few platforms stall for a while. Communicating about platform support should be an important goal, as should be not whipsawing around unstably between supported platforms.

Unfortunately, this doesn't work for GHCup.

GHCup was initially developed to unify installation expectations across platforms (e.g. linux and darwin). So a team that wants to follow the 'recommended' tag should be able to expect a similar experience across all platforms and tools.

That doesn't work if there's no installation artifact for FreeBSD for the 'recommended' GHC version.

Having different recommended versions per platform compromises the consistency goal: also imagine a course of 200 students, all getting slightly different toolchains.

It has happened that upstream asked GHCup to bump the 'recommended' version but did not provide all the bindists that the current recommended version has.

So whatever policy e.g. GHC HQ comes up with, I doubt it will help the GHCup project much.
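To make the consistency expectation above concrete, a hedged sketch of the tag-based workflow that breaks whenever a platform lacks a bindist for whichever version currently carries the tag:

```sh
# Sketch: teams typically follow the 'recommended' tag rather than pinning a
# version; this only gives a uniform experience if every platform has a
# bindist for the GHC/HLS version that currently carries the tag.
ghcup install ghc recommended
ghcup install hls recommended
```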

However, we should have positive criteria by which we try to determine which platforms it is most important to support, perhaps coupled to some measure of overall usage and user-demand.

This is a chicken and egg problem again. Are there no users because demand is low or because the toolchain experience is awful?

Windows users can switch to WSL2 to circumvent the grief that comes with handling Haskell on windows. FreeBSD users probably just compile from source or use Linuxulator. But all of these have issues (especially WSL).

I doubt that we can get good metrics that help us determine exactly whether investing in platform XY has a good cost-benefit ratio under the goal of increasing Haskell adoption.

For profit-oriented companies, these decisions are easier to make: if they don't have clients that are interested in platform XY, no one is explicitly funding it, and it's becoming somewhat of a maintenance burden... then drop it. As such, their perception is also biased towards their client base, unless there's evidence for a large uncharted market.

The Haskell Foundation doesn't live by the latter principles, I believe, and so should be the buffer zone that ensures that boring platforms get their spot.

All in all I'm really reluctant to get more specific or provide more evidence of use cases, user base strength etc, because of the chicken and egg problem: companies might just pick another language after investing some time to understand the state of platform support and cross compilation in Haskell. It is not amazing at all.

@LaurentRDC (Contributor)

Thank you for writing this proposal. I appreciate that it is very clear and concise.

There is no reason the installation experience / quality assurance, backporting story, and supported platforms should be different for each foundational piece of Haskell development, other than the fact that each project is independently managed for historical reasons.

GHC, stack/cabal, and HLS form the de-facto Haskell development platform, and I consider GHCup to be the de-facto standard way to install these things. If you squint a little and see the GHCup project as the 'installation experience' project, then it only makes sense that GHCup manages what it provides for users.

@michaelpj writes:

I think we should be able to work together and get a better outcome, especially if we have someone who is going to do the implementation work

I want to believe, I really do, but coordination between 4 major projects is difficult. Isn't this why the Haskell Foundation was created, to support cross-cutting concerns like this proposal?

To me, the crux of the problem is something that @michaelpj mentioned: who's going to decide whether platform X should be supported for GHC, cabal, stack, and HLS? @hasufell has been vocal about FreeBSD support; how would you envision dealing with users complaining of problems in a binary artifact that isn't supported by the GHC team?

@hasufell (Contributor Author) commented Dec 5, 2023

how would you envision dealing with users complaining of problems in a binary artifact that isn't supported by the GHC team?

In the end, this is open source: people can write patches to fix FreeBSD stuff, even if GHC HQ doesn't "explicitly" support it and submit them upstream. They can open bug reports and see if any other FreeBSD users have solutions. GHC HQ said "someone needs to step up". If someone builds binaries and others are testing them and submitting bug reports, then that is "stepping up" in my book.

Afair (anyone correct me if I am wrong) GHC HQ had the expectation that "stepping up" includes someone maintaining the CI portion on GHC gitlab for FreeBSD. I find that in fact an unrealistic proposition and not friendly towards potential contributors.

Is that a sub-par situation? Yes. Is that a reason to not build binaries? No.

@gbaz (Collaborator) commented Dec 5, 2023

So whatever policy e.g. GHC HQ comes up with, I doubt it will help the GHCup project much.

Right -- I was not proposing a policy for ghcup. I was expressing what I would hope would factor into a policy set by ghchq. That said, there is no currently proposed policy for ghcup, or for this project, and again, I would urge that such a policy be described in some way. I tend to agree that "current haskell users demanding it" is subject to the so-called chicken/egg problem. But perhaps that could be weighted along with "general platform deployment quantity and significance" or the like? Maybe what we want is a list of the following form:

"A platform should be considered for consistent inclusion if any of the following holds: A) a significant quantity of existing haskell users request it. B) it is an architecture with a large [greater than x] install base. C) it is an architecture with significant industrial support or usage in a specific important domain. D) etc...."

It also seems to me that one mismatch here is most upstream tools have some notion (maybe not as formal as ghc) of tier 1 vs tier 2 platforms, and ghcup uniquely does not. Even if ghcup's tier 1 platforms are expanded relative to those from ghchq, and kept more stable, I would still imagine/hope that an experimental/tier-2 layer could be of some use?

@hasufell (Contributor Author) commented Dec 6, 2023

In GHCup terms, tier1 means "we can build bindists" (and it passes basic sanity).

There's not much more to it.

An interesting problem is if there are large release quality differences between platforms.

That happened with GHC 8.10.7 when aarch64 darwin got support through llvm. That impacts the quality of the 'recommended' tag and those are very wild times for distributors. I opened this issue to allow for better mechanisms of informing users of issues specific to a bindist.

But I don't see how that is relevant to the current proposal.

If upstreams want to create policies for their projects wrt platform support, they're welcome. I don't think it impacts this proposal much though. I will keep supporting FreeBSD as long as it works. But that's just one of many points of the proposal.

I don't want to get drowned in coordination issues with upstreams again. A joint "management" is not the only solution and I doubt the haskell tooling landscape has the attitude to welcome such a structure (which I'm saying without judgement). Clear boundaries and precise and timely communication at the junction points is another approach.

I'm very certain that GHCup can fulfill its mission best with a certain amount of independence.

@gbaz (Collaborator) commented Dec 6, 2023

"In GHCup terms, tier1 means "we can build bindists" (and it passes basic sanity)."

Tier 1 is platforms for which support is necessary for a release; tier 2 is best effort. Ghcup doesn't have such a distinction, or at least not a formal one. So if anything gets added to ghcup, there's implicitly a promise of best-effort support indefinitely. That seems like it removes some potential for flexibility.

"I don't want to get drowned in coordination issues with upstreams again."

I am not proposing that -- what I am asking is that if you say "trust me, I'll come up with criteria by which I pick which platforms are prioritized and supported in which way", then you enumerate the criteria with examples. I think this proposal stands a far better chance if it is about platforms chosen by certain criteria, as opposed to "some platforms, but we don't know which, and there are no criteria that could help you guess which".

@hasufell (Contributor Author) commented Dec 16, 2023

@gbaz I have updated the proposal with:

@gbaz (Collaborator) commented Dec 16, 2023

Thanks! These are very helpful additions. Let me add that we discussed this proposal at the tech working group meeting today (unfortunately before these additions) and we all agreed you had helped to articulate some very serious issues that need systematic attention, and some genuine "process blockers" that we must not allow to stand in the way of forward progress. I think we want to have some followup discussion on the exact ways to improve things -- and this will be ways, not a single way, because there are lots of areas for improvement. This will involve talking to various teams to understand their capacities and blockers and needs in more detail, so I unfortunately can't be much more concrete beyond that.

@hasufell (Contributor Author)

Well, I think this proposal is ready and will not see significant changes from my side. So I'm asking the HF to vote on it.

@ffaf1 left a comment

Thanks for adding a detailed budget for phase 1; it looks much better.

@michaelpj left a comment

I don't have anything more to add.

@jmct (Contributor) commented Dec 20, 2023

@hasufell thanks for your work on this!

As there won't be any more large changes from your end, I will take this and discuss with the TWG as well as figure out how this might work within our budgetary outlook.

If any questions arise, or if I need more details, I'll let you know.

@jmct (Contributor) commented Mar 20, 2024

The HF has reached an agreement with @hasufell that addresses some of the points of this proposal by funding/providing build infrastructure, while leaving other aspects as part of a potential future proposal.

After discussing it with the TWG I feel that there are a few possible paths forward:

  • We archive the proposal as-is, leaving it as something that can be referenced in the future by another proposal.

  • We reduce the scope of this proposal to fit what @hasufell and I have agreed to. (EDIT: I removed some editorialization that was potentially making things unclear.)

  • We reduce the scope of the proposal by removing the aspects that the HF and @hasufell have agreed to. This would mean removing the cost of runners, but keeping the proposal for the improved "installation experience", the proposed timeline and the request for funding to pay a developer to work on it.

My personal impression is that the first option is the most prudent: archive this so that it can still be referenced and adapted in the future, while acknowledging that the proposal is not being considered as-is. That said, @hasufell might have a different view (or maybe there's another path forward I haven't considered).

@hasufell
Copy link
Contributor Author

@jmct the proposal text already mentions:

As such, we propose the "phase 1" of this proposal that focuses on advancing the existing GitHub CI support for the GitHub Haskell organization, spearheaded by @angerman (providing aarch64 linux and darwin M1 self-hosted machines). This infrastructure is already used by GHCup, HLS and other projects and needs more runners.

And:

Phase 2 and 3 may need follow-up proposals with more concrete implementation strategy, cost analysis and potentially hiring a full-time employee.

Is anything unclear about it?

@jmct
Copy link
Contributor

jmct commented Mar 20, 2024

My apologies, I'll try to be more precise.

Currently the proposal requests a full-time developer and a larger budget, so at the very least those things would have to be amended, as the HF has not committed to funding or contracting a full-time developer. If you do amend it in this way, that's basically my second bullet point.

You're correct that the proposal already states that Phase 2 and Phase 3 would likely need further proposals; that remains true. If you still wanted to propose funding a full-time dev to work on any of the phases, as well as outline the future goals, that would essentially be my third bullet point, and the TWG would have to discuss the amended proposal.

Whether you want to amend the proposal is up to you.

@Tritlo
Copy link

Tritlo commented Mar 20, 2024

As a side note, I think we should be wary of proposals with phases. What does it mean to accept a proposal with three phases but only provide funding for one? Each phase should be a separate proposal.

@hasufell
Copy link
Contributor Author

I'm a bit confused.

The proposal (maybe less clearly than intended) says that right now only phase 1 is to be considered as an immediate goal with funding. And I was told in private that this had been accepted.

The proposal further (again, maybe less clearly than intended) lays out phases 2 and 3, which are part of the whole train of thought/motivation; phase 1 doesn't really make sense without them (in spirit, at least), although they may be delayed indefinitely.

As such, I was expecting the TWG to

  • agree with the motivation and the problem statement
  • agree that all phases are theoretically desirable (given infinite resources)... this is non-binding, but if HF here indicates they have other opinions, that's a very important piece of information for me
  • agree that phase 1 will get concrete support

As such, I don't really understand the proposed revisions. The only thing I can see that needs clarification is:

  • phases 2 and 3 are not to be funded right now, but absolutely need to be mentioned here
  • whether the budget portion about the concrete CI cost should be clearly marked as an example (the TWG wanted more precise numbers)

@jmct
Copy link
Contributor

jmct commented Mar 20, 2024

There's clearly been a miscommunication somewhere, I'm sorry about that.

The HF has agreed to support build infrastructure that aligns with phase 1 of the proposal. The goal has been to have HF support for critical Haskell infrastructure, which we consider GHCUp to be.

I had the impression that this support addressed the majority of your concerns, and that while your ideal would be to have a funded developer working on this (as the proposal states), having access to HF supported build infrastructure was sufficient for now.

I'm happy to try and sort it out here, or hop on a call (we can then write up the results of the call here).

@hasufell
Copy link
Contributor Author

hasufell commented Mar 20, 2024

The HF has agreed to support build infrastructure that aligns with phase 1 of the proposal. The goal has been to have HF support for critical Haskell infrastructure, which we consider GHCUp to be.

Yes, I'm aware, as stated above.

I had the impression that this support addressed the majority of your concerns, and that while your ideal would be to have a funded developer working on this (as the proposal states), having access to HF supported build infrastructure was sufficient for now.

Correct.


As such, I expected this proposal to be accepted and merged, and I'm confused why it wouldn't be, given that it only asks for funding of phase 1 in the budget section.

I can clarify the wording and restructure the sections, if that helps, but I'm starting to wonder whether phases 2 and 3 are maybe deemed undesirable by the TWG, even given infinite resources. Were those phases discussed at all?

This proposal being formally accepted is important to me, because it means HF acknowledges the structural problems that the problem statement outlines and aligns with the mission of GHCup.

@gbaz
Copy link
Collaborator

gbaz commented Mar 20, 2024

You write that you were expecting the TWG to

agree that all phases are theoretically desirable (given infinite resources)... this is non-binding, but if HF here indicates they have other opinions, that's a very important piece of information for me

I don't think we're in a position to take non-binding, non-actionable decisions. A key part of accepting a proposal is evaluating the specific plans for feasibility, both technically and in terms of resources. All we can do about things not yet in such a state is to individually state our dispositions.

Speaking for myself, I think both parts 2 and 3 look promising as a future direction of travel. But the reason we asked for part 1 to be accepted first was that it was the only part with enough specifics nailed down to be reasonably decided on. At the TWG meeting we agreed with part 1, and felt that it was best handled by delegating to the HF rather than accepting directly, since there are a lot of details that will change in the course of implementation. So our decision not to just vote on it was basically expediency -- no reason for a lot of process over something already underway with lots of fiddly details subject to change.

Parts 2 and 3 are promising, and I personally look favorably on them as individual goals, but we didn't have an in-depth discussion of them, because the details are not all worked out. If you would like, we can dig in further and have a more serious discussion at our next meeting, which we would then relate back here. However, process-wise, I don't think we're inclined to take non-binding votes on such things.

@hasufell
Copy link
Contributor Author

Well, given that I took the time to write this proposal, I think I have the right to ask for a formal vote. Otherwise I consider the tech proposal process broken.

I will adjust the proposal text to make clear that phases 2 and 3 are "distant ideas" that help understand the motivation, but are not part of the proposal proper.

@gbaz
Copy link
Collaborator

gbaz commented Mar 21, 2024

Sure, I'd be happy to vote yes on phase 1, if it is cleaned up to more precisely reflect the settled-on work already partially underway, and I think the committee as a whole would feel likewise!

We were only trying to save you the overhead of further cleanups to the proposal, but if you want to take it on, then that's no problem in my opinion.

@hasufell
Copy link
Contributor Author

I updated the proposal to move phases 2 and 3 to a "future work" section and rewrote the timeline and budget sections.

@mpickering
Copy link

I think it would be useful if the proposal addressed the issue of governance. The proposal is quite implicit at the moment about who and what is being funded to perform the work. Since the HF wants to fund this work and endorse a change in the status quo regarding who is responsible for producing artifacts for distribution, I think it would be useful to be explicit about this.

Perhaps some questions which would be helpful to consider would be:

  • Who is responsible for producing the midstream bindists?
  • How are decisions about the platforms supported by midstream bindists made and documented?
  • How does someone become involved in the decision making process of producing midstream bindists?

Being clear about the answers to these questions will help the project become more sustainable into the future. Decoupling the production of bindists from the respective projects seems like it will allow project maintainers to concentrate on core development tasks rather than difficulties in producing releases.

Since decisions about platform support and distribution of the toolchain affect all Haskell users I would expect that the Haskell Foundation would be interested in establishing answers to questions like these when it comes to funding a project like this.

@hasufell
Copy link
Contributor Author

Who is responsible for producing the midstream bindists?

The GHCup project.

How are decisions about the platforms supported by midstream bindists made and documented?

We plan to:

Decisions are reached through internal consensus.

How does someone become involved in the decision making process of producing midstream bindists?

By demonstrating that they share the same values as the GHCup project and making contributions that drive those values forward.

There is no formal process (similar to GHC HQ). It is open-source anarchy, driven by communication, shared values and building trust.

@mpickering
Copy link

Thanks @hasufell, that's useful. Is there a page which lists who is part of the GHCUp project and involved in the decision making process?

Given the importance of the distribution of the toolchain, it seems important that all the stakeholders know who is involved in making key decisions and how to influence that process if they have an opinion (even if they do not have a direct say).

@hasufell
Copy link
Contributor Author

which lists who is part of the GHCUp project and involved in the decision making process?

@jmct
Copy link
Contributor

jmct commented Mar 25, 2024

Thanks @hasufell for these clarifications; this is shaping up nicely!

These private runners will be made available to the whole Haskell GitHub org and as such benefit other projects there as well (like HLS, Cabal, bytestring, etc.).

Forgive my ignorance on this, but my understanding of the process is that you can make the runners available to the entire GitHub Org via GitHub's "self-hosted runner" mechanism. Is there anything that needs to be done for the individual projects under the Haskell Org to get access? My understanding is 'no', but I want to make sure.

@hasufell
Copy link
Contributor Author

Is there anything that needs to be done for the individual projects under the Haskell Org to get access?

Oh yes. There is a potential for misuse, so we enable the runners selectively for repositories.
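
To illustrate how a repository would consume the machines once it has been enabled (a minimal sketch, not the actual org configuration -- the workflow name and build command below are made up): jobs target self-hosted machines via labels in `runs-on`. GitHub automatically assigns the labels `self-hosted`, the OS and the architecture when a runner is registered, and custom labels can be added on top.

```yaml
# Hypothetical workflow sketch; the real label names depend on how the
# self-hosted runners are registered for the Haskell GitHub org.
name: build-aarch64-linux
on: [push]
jobs:
  build:
    # 'self-hosted', 'Linux' and 'ARM64' are labels GitHub assigns
    # automatically to a self-hosted aarch64 Linux runner; custom
    # labels can also be added at registration time.
    runs-on: [self-hosted, Linux, ARM64]
    steps:
      - uses: actions/checkout@v4
      # Placeholder build step; a real project would run its own build.
      - run: cabal build all
```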

@hasufell
Copy link
Contributor Author

I was told the vote was carried out this Friday. I would prefer the communication regarding the decision to happen here in public.

@jmct can you give an update here?

@jmct
Copy link
Contributor

jmct commented Apr 21, 2024

Yes, the vote on the current state of the proposal passed. :) (My weekend started after the vote, so I haven't yet done the admin of merging things, etc. I will do that tomorrow.)

A few notes on the discussion/vote:

  • If the current proposal goes well and you'd like to start on the future work, it would require a new tech proposal. The current vote does not guarantee that such a proposal would pass.

  • The TWG was concerned about 'single points of failure' in HF affiliated projects generally and in GHCUp's governance model specifically.

In order to address this concern the Haskell Foundation is requiring that all affiliated projects have a transition plan in the event that a maintainer becomes unreachable or unable to maintain the project any longer. In this vein I'll work with all relevant maintainers (@hasufell in this case) to develop such a transition plan. Please correct us if GHCUp already has one that we were not aware of.

@hasufell
Copy link
Contributor Author

hasufell commented Apr 21, 2024

@jmct thank you for the swift answer.

I have sent an email conversation regarding the transition plan.


I also want to give feedback regarding the overall tech proposal process.

My impression is that there were some cases of miscommunication and a couple of problems with the formal process:

  • there was a gap between what I understood from a private conversation and what was later communicated here in the thread
  • for some reason people started assuming that I'm not interested in a formal approval or that I'm too tired to update the proposal text to allow such a formal approval
  • there were two votes on the proposal. But it was not communicated to me that the first failed. A proposal should only have another vote if the first failed and the proposal got a significant update. If the proposal text is insufficient to be voted on, then that has to be communicated clearly as well. There really should be only one vote per PR.
  • at times it wasn't clear to me at all what the status is... it would help to have a clear lifecycle of proposals with labels attached

@jmct
Copy link
Contributor

jmct commented Apr 21, 2024

I appreciate the feedback.

I'll have to reflect on the miscommunication. I hope it goes without saying that I didn't intend for that, so I'll have to consider how I can minimize the chance of miscommunications in the future.

One thing I will do is institute a better process for communicating the status of proposals. That is something I can do immediately, and it would improve the experience for those who write proposals.

@hasufell
Copy link
Contributor Author

In order to address this concern the Haskell Foundation is requiring that all affiliated projects have a transition plan in the event that a maintainer becomes unreachable or unable to maintain the project any longer. In this vein I'll work with all relevant maintainers (@hasufell in this case) to develop such a transition plan. Please correct us if GHCUp already has one that we were not aware of.

The transition plan is now outlined here: https://www.haskell.org/ghcup/about/#transition-plan-in-case-of-maintainer-absence

Is there anything left to have this PR merged?

@jmct jmct merged commit 494ee2c into haskellfoundation:main Apr 23, 2024
@gbaz
Copy link
Collaborator

gbaz commented Apr 24, 2024

Glad this PR is merged!

Some brief responses to three points above, for clarity:

for some reason people started assuming that I'm not interested in a formal approval or that I'm too tired to update the proposal text to allow such a formal approval.

There was no assumption, just a question. We offered either the path of revision and acceptance or archiving. We indicated that we thought archiving was "most prudent" (i.e. simplest), but left the door open for any of the three paths discussed in #61 (comment)

there were two votes on the proposal. But it was not communicated to me that the first failed. A proposal should only have another vote if the first failed and the proposal got a significant update. If the proposal text is insufficient to be voted on, then that has to be communicated clearly as well. There really should be only one vote per PR.

You are correct that there should only be one vote per PR (or perhaps per major update/revision of a PR). But there was indeed only one vote that I can recall, the successful one! We had earlier reached a consensus to ask for revision, but took no binding votes.

at times it wasn't clear to me at all what the status is... it would help to have a clear lifecycle of proposals with labels attached

Here are our current labels. Perhaps we have not been consistent enough in applying them due to the transition in committee membership and leadership. Any suggestions for improved labels are welcome!

[screenshot of the committee's current proposal status labels]

@hasufell
Copy link
Contributor Author

Thanks!

Does the file have to be moved into the accepted/ folder?

https://github.com/haskellfoundation/tech-proposals/tree/main/proposals/accepted
