Proposal: option to lock child goroutines to same OS thread #23758

Closed
shelby3 opened this Issue Feb 9, 2018 · 29 comments

shelby3 commented Feb 9, 2018

I responded in a comment to a Stack Overflow question, Does runtime.LockOSThread allow child goroutines to run in same OS thread?, about the unavailability of this feature:

Even though it’s not currently supported, it has been proposed for a new optional capability. And I have proposed other use cases.

I became motivated to propose this here when I realized a very important style of programming that can’t be done without adding this capability.
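To make the gap concrete, here is a minimal sketch (the printed strings are mine, for illustration) of how runtime.LockOSThread behaves today: it pins only the calling goroutine, and child goroutines inherit no pinning, which is exactly what this proposal asks to change.

```go
package main

import (
	"fmt"
	"runtime"
)

// runtime.LockOSThread pins only the *calling* goroutine to its
// current OS thread. Goroutines spawned afterwards are scheduled
// on any available thread -- there is no inherited pinning.
func main() {
	runtime.LockOSThread()
	defer runtime.UnlockOSThread()

	done := make(chan struct{})
	go func() {
		// This child goroutine may run on a different OS thread;
		// LockOSThread in the parent does not constrain it.
		fmt.Println("child may run on any thread")
		close(done)
	}()
	<-done
	fmt.Println("parent stays pinned")
}
```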

I searched for other issue threads and found these somewhat related ones:

#4056
#21827
#12462

Perhaps the last one, #12462, could be addressed with this proposal?

How does the community feel about this proposal? Would it likely be accepted into the mainline if someone did the work?

EDIT: please read my follow-up post, wherein I explain that this really isn’t about supporting FRP, but more fundamentally about optimal event handling (in any programming paradigm) in the UI thread and lockless concurrency design. I don’t wish for readers to misinterpret this proposal as an attempt to turn Go into Haskell (I know Go’s target audience would be turned off by that). That’s not the point.

@gopherbot gopherbot added this to the Proposal milestone Feb 9, 2018

@gopherbot gopherbot added the Proposal label Feb 9, 2018

ianlancetaylor (Contributor) commented Feb 9, 2018

I'm not sure I understand what you are proposing. If one goroutine is locked to a thread, then how can a different goroutine be locked to the same thread? What happens if both goroutines should run simultaneously?

aclements (Member) commented Feb 9, 2018

I became motivated to propose this here when I realized a very important style of programming that can’t be done without adding this capability.

I haven't read the entire zenscript thread you linked (or really much of it; it's pretty long :), but I don't understand what problem in FRP is solved by forcing multiple goroutines to run on the same OS thread. Could you summarize, perhaps? Go's use of OS threads is transparent unless you're interacting with things outside of Go that care about OS threads (e.g., things that have thread-local state). And locking multiple goroutines to the same OS thread isn't going to give you more control over their scheduling because the Go scheduler will still preempt and switch between them.

OTOH, having a mechanism to lock multiple goroutines to the same OS thread is a recipe for deadlocks. What if one of those goroutines enters a system call that blocks until some action is performed by the other goroutine locked to that thread?

shelby3 commented Feb 9, 2018

@ianlancetaylor, my thought was that a goroutine can be blocked on, for example, I/O or waiting on a message/signal in a channel (or an analog that can model waiting for events). Then another goroutine in the same thread could run. Goroutines are superior to Promise for modeling “logical threads”.

Perhaps this means OS calls would have to be implicitly transferred (by the runtime, not the Go code) to a different thread in that case. I haven’t thought about the internal implementation details. Very high-performance, ultra-low-latency use cases benefit from non-blocking I/O, so in those cases you’d want to not switch to another goroutine in the same thread.

Afaics this has some analogous capabilities to an event loop in JavaScript with callbacks. It would afaics facilitate code targeting the UI thread in JavaScript via Gopher et al (and maybe also Android per #12462?) programming. Of course we’d still want to retain the capability to spawn child worker pool goroutines which don’t have the restriction.

@aclements,

what problem in FRP is solved by forcing multiple goroutines to run on the same OS thread. Could you summarize, perhaps?

  1. Event handling is traditionally an imperative spaghetti. I posited that the FRP model I outlined would provide much saner separation-of-concerns that would provide the sort of improvements in clarity, reasoning, and reliability that Haskellites claim.

    Without restating those points in detail and even to separate my point from FRP entirely, when some UI code is waiting on events, we don’t want to have to write that code to be preemptively interrupted by each event listener running simultaneously on a different thread. The problem of thread safety and data races makes reasoning very difficult. We don’t need nor want multi-threading in UI code.

  2. Related use case (or a generalization of the prior point) is aiding lockless concurrency design by restricting to a single-thread. This can greatly aid reasoning about shared mutable state (which is one of the justifications for the JavaScript single-threaded event model), because the opportunities for the shared state to have a race condition are more restricted since only one of the goroutines can run simultaneously. This is often quite adequate, because for example there can be multiple instances of these single-threaded concurrency constructs running simultaneously.

I haven’t studied what Gopher does. Perhaps it provides access to the JavaScript event loop separately from the concept of a goroutine. Yet I would prefer to see concurrency modeled exclusively with goroutines (or even better algebraic effects, but that’s not applicable here) because goroutines are superior to Promise. With a WASM Go target under development, hopefully we won’t be forced to use JavaScript and Promise forever.

bcmills (Member) commented Feb 9, 2018

a goroutine can be blocked on for example I/O or waiting on a message/signal in a channel (or analog that can model waiting for events). So then another goroutine in the same thread could run.

That is already how the Go runtime works: when one goroutine blocks, it chooses another goroutine to run on the same thread.

The problem of thread safety and data races makes reasoning very difficult.

Eliminating threading does not eliminate concurrency bugs or aliasing bugs. Communicating by sharing memory is precarious even if you can assume that operations won't be preempted.

aiding lockless concurrency design by restricting to a single-thread

You can already restrict a variable to a single goroutine (by not passing it to other goroutines). Can you give some more concrete examples?
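A minimal sketch of the confinement bcmills describes, under the assumption that a single “owner” goroutine holds the state: no other goroutine touches the variable directly, so no locks (and no thread pinning) are needed.

```go
package main

import "fmt"

// Goroutine confinement: the counter is owned by exactly one
// goroutine; other goroutines mutate it only by sending on a channel.
func main() {
	inc := make(chan int)
	total := make(chan int)

	go func() {
		sum := 0 // confined: only this goroutine ever touches sum
		for n := range inc {
			sum += n
		}
		total <- sum
	}()

	for i := 1; i <= 3; i++ {
		inc <- i
	}
	close(inc)
	fmt.Println(<-total) // 6
}
```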

shelby3 commented Feb 9, 2018

Eliminating threading does not eliminate concurrency bugs or aliasing bugs.

Not entirely, but it can greatly facilitate reasoning and can even facilitate proving safety more easily and with more algorithmic flexibility than Rust’s lifetimes + exclusive mutable borrowing. I analyzed this in detail in issue thread 35 at the Zenscript repository. I even showed false-positive cases that Rust’s checker can’t prove are safe, yet are safe. Willy-nilly preemption is often impossible to reason about, and we punt to locking with mutexes, which can’t be proven free of deadlocks or livelocks.

Afaik, Go can’t even prove safety at all, and punts to a runtime check, which of course isn’t safety unless you’re sure your unit tests have hit every corner of the universe.

Here I am suggesting how in general to get better safety margins easily (and also enabling easier and more optimal programming for a single UI thread with concurrency as one specific use case). Isn’t Go all about making it easier and hassle-free for programmers? What we like to refer to as a “no brainer” decision if it doesn’t have any unforeseen downsides.

Why go multi-threaded in the cases where the programmer doesn’t want it? What is the downside of providing the requested feature?

The FRP use case I referred to in the OP is a prime example of provable safety obtained by single-threading multiple logical threads. For optimal coding (least boilerplate and no conflation of separate concerns) it requires continuations, i.e. goroutines. Implementing event listener code with callbacks or even Promise conflates what should be separate concerns. I explained that in more detail in the link I provided explaining why goroutines are superior to Promise. This doesn’t mean the feature request is only applicable to FRP. Rather, I’m presenting it as an example of how to attain provably safe lockless code if all the continuations run in the same OS thread.

(Btw, having to beg for this feature is yet another reason that algebraic effects are conceptually superior to goroutines, because the programmer has full control over the continuations and can decide whether to use single threads or M:N in the handlers. Although goroutines may be, and likely are, more performance-optimized, so I’m hoping this feature makes sense for Go.)

Communicating by sharing memory is precarious even if you can assume that operations won't be preempted.

Part of the challenge is proving when sharing is not overlapping. Simply forbidding all sharing (such as everything must be copied) even when not overlapping is algorithmically inflexible, as a dual to the inflexibility of Rust which forces exclusive borrowing everywhere.

You can already restrict a variable to a single goroutine (by not passing it to other goroutines).

That doesn’t facilitate multiple logical threads with only one running at the same time. Or does it somehow that I’m not aware of?

Can you give some more concrete examples?

The asynchronous single-threaded concurrency that can be obtained in the JavaScript Promise examples, except afaics without their negatives, if we have the feature I requested for goroutines.

Promises also create logical threads with their own “stacks”, but those stacks are closures stored on the heap, and they run in a single thread. Goroutines provide similar functionality (if they can optionally be requested to run in the same OS thread), with superior semantics (and, I believe, better performance) than Promise, as explained at the link I provided.
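The contrast in the paragraph above can be sketched as follows (the `fetch` helper is an invented stand-in for any asynchronous operation): a goroutine’s own stack lets sequential-looking code await several results, where a Promise chain would need one heap closure per step.

```go
package main

import "fmt"

// fetch is a stand-in for an async operation that eventually
// delivers a value; the buffered channel makes it non-blocking here.
func fetch(n int) <-chan int {
	ch := make(chan int, 1)
	ch <- n * 2
	return ch
}

func main() {
	// "Await" each result in straight-line code; the goroutine's own
	// stack plays the role of the Promise chain's heap closures.
	a := <-fetch(1)
	b := <-fetch(a)
	fmt.Println(a, b) // 2 4
}
```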

aclements (Member) commented Feb 9, 2018

  1. Event handling is traditionally an imperative spaghetti. I posited that the FRP model I outlined would provide much saner separation-of-concerns that would provide the sort of improvements in clarity, reasoning, and reliability that Haskellites claim.

Without restating those points in detail and even to separate my point from FRP entirely, when some UI code is waiting on events, we don’t want to have to write that code to be preemptively interrupted by each event listener running simultaneously on a different thread. The problem of thread safety and data races makes reasoning very difficult. We don’t need nor want multi-threading in UI code.

Right, I understand the differences between event-driven programming and FRP. What I don't understand is what this has to do with locking multiple goroutines to the same OS thread. It doesn't solve anything about thread safety and data races any more than, say, running a multi-threaded program on a single-core CPU does. The goroutines can still context switch at any point, so you still need to ensure thread safety.

  2. Related use case (or a generalization of the prior point) is aiding lockless concurrency design by restricting to a single-thread. This can greatly aid reasoning about shared mutable state (which is one of the justifications for the JavaScript single-threaded event model), because the opportunities for the shared state to have a race condition are more restricted since only one of the goroutines can run simultaneously. This is often quite adequate, because for example there can be multiple instances of these single-threaded concurrency constructs running simultaneously.

It doesn't help here either, again because goroutines can context switch at any time. The only "advantage" of restricting to a single OS thread is that you could use non-atomic load and store operations, but even then you need atomics for read-modify-write operations, such as adding to a shared variable.

Here I am suggesting how in general to get better safety margins easily

That would obviously be great, but locking multiple goroutines to the same OS thread in no way achieves this.

shelby3 commented Feb 9, 2018

The goroutines can still context switch at any point

Oh I didn’t contemplate that in the single-threaded case. I thought they only context switch when blocked or non-deterministically to balance load/starvation. Why would they context switch for reason other than blocking when confined to switching within the same OS thread? Would they switch at non-deterministic points to prevent starvation of the other goroutines in the same OS thread?

So in the single-threaded case, I suppose I’m expecting that the programmer is given manual control over starvation and the goroutines must only block deterministically.

I suppose I grok the argument for the benefit of not giving the programmer manual control in the M:N case, analogous to arguments that can be made about letting the runtime handle memory GC.

That would obviously be great, but locking multiple goroutines to the same OS thread in no way achieves this.

Afaics it would achieve it, if we can (even optionally) have deterministic (i.e. compile-time) reasoning about context switch points in the single OS thread case.

ianlancetaylor (Contributor) commented Feb 9, 2018

Frankly it sounds like you are looking for a different language.

You could write what you want in Go by having a goroutine lock itself to a thread and then manage a work queue, where the functions in the work queue decide when they want to yield. But I really can't see changing Go to work in that model natively.
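A minimal sketch of the work-queue approach described above (the `queue` type and its methods are invented for illustration): one goroutine pins itself to an OS thread and drains a queue of tasks, and tasks “yield” cooperatively by enqueueing a follow-up task rather than blocking.

```go
package main

import (
	"fmt"
	"runtime"
)

// queue is a toy cooperative scheduler: tasks run to completion on
// one pinned OS thread and may post continuations back onto the queue.
type queue struct{ tasks []func(*queue) }

func (q *queue) post(t func(*queue)) { q.tasks = append(q.tasks, t) }

func (q *queue) run() {
	runtime.LockOSThread() // everything below runs on one OS thread
	defer runtime.UnlockOSThread()
	for len(q.tasks) > 0 {
		t := q.tasks[0]
		q.tasks = q.tasks[1:]
		t(q)
	}
}

func main() {
	q := &queue{}
	q.post(func(q *queue) {
		fmt.Println("step 1")
		// cooperative yield: the rest of the work is re-enqueued
		q.post(func(*queue) { fmt.Println("step 2") })
	})
	q.post(func(*queue) { fmt.Println("interleaved") })
	q.run()
}
```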

shelby3 commented Feb 9, 2018

Frankly it sounds like you are looking for a different language.

Maybe so, but could someone please explain why you want to handicap the use case of Go for UI programming?

There’s apparently been significant demand for transpiling Go to the browser, for example, such that there are already three such transpilers and apparently one or two WASM compilers in development.

You have this CSP goroutine model that could be much better than using Promise style cruft which conflates control structure with functions, and all you need to do is restrict context switching to blocking when the programmer has requested it (and requested the child goroutines run in the same OS thread).

Also afaics the single-threaded model I proposed with better invariants for lockless safety, could also potentially be employed in general cases, not just UI. Where the benefits from improved invariants outweighs the M:N automagic in the runtime for those cases. The programmer could mix-and-match runtime M:N and single-threaded paradigms as fitness dictates.

You could write what you want in Go by having a goroutine lock itself to a thread and then manage work queue, where the functions in the work queue decide when they want to yield.

Isn’t this essentially simulating promises and the lack of fine-grained control over the continuation (which afaics only the plurality of goroutines can provide, unless we essentially perform whole-program transformation to a switch block for emulating continuations, which would require a transpiler and probably be slow and highly unoptimized)? That lack of control causes the cruft and conflation that makes coding horrific. I thought you want Go to be elegant, with code that isn’t unnecessarily convoluted just to work around the lack of a necessary language feature?

aclements (Member) commented Feb 9, 2018

Why would they context switch for reason other than blocking when confined to switching within the same OS thread? Would they switch at non-deterministic points to prevent starvation of the other goroutines in the same OS thread?

This can happen for various reasons. The runtime preempts goroutines after 10--20ms just to maintain fairness and to eliminate some deadlock situations (for example, a goroutine looping on some shared state, waiting for another goroutine to change it; not that a busy wait like that is recommended). Preemption can also happen because of the GC, for example during STW or when the GC needs to scan a goroutine's stack.

So in the single-threaded case, I suppose I’m expecting that the programmer is given manual control over starvation and the goroutines must only block deterministically.

In a sense, you already have this through channels. The fact that the goroutines may run on different OS threads is largely irrelevant (and the Go runtime will try to keep them on the same thread). In fact, I've done explicit co-routine-like scheduling in Go for the purposes of systematic concurrent algorithm model checking and it works quite well: https://github.com/aclements/go-misc/blob/master/go-weave/weave/weave.go. You still get the separate stacks and control flow of each goroutine. You just have to manage the channel blocking so that only one goroutine is unblocked at a time, which isn't very hard to do if you're already reasoning about all blocking behavior.
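The channel-blocking discipline aclements describes can be sketched as a strict token handoff over an unbuffered channel: at most one goroutine is unblocked at any moment, regardless of which OS threads they land on. (The printed strings and values are mine, for illustration.)

```go
package main

import "fmt"

// A single "token" circulates over an unbuffered channel, so only
// one goroutine is runnable at a time -- coroutine-style scheduling
// without any control over OS threads.
func main() {
	token := make(chan int) // unbuffered: strict handoff
	done := make(chan struct{})

	go func() {
		for i := 0; i < 3; i++ {
			n := <-token // blocked until handed the token
			fmt.Println("worker got", n)
			token <- n + 1 // yield back to main
		}
		close(done)
	}()

	for i := 0; i < 3; i++ {
		token <- i * 10 // hand off; blocks until the worker receives
		fmt.Println("main resumed with", <-token)
	}
	<-done
}
```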

ianlancetaylor (Contributor) commented Feb 9, 2018

Maybe so, but could someone please explain why you want to handicap the use case of Go for UI programming?

I don't, but I also do not understand why the feature you are asking for is required for UI programming. To me it sounds like you are asking for greater complexity--as you can see from your reaction to my suggestion for how you could implement it today. Any system that requires goroutines to explicitly yield is more complex and harder to program than the current system.

Also afaics the single-threaded model I proposed with better invariants for lockless safety, could also potentially be employed in general cases, not just UI. Where the benefits from improved invariants outweighs the M:N automagic in the runtime for those cases. The programmer could mix-and-match runtime M:N and single-threaded paradigms as fitness dictates.

Speaking personally, I do not agree. I do not want programmers to have to reason about lockless safety. Experience shows clearly that most programmers get it wrong. I want programmers to use simple building blocks that are clearly correct, and I want those building blocks to be fast enough that nobody avoids them. (You may argue that explicit yielding makes lockless safety easy, but it requires that you carefully avoid yielding while holding an implicit lock, and that is hard to prove as your program changes over time.)

I think we need to take a big step back and understand the problem you want to solve, rather than framing it in terms of a solution. The best I can figure out at the moment is that you want to be able to write safe lockless concurrent programming. Go lets you do that using channels--of course, that works because channels have implicit locks. To be able to write concurrent code without even implicit locks is not a goal of Go; that is much more in Rust's bailiwick.

shelby3 commented Feb 9, 2018

@aclements,

This can happen for various reasons. The runtime preempts goroutines after 10--20ms just to maintain fairness and to eliminate some deadlock situations (for example, a goroutine looping on some shared state, waiting for another goroutine to change it; not that a busy wait like that is recommended).

Ah yes makes sense, but if there’s no other way to achieve what I think is necessary, I would argue the programmer should be allowed to turn off that heuristic if his use case benefits more than the tradeoff of not having that heuristic. Yet see below…

Preemption can also happen because of the GC, for example during STW or when the GC needs to scan a goroutine's stack.

But afaik that preemption doesn’t have any impact because it’s not preempting with another goroutine. The resumption could in theory continue the goroutine that was preempted. Or am I missing something?

In a sense, you already have this through channels. The fact that the goroutines may run on different OS threads is largely irrelevant (and the Go runtime will try to keep them on the same thread). In fact, I've done explicit co-routine-like scheduling in Go for the purposes of systematic concurrent algorithm model checking and it works quite well: https://github.com/aclements/go-misc/blob/master/go-weave/weave/weave.go. You still get the separate stacks and control flow of each goroutine. You just have to manage the channel blocking so that only one goroutine is unblocked at a time, which isn't very hard to do if you're already reasoning about all blocking behavior.

Okay yeah I see now that if the programmer will explicitly control the blocking of all goroutines he wishes to run only one at a time, then whether they run in different OS threads or not is irrelevant.

But here we’re forcing into userland or library code what could and arguably should be in the runtime.

And there could be race conditions because for example when we want what is essentially a select and we want the goroutines to block on I/O, channels and what have you, then the goroutines may be blocked into non-userland code (e.g. for I/O) where our userland blocking scaffolding can’t reach. So then we may have more than one goroutine unblock simultaneously and our blocking scaffolding then has a race condition. I’m just quickly visualizing this in my head, so please do correct me if you find a mistake in my reasoning.


@ianlancetaylor,

I don't, but I also do not understand why the feature you are asking for is required for UI programming.

Because in event handling we have to deal with concurrency because events fire asynchronously. And thus either we need to use some callback model, which conflates control structure with functions, and creates an imperative mess of spaghetti which also is hard to reason about even though it runs in single-thread. Or we can more elegantly use continuations (i.e. goroutines) which unconflate and then I and others can build elegant paradigms on top of that such as FRP which provide complete safety.

In either case, I argue we need protection from willy-nilly preemption which serves no purpose in UI programming and only makes safety much more difficult to achieve.

Any system that requires goroutines to explicitly yield is more complex and harder to program than the current system.

Use cases dictate analysis. In the M:N use cases, I agree with you, as I mentioned by “grok” upthread. But afaics the UI use case cannot be nailed with the same hammer. Go was clearly first prioritized around server programming. We may have to adjust our thinking a bit. Perhaps I am totally wrong though. I am eager to read the replies and continue absorbing new information.

Speaking personally, I do not agree. I do not want programmers to have to reason about lockless safety. Experience shows clearly that most programmers get it wrong. I want programmers to use simple building blocks that are clearly correct,

Which is precisely what I propose FRP could accomplish in the event handling case. In any case, I fail to understand how throwing promisicrufication or willy-nilly preemption at UI coding will make it easier than restricting to a wider safety margin of single-threaded continuations. But I’m not an expert actually on this, so perhaps I’ve overlooked something?

You may argue that explicit yielding makes lockless safety easy

Not necessarily. I can just argue it is less bad than the alternative?

The best I can figure out at the moment is that you want to be able to write safe lockless concurrent programming. Go lets you do that using channels--of course, that works because channels have implicit locks.

I think I may have shown above, in my response to @aclements, that scaffolding can create races. And also I presume it’s a lot of cruft where I posit it could be elegantly handled by the runtime.

To be able to write concurrent code without even implicit locks is not a goal of Go; that is much more in Rust's bailiwick.

Afaics you’re placing my proposal in a taxonomy where it doesn’t belong. I am asking for explicit opportunities for context switching on explicit blocking (no locks). Rust assumes preemption everywhere, and thus must prove exclusive borrowing of objects everywhere. That is very onerous and inflexible. Whereas I am coaxing out the easy safety by setting some simple, non-obtrusive invariants that fit well in some use cases.

There appears to be some cognitive dissonance (maybe on both sides to some extent). I hope we can find a way to communicate our respective understandings and discover our misunderstandings.

ianlancetaylor (Contributor) commented Feb 9, 2018

I am asking for explicit opportunities for context switching on explicit blocking (no locks).

Clearly you can do that by running code in a single goroutine. So my current understanding is that you want goroutines that only switch to other goroutines on some sort of explicit yield. In the absence of an explicit yield, you want that goroutine to continue running. In particular, even if the goroutine blocks, say on a network call, you do not want other goroutines to run.

If that understanding is correct, then presumably this only applies to some set of goroutines. It can't apply to goroutines produced by the Go runtime itself, such as for the garbage collector. So there needs to be some way to define a set of goroutines.

Someone reading or modifying your code has to have a clear understanding of which code can be executed by goroutines in this special set. Presumably the program will be written with the understanding that some data can only be accessed by a single goroutine at a time, because other code somewhere else ensures that all goroutines that access that data are in this special set.

To me that all seems complex. But I may well misunderstand the model you want.

Suppose instead that every time you start a task, whatever that means in your system, you acquire a lock, and every time you explicitly yield, you release the lock. To me that seems explicit and clear. The cooperation is enforced by the explicit locks, rather than by the implicit scheduling locks of the approach described above.
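A minimal sketch of that explicit-lock discipline (the task bodies are invented for illustration): each task holds a shared mutex while running and releases it at every yield point, so the shared state is only ever touched by the lock holder, regardless of which OS thread each goroutine lands on.

```go
package main

import (
	"fmt"
	"sync"
)

// Each "task" runs only while holding mu; Unlock is the explicit
// yield point at which another task may take over.
func main() {
	var mu sync.Mutex
	shared := 0
	var wg sync.WaitGroup

	task := func() {
		defer wg.Done()
		for i := 0; i < 2; i++ {
			mu.Lock() // task runs only while holding the lock
			shared++
			mu.Unlock() // explicit yield point
		}
	}

	wg.Add(2)
	go task()
	go task()
	wg.Wait()
	fmt.Println(shared) // 4
}
```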

shelby3 commented Feb 9, 2018

I am asking for explicit opportunities for context switching on explicit blocking (no locks).

Clearly you can do that by running code in a single goroutine.

No, afaics we can’t get explicit opportunities for context switching in a single goroutine without JavaScruft-like promisicrufication, which I have already stated 3 or 4 times (with a link 2 or 3 times for more details) is an inferior paradigm (because, for example, it conflates control structure with the function construct, thus making functional programming non-composable, among other issues).

I posit we need the explicit forking of continuation control structure via a plurality of goroutines, each containing a plurality of explicit opportunities for context switching on operations that block, such as I/O, event handlers, or channel induced blocking.

Yet in order for that to not break invariants that can help to enable lockless safety (either entirely via for example a FRP library built on top or at least wider safety than otherwise), I posit we need to restrict to a single OS thread.

I think perhaps @aclements may understand my proposal well, because his replies have been accurate and to the point. So perhaps he may be able to write something to help clarify. I’m loath to try to think of a representative code example that could make it all clear at 5am (if that is even possible), and I haven’t slept yet.

So my current understanding is that you want goroutines that only switch to other goroutines on some sort of explicit yield. In the absence of an explicit yield, you want that goroutine to continue running. In particular, even if the goroutine blocks, say on a network call, you do not want other goroutines to run.

That does not at all describe what I was proposing. I stated that I want context switching to occur on blocking operations but not willy-nilly at any non-deterministic heuristic that isn’t knowable at compile-time. The blocking operations such as an I/O call are explicit at compile-time.

In my first reply to you upthread, I even differentiated my proposal from cases where we don’t want to context switch on blocking I/O due to low-latency priorities, such as for high-performance servers on low-latency networks (as in specialized banking or financial-trading networks). I hope that didn’t confuse you and lead you to believe I wanted that for my proposal.

If that understanding is correct, then presumably this only applies to some set of goroutines. It can't apply to goroutines produced by the Go runtime itself, such as for the garbage collector. So there needs to be some way to define a set of goroutines.

You’re still correct that my proposal would apply only to some child goroutines explicitly annotated by the programmer to run in the same OS thread.

Someone reading or modifying your code has to have a clear understanding of which code can be executed by goroutines in this special set. Presumably the program will be written with the understanding that some data can only be accessed by a single goroutine at a time, because other code somewhere else ensures that all goroutines that access that data are in this special set.

You’re correct that in a general use case analysis that it could potentially be a global reasoning depending on how it is utilized by the programmer. But there’s also the potential to build libraries on top of this capability, such as the FRP concept I mentioned wherein the programmer employing the library API never deals with it. It would be 100% safe and the user would only use the API (or in my case if I made a PL that transpiled to Go).

To me that all seems complex. But I may well misunderstand the model you want.

The global reasoning would be complex if you presume that is the way it will be used, but in terms of reasoning about safety of shared state it would in any case be less complex than M:N goroutines that preempt anytime, anywhere (note I do understand that your stance is that by making it more complex then no one will use shared state ever, but that belies the reality of some UI coding for example). Moreover, as I implied this could be used in local situations such as a set of events for a specific dialog box for example and the set of goroutines for that dialog box being used only by the code of that dialog box. And then as I implied by the FRP example, even better if wrapped in a UI components library (even if not FRP) so the programmer employing the library API never deals with these special sets.

Suppose instead that every time you start a task, whatever that means in your system, you acquire a lock, and every time you explicitly yield, you release the lock. To me that seems explicit and clear. The cooperation is enforced by the explicit locks, rather than by the implicit scheduling locks of the approach described above.

Afaics, there are absolutely no locks in my proposal. I wrote “lockless”. I really mean it. Unless somehow my conceptualization is discombobulated (and this wouldn’t be the first time, so it’s plausible it is).

@shelby3


shelby3 commented Feb 9, 2018

@bcmills please justify your downvote of the proposal (at the OP). I haven’t seen your rebuttal to my response to you. I’m thinking perhaps you’re downvoting a proposal which you’ve misconstrued? Why rush to a conclusion about the merits? Do you know everything about this issue that enables you to determine the merits so prematurely? If so, please kindly correct me.

I don’t know why you downvoted. I thought I addressed your points. I think it’s acceptable for me to desire to know why people are downvoting, especially those who challenged me in the thread and were rebutted without further response. I believe in a meritocracy and making rational justifications. If it’s irrational politics or purely based on gut instincts or something like that, then I’m gone from here.

I hope you understand that just like you, I don’t invest my effort here at no cost to myself. I do it sincerely because I think there’s something important that impacts work. I’m extremely conscientious and expect other professionals to be so also.

Also I’m differentiating voting on a comment somewhere in a thread from downvoting the entire proposal (at the OP), which hasn’t yet been shown to be arguable or subjective. Also because I have some inkling that you’re somewhat a VIP around here or something like that. I think it’s quite premature when apparently my proposal has been misconstrued to some degree by some. At this point, we’re still trying to ascertain whether there’s an objective analysis which can show the proposal is unarguably correct or undesirable.

Obviously I am thinking the proposal is quite important and not something to be taken lightly. If I can determine I am incorrect then that will be okay with me. But for the moment, I am thinking it is a make-or-break issue for whether I can choose to adopt Go.

P.S. I remember you from the thread of discussion about adding generics/modules to Go. And you were adamant/indignant about an issue (w.r.t. the undeclared invariants of modularity) that was incorrect. So it had registered in my mind that you were the sort of person who thinks he’s always correct and potentially a difficult person to reach rational consensus with. I’m registering now my devaluation of your vote as not meaningful until you show me your rational reasoning.

META: To those who downvote this post, you can f-off with your irrational, childish nonsense. This is a serious thread, kiddies. Go play your games somewhere else. Girls, this is not an emotional issue. Downvoting based on emotions is ideological suicide. Open source doesn’t mean design by emotional consensus building. I noticed a wave of downvotes to the OP after I added this comment, presumably reacting to this comment as if the merits of the proposal have anything to do with this comment.

@bcmills


Member

bcmills commented Feb 9, 2018

@bcmills please justify your downvote of the proposal (at the OP). I haven’t seen your rebuttal to my response to you.

I think you have fundamentally misunderstood Go's concurrency idioms and design tradeoffs. I think this proposal would add an extreme amount of needless complexity to the language and runtime, in order to enable a code structure that goes directly against the maintainability goals that motivated the language in the first place.

@ericlagergren


Contributor

ericlagergren commented Feb 9, 2018

...you can f-off with your irrational, childish nonsense. This is a serious thread kiddies. Go play your games somewhere else. Girls this is not an emotional issue...

This is not productive, professional, nor polite.

For the record, the thumbs up and down are used to show support of/opposition to an issue or a comment without "me too!" spam.


For the record, I do like the idea insofar as I'm aware it helps GUI programming since certain "tasks" (for lack of a better word) sometimes require being accessed from the same OS thread. Pinning would be useful in that case. I know it's been a complaint some individuals have had.

@shelby3


shelby3 commented Feb 9, 2018

I think you have fundamentally misunderstood Go's concurrency idioms and design tradeoffs

That’s a big claim, with no proof. Where’s the specific arguments?

I think this proposal would add an extreme amount of needless complexity to the language and runtime

Needless? Have you demonstrated how event handling for UI coding, for example, can be done anywhere near as well without my proposal? Are you advocating a duplication of concepts by having promisicrufication in addition to goroutines for modeling asynchronicity?

It may add complexity to the runtime. But do we really know how much complexity? Have we analyzed that?

in order to enable a code structure that goes directly against the maintainability goals that motivated the language in the first place.

A code structure that would be entirely inapplicable to the original M:N focus of servers for Go. Now we’re talking UI code, and afaics you have yet to catch on to the key distinctions and have entirely or significantly misconstrued it (as I expected). Frankly, I view your vote as a lazy one, which is why I complained. I’m interested to dig into any specifics with you if you wish.


This is not productive, professional, nor polite.

Neither is childishly downvoting a request for transparency and rationality rather than lazy guesswork. If you have influence around here, then the conscientious approach is not to shoot down proposals whose specifics you don’t have time to invest in, because your influence is not just a little bit harmful when incorrect. Now he might end up being correct, but he also might not. I would like to see his specific arguments.

Go 2 may be the last major innovation point for Go (the more inertia, the more difficult changes become, as Python 3 exemplified), so it behooves us not to make a mistake on a major issue such as how to handle UI programming. So we should have level-headed debate and complete transparency, not just some flippant rush to a conclusion because some Golanders misconstrue me as threatening the purity/simplicity/consistency of Go’s concepts/religion. Specifics matter. The devil is in the details.

For the record, the thumbs up and down are used to show support for an issue or a comment without "me too!" spam.

That’s perfectly reasonable when used for that purpose.

This issue tracker is governed by the Go code of conduct

For the record, I never follow any such rules. As a matter of principle, I refuse to read them. I do what I think is correct, whatever may come so be it. I think votes can be employed in a way that is uncivil, and I will respond if I think it is necessary. If that disqualifies me from participation, then so be it. Nevertheless I prefer civil and hope others do too.

@ianlancetaylor


Contributor

ianlancetaylor commented Feb 9, 2018

This issue tracker is governed by the Go code of conduct, which you can read at https://golang.org/conduct. Some of the comments here do not follow those guidelines. Please take a breath and keep it civil. Thanks.

@ianlancetaylor


Contributor

ianlancetaylor commented Feb 9, 2018

Afaics, there’s absolutely no locks in my proposal. I wrote “lockless”. I really mean it.

Because your proposal does not reduce the Go scheduler to be entirely single-threaded, I believe that it can not be implemented without the scheduler taking locks, as it normally does. So I believe that there will inevitably be implicit locks.

Why is it important to you that this proposal be lockless?

@ianlancetaylor


Contributor

ianlancetaylor commented Feb 9, 2018

That does not at all describe what I was proposing. I stated that I want context switching to occur on blocking operations but not willy-nilly at any non-deterministic heuristic that isn’t knowable at compile-time. The blocking operations such as an I/O call are explicit at compile-time.

I want to clarify that this is not the case at present. Currently when the compiler sees a call to some function F it has no idea whether F can block or not. Whether a given function call may or may not block is not known anywhere. Whether a given function call does in fact block in practice is only known at run time.

I don't think that matters for the proposal of always running a set of goroutines on a single thread. When one of those goroutines blocks, the scheduler can choose a different one. But I do want to say that in my opinion that proposal will never be adopted as is. There is significant complexity in the scheduler to support the current runtime.LockOSThread; we are extremely unlikely to add significant additional complexity to the scheduler to support keeping a set of goroutines on a single thread.

@shelby3


shelby3 commented Feb 9, 2018

I want to clarify that this is not the case at present. Currently when the compiler sees a call to some function F it has no idea whether F can block or not. Whether a given function call may or may not block is not known anywhere. Whether a given function call does in fact block in practice is only known at run time.

Ty for sharing that datum. But even if that is an invariant that can’t/won’t ever be changed (and maybe rightly so), then if my point to @aclements about race conditions is incontrovertible, implementing my FRP paradigm w.r.t. events would still need my proposal, even though we can’t reason about which operations within a goroutine can block, presuming the wait for each event block is in the OS (and not in channel scaffolding, which the channel scaffolding can’t control). In that way, you get what you want, which is to discourage fine-grained reasoning about lockless safety, and I get what I want, which is a race-free way to construct an FRP paradigm so that UI coding isn’t promisicrufication or some other imperative spaghetti of callbacks.

The FRP idea wouldn’t ever allow any shared state within the plurality of selected goroutines (communication would happen by way of channels, obviously). The single-OS-thread restriction would merely make sure that no two goroutines can complete some blocking I/O call at the same time, each causing an event to fire, thus putting more than one event in the queue. I postulated this queue would be a race condition, because the two events may not be commutative in time (I could elaborate if necessary).

However, that may or may not still be somewhat unsatisfying. I’m contemplating whether if we could reason about context switch opportunities in general this might provide for other programming patterns and libraries which provide data race safety in a lockless design. But the devil is in the details here and I don’t have enough experience to answer this at this time.

I presume you all realize that by making that non-deterministic design choice, you’ve relegated 100% race safety to disallowing any shared mutable state between goroutines, unless something like Rust’s onerous exclusive mutable borrowing is added. Both are onerous extremes. I’m trying to find some invariants for a more flexible middle ground which library writers can leverage.

I don't think that matters for the proposal of always running a set of goroutines on a single thread. When one of those goroutines blocks, the scheduler can choose a different one.

Afaics it matters as stated above.

But I do want to say that in my opinion that proposal will never be adopted as is.

Well given the new datum you offered, I can see why there might be less motivation.

But FRP alone could be a major win I think. And there may be other libraries that could be created even without the explicitness of which operations block (and thus offer a context switch opportunity).

There is significant complexity in the scheduler to support the current runtime.LockOSThread; we are extremely unlikely to add significant additional complexity to the scheduler to support keeping a set of goroutines on a single thread.

You may be correct to suggest I may want a different language. Or that I’m heading down an incorrect path.

How are you guys coding UI? Promisicrufication? Event registration, deregistration, callback imperative buggy hell?

Because your proposal does not reduce the Go scheduler to be entirely single-threaded, I believe that it can not be implemented without the scheduler taking locks, as it normally does. So I believe that there will inevitably be implicit locks.

How can the locks that the scheduler takes diminish the lockless property of a single-threaded set of goroutines? Afaics they can’t. Since the logic depends only on the set being single-threaded, a lock in the scheduler is not a lock in the logical thread space of the said set.

Why is it important to you that this proposal be lockless?

In the general case, it is very unlikely that locks can be proven to be free of deadlocks and livelocks. Locking should be avoided if possible.

@as


Contributor

as commented Feb 9, 2018

A code structure that would be entirely inapplicable to the original M:N focus of servers for Go. Now we’re talking UI code and afaics you have yet to catch on to the key distinctions and have entirely or significantly misconstrued (as I expected).

That reflects on the lucidity and rigor of this proposal rather than the individuals who gave it a thumbs down. It's nice that you're excited about a new framework, but it would be more exciting to the rest of us if you attempt to explain what it is and what problem it solves in a manner that's relevant to Go (and your ultimate proposal) rather than claiming that it's important and subsequently linking to another discussion as supporting evidence.

@shelby3


shelby3 commented Feb 9, 2018

That reflects on the lucidity and rigor of this proposal rather than the individuals who gave it a thumbs down.

Or one could argue it reflects equally on the way 5-year-olds need to be spoon-fed, as not being very efficient. I think we should ban links from the web. And force all research papers to stop writing their code examples in Haskell and write them in Go, because Golanders refuse to be multilingual? I empathize, actually. I also struggled reading those research papers.

I think we have to take what we can get in life. Sometimes it seems I can’t force the people who know all that stuff to present it in the way that is easiest for me. Yet I think it is very important, so I invest the effort, because it matters. All of us are busy. I am here trying to respond and explain details as people respond in the thread.

@bcmills quit after one post and made his decision without really trying to engage me further. I think that exemplifies who is applying effort here.

@zeebo


Contributor

zeebo commented Feb 9, 2018

I believe the discussion would be helped with a small code sample that demonstrates the deficiency, and how the proposal (using whatever API/syntax) would help fix it. That would help focus on the concrete details, and avoid confusion.

@as


Contributor

as commented Feb 10, 2018

not being very efficient

It is not efficient for you, but it is for the audience of the proposal. Hence, it is globally optimal.

I think we should ban links from the web. And force all research papers to stop writing their code examples in Haskell and write them in Go because Golanders refuse to be multilingual?

I can't tell if this is an addendum to the proposal or not.

@golang golang locked as too heated and limited conversation to collaborators Feb 10, 2018

@davecheney


Contributor

davecheney commented Feb 10, 2018

Time for everyone to cool off.

@ianlancetaylor


Contributor

ianlancetaylor commented Feb 10, 2018

Just for the record, an example of a UI written in Go can be seen at golang.org/x/exp/shiny.

@rsc


Contributor

rsc commented Mar 5, 2018

Sorry, but no. This is a whole different scheduler from Go's current design, not just a simple tweak, as @aclements comments start to explain.

@rsc rsc closed this Mar 5, 2018

@golang golang unlocked this conversation May 25, 2018
