
proposal: spec: add support for unlimited capacity channels #20352

Closed
rgooch opened this Issue May 13, 2017 · 67 comments

Comments

@rgooch

rgooch commented May 13, 2017

Proposal: when creating a channel, if the capacity passed to the make builtin function is negative, the channel will have unlimited capacity. Such channels will never block when sending and will always be ready for sending.

Rationale: channels are a natural way to implement queues. When processing streams of data, it is unknown how many data elements will be sent. In some cases, fixed-length channels can lead to deadlocks; these deadlocks can be eliminated with unlimited-capacity channels.

This is how I currently work around the limitation:

func NewQueue() (chan<- interface{}, <-chan interface{}) {
    send := make(chan interface{}, 1)
    receive := make(chan interface{}, 1)
    go manageQueue(send, receive)
    return send, receive
}

func manageQueue(send <-chan interface{}, receive chan<- interface{}) {
    queue := list.New()
    for {
        if front := queue.Front(); front == nil {
            if send == nil {
                close(receive)
                return
            }
            value, ok := <-send
            if !ok {
                close(receive)
                return
            }
            queue.PushBack(value)
        } else {
            select {
            case receive <- front.Value:
                queue.Remove(front)
            case value, ok := <-send:
                if ok {
                    queue.PushBack(value)
                } else {
                    send = nil
                }
            }
        }
    }
}

The disadvantage of this workaround is that it forces all users to perform type assertions, so you lose compile-time type checking. If you want compile-time type checking, you need to re-implement the above code over and over again for each queue type. Unlimited-capacity channels would avoid the need for all that boilerplate.

@bradfitz bradfitz changed the title from runtime: add support for unlimited capacity channels to proposal: language: add support for unlimited capacity channels May 13, 2017

@gopherbot gopherbot added this to the Proposal milestone May 13, 2017

@gopherbot gopherbot added the Proposal label May 13, 2017

@bradfitz


Member

bradfitz commented May 13, 2017

All language changes are currently on hold so don't expect a timely response for this proposal. Others are welcome to discuss, though.

@cznic


Contributor

cznic commented May 13, 2017

Unlimited capacity channels ask for a machine with unlimited memory.

@rgooch


rgooch commented May 13, 2017

Yes, technically this assumes a machine with unlimited memory in the case where there's no bound on work and no dequeuing before running out of memory. That's a narrow subset of workloads. That doesn't invalidate the merit of this proposal.

@cznic


Contributor

cznic commented May 13, 2017

That's a narrow subset of workloads.

This happens in every long-running process where any of the producers outraces, even by a little, the respective consumer(s). That's why channel operations block in the first place. The blocking is not evil; it's the necessary synchronization between producers and consumers.

@DeedleFake


DeedleFake commented Jun 5, 2017

@cznic: Does that mean that you shouldn't be able to expand slices with append() because you might run out of RAM?

Maybe there could be an append-like functionality for channels... Might give more control over it that way.

@rsc rsc added the Proposal-Hold label Jun 5, 2017

@rgooch


rgooch commented Jun 9, 2017

@cznic: Just because you have a long-running process with producers and consumers does not imply that the producers always outrace the consumers. It's a common pattern for a producer to periodically stream a bounded (but unknowable ahead of time) quantity of data; the consumer sometimes falls behind for a while and then either catches up, or the producer stops for a while or permanently.

Regarding the question that @DeedleFake poses: it does seem inconsistent to say "growth through append is OK but growth through channels is unsafe".

My proposal does not force you to accept unbounded memory growth: you can still set the size of channels as before. It simply gives people the option of automatic growth, in a way that is clean and efficient (compared to, say, the workaround I described in my opening post).

@DeedleFake: What syntax would you propose? Would it work seamlessly with the existing syntax for reading, writing and selecting on channels? How exactly would your proposal give more control? With my proposal, if you are concerned about bounding growth - but don't want to set a hard cap - you have the option of checking the length of the channel and applying some application-specific back-pressure.

@rgooch


rgooch commented Jun 20, 2017

@rsc: What will it take to move this from "hold" to active consideration?

@DeedleFake


DeedleFake commented Jun 20, 2017

I'm not sure about syntax; I was mostly thinking out loud. The idea was that with a channel, once len reaches cap, sending blocks, whereas append() increases cap for slices (though it involves reallocation and copying, obviously). There's currently no way to change the capacity of a channel without making an entirely new channel, manually copying everything from the previous channel, and making sure that the new channel replaces the old channel in every goroutine.

@rsc rsc changed the title from proposal: language: add support for unlimited capacity channels to proposal: spec: add support for unlimited capacity channels Jun 20, 2017

@rsc rsc added the Go2 label Jun 20, 2017

@rsc


Contributor

rsc commented Jun 20, 2017

We're not considering any significant language changes today. I'm just organizing.

@networkimprov


networkimprov commented Jun 21, 2017

To my mind, the value of building this into the language (or enabling implementation in a plugin?) is in reducing the expense of a goroutine plus two channels per queue -- much more than an ordinary channel.

I'm working on an app where every recently-connected client needs one of these.

@rgooch


rgooch commented Jun 21, 2017

@rsc: How is a "significant" change to the language defined? This is 100% backwards compatible and is a very minor API tweak.

What's the timeline for Go 2?

@griesemer


Contributor

griesemer commented Jun 21, 2017

@rgooch: "significant" is anything that's more than an obvious bug fix (say, because compilers do something else than what the spec says) or clarification (compilers disagree, spec unclear). That is, anything that's an actual language change.

@rsc will talk about "The future of Go" at GopherCon in Denver (https://www.gophercon.com/schedule, Day 1, Main Stage). If you're not attending, all talks will be recorded, and I'm sure important things will be tweeted as well. That's probably a good talk to listen to regarding a future Go 2.

@kjk


kjk commented Jun 21, 2017

@griesemer

https://www.youtube.com/watch?v=6GMkuPiIZ2k ?

https://github.com/golang/proposal/blob/master/design/18130-type-alias.md changed the syntax of the language and was accepted to 1.9.

This proposal asks for ch := make(chan bool, -1) to create an infinite channel instead of the current behavior of producing a compile-time error or panicking at runtime.

If type aliases satisfy Go1 compatibility guidelines, then so does this proposal.

@networkimprov


networkimprov commented Jun 21, 2017

A buffered channel could dynamically allocate its buffer, instead of malloc on make:

make(chan bool, -200)

A virtually infinite buffer:

make(chan bool, math.MinInt64)

The present alternative to the goroutine-plus-two-channels method is to select on every put and take evasive action in default. Lower mem cost, but higher cpu.

select {
  case ch <- i: // thank goodness
  default: // hm, push i to storage?
}

@kjk, Google engineers were asking for type aliases. Membership has its privileges :-)

@griesemer


Contributor

griesemer commented Jun 21, 2017

@kjk Thanks for the Pirates of the Caribbean reference; much appreciated! The Spec, however, defines the plot, and thus is more than just a guideline... :-)

Type aliases are crucial for refactoring at scale and arguably an oversight in the original design (I've commented on that at length in the type alias discussion). They were discussed in excruciating detail (in fact it will have taken a year from initial discussion to actual release). The proposed feature here, while perhaps desirable, doesn't quite carry the same weight (at least I don't see the respective strong demand from the community).

Again, for reasons discussed elsewhere, we have stopped adding backward-compatible language changes, however small and compatible, for the time being so that they can be considered as a whole. If it's any consolation, there are several small, "obvious", and backwards-compatible language changes that were proposed by the Go Team (myself included), and we also postponed them just the same.

I believe Russ will discuss a plan for next steps at his GopherCon talk, and we will be looking for community input. No matter what, the tree is frozen for such changes for Go 1.9 anyway.

@rsc


Contributor

rsc commented Jun 21, 2017

The limited capacity of channels is an important source of backpressure in a set of communicating goroutines. It is typically a mistake to use an unbounded channel, because you lose that backpressure. If one goroutine falls sufficiently behind, you usually want to take some action in response, not just queue its messages forever. The appropriate response varies by situation: maybe you want to drop messages, maybe you want to keep summary messages, maybe you want to take different responses as the goroutine falls further and further behind. Making it trivial to reach for unbounded channels keeps developers from thinking about this, which I believe is a strong disadvantage.

The point is not that we certainly shouldn't do this - I don't know - but only that the decision is more complex than it may seem at first glance. Yes, language changes right now must be backwards compatible with earlier versions of Go, but we're not going to take every backwards-compatible change. In fact, as I said before, we're not considering significant language changes (or in fact any language changes) today.

@rsc


Contributor

rsc commented Jun 21, 2017

@networkimprov, Type aliases did not happen because "Google engineers were asking for them". Rob, Robert, and I observed a recurring problem in managing large code bases and proposed a solution, to make Go more useful when scaling to large code bases, one of its explicit goals. We definitely did not communicate the motivation well enough in the initial alias proposal, and we tried to (and I think did) do better in the type alias proposal. For more details about the motivation, please see the article and videos linked at #18130.

As I said, we definitely did not communicate the motivation or criteria for significant language changes well enough in the handling of the original alias proposal. My upcoming Gophercon talk is in part an attempt to do that better. If you won't be at Gophercon, don't worry, I will publish a blog post shortly after the talk too.

@networkimprov


networkimprov commented Jun 22, 2017

@rsc, it's fine to provide back-pressure if it can be detected efficiently; Posix has EWOULDBLOCK. From what I gather, select { ... default: } is not similarly inexpensive?

But the real problem with channel buffers isn't that they're not infinite, but that they're not dynamically allocated/allocable. This would seem to be easily fixed. One should be able to instantiate a large number of channels with sizeable buffers and only use some of them without incurring the buffer overhead for all of them.

@alercah


alercah commented Jun 22, 2017

I am inclined to agree with @rsc here on the subject of the proposal.

My first exposure to the message-passing style of concurrency was in Erlang, whose model of communication is similar to, but not the same as, Go's:

  • Rather than channels being first-class objects, every process (Erlang process, not system process) has its own message queue.
  • Queues are unlimited.
  • Messages can be of any type.
  • When receiving a message, the receiving process can bind it via pattern matching, and it will grab the first message from the queue that matches. This effectively allows it to behave as if it has multiple separate queues by having, say, errors conform to a separate pattern so that they can be checked for independently.

Because queues are unlimited, a naively written pipeline can behave quite poorly under load. The queue for the bottleneck process will grow without bound, and there is no easy way to resolve this (one article I found recommended simply putting your entire request pipeline into a single process!). In a worst-case scenario, this causes crashes as the bottlenecked processes' queues grow without bound. (Note: Erlang allows multiple nodes [VMs] running on multiple machines to all participate in one shared runtime and send native messages to one another; this can exacerbate the problem if the bottleneck is on a different machine from the senders, since the sending machine is not constrained by the resources consumed by the bottleneck.)

By contrast, bounded channels provide very useful backpressure. When something gets overwhelmed, the entire pipeline grinds to a halt and stops processing more data. In cases where the feed into the pipeline might still operate, such as an HTTP server, it's still much easier to handle throttling. You can explicitly check for blocking with select and behave differently, such as by rejecting an incoming request with a 503 Service Unavailable.

All an unlimited buffer lets you do is move the bottleneck from a visibly stuck goroutine to an invisibly growing channel, and it will probably get misused in such a way that people do this very often when they really should not. Consider this: if your pipeline is generating input faster than it can output it, then you have a bug, because the queue will grow indefinitely.

The only case I can think of where blocking is not desirable behaviour is if the messages are coming (directly or indirectly) from an external source that there are reasons to drain as quickly as possible. For example, after executing a database lookup that reads a large number of rows which can be freed after they are read, it might be desirable to move the rows to memory quickly so that the query can be released server-side. In this case, an unlimited channel buffer could store the rows. But I don't think a langspace solution to this problem is worth the benefit over a codespace one, especially given the way that this feature would become a trap for inexperienced users of the language.

@rsc


Contributor

rsc commented Jun 22, 2017

... it's fine to provide back-pressure if it can be detected efficiently; Posix has EWOULDBLOCK. From what I gather, select { ... default: } is not similarly inexpensive?

Do you have measurements showing that?

A select with a single case and a default is a special-case fast path in the implementation that - in the case of falling into the default - doesn't even acquire a lock. It should be far cheaper than any system call that might return EWOULDBLOCK.

@ianlancetaylor


Contributor

ianlancetaylor commented Jun 22, 2017

I agree with @networkimprov . Let's change the implementation to dynamically grow large channel buffers as needed. I think that's a good idea anyhow. If a program is running so close to memory limits that it can't allocate a new page for a channel buffer, then it is already in trouble; having the buffer pre-allocated would not save it. If we make that implementation change, I see no need for this language change.

@rsc


Contributor

rsc commented Jun 22, 2017

Disagree - that's still effectively a language change, and we're not doing language changes today.

@networkimprov


networkimprov commented Jun 23, 2017

@rsc, @ianlancetaylor and I are not suggesting a language change (i.e. a negative size in make(chan ...)).

Non-small channel buffers should never be malloc'd on make. There is no need to change the spec; just fix the memory-allocation mistake, perhaps by applying the algorithm from append().

@rsc


Contributor

rsc commented Jul 6, 2017

@AlekSi, thank you very much for the link to http://ferd.ca/queues-don-t-fix-overload.html!

Lazy allocation of channel buffers is a change that would be required in all implementations or else programs would run in some implementations but fail in others. That's the definition of a language change as I use the term. The language where make(chan int, 1e9) allocates nothing is qualitatively different from the language where it allocates 8 GB of memory.

alercah commented Jul 6, 2017

I'm not sure I agree with that being a language change. The Go spec is admittedly vague on the memory model so it is difficult to arrive at an objective conclusion about this, but in most languages, how the compiler or library allocates memory is generally considered an implementation detail. Memory is always going to be system-specific anyway.

For instance, if you were running on a system with a fixed amount of memory (avoiding issues of memory availability and overcommit varying by machine), the behaviour might still be different from compiler to compiler since perhaps one compiler GCs more aggressively and the program relies on this. Or perhaps one compiler chooses a less memory-efficient representation of some structure in exchange for another tradeoff, but this makes enough difference to trigger an OOM on one compiler but not the other. Or perhaps additional instrumentation like profiling causes additional allocations, creating failure points where none existed before.

I have seen nothing in Go documentation or the spec that implies that any operation, including a channel send, must be incapable of triggering an OOM panic. So if the implementation changed to allocate large channel buffers lazily, I would personally attribute that to just being one of the many vagaries of memory allocation in Go, rather than being a true language change.

rsc commented Jul 6, 2017

When I wrote "that's still effectively a language change, and we're not doing language changes today", I meant my definition of language change. You can argue that I meant something else, but I didn't.

The most productive way to move this conversation forward would be to document real production examples, including real code, where the lack of unlimited-capacity channels harms your ability to write or deploy or manage Go systems. Thanks.

ghasemloo commented Jul 11, 2017

I feel the ability to change the capacity of a chan at runtime would be a better solution for this issue.

If a chan becomes full, I think it is more flexible to check that it is full (e.g. using select), perform some actions based on that, and then, if appropriate, explicitly increase the chan capacity, rather than rely on the capacity growing implicitly.

rgooch commented Jul 13, 2017

@networkimprov: your channel buffer pools proposal doesn't address my use case. I have just a couple of channels which are created when a burst stream starts and are closed when the burst finishes. It's unknowable ahead of time how long the burst will be, but the machine is sized so that any reasonable burst will fit.

On the topic of lazy allocation, I don't like APIs which do this when you've specified "I want N entries". Those APIs make it impossible to manage your memory consumption and pre-allocate so that you know (or have less risk) you won't run out of memory. This is why my proposal doesn't change existing behaviour: if you ask for a channel with N entries, you get exactly that many. Instead, it allows you to explicitly say "I don't know how many I'll need, so please allocate as needed".

@rsc: Here's a real production example: code to support adding objects in an objectserver, where new objects have to be stashed, sent upstream for hash collision detection and then committed locally: https://github.com/Symantec/Dominator/blob/master/objectserver/rpcd/lib/addObjectsWithMaster.go
See the ugly code where I have to create a pair of channels and a manager goroutine for each queue. I'd like to be able to eliminate this code with a simple ch := make(chan T, -1)

networkimprov commented Jul 13, 2017

Since you only need a couple of burst-stream channels, the resource cost of the channel goroutine is negligible, and its code is pretty simple. (Looking at your linked source, I imagine you could unify the two new*Queue and manage*Queue functions.)

You'll only advance this proposal with a problem that cries out for a language solution. How much pain is your current solution causing?

rgooch commented Jul 13, 2017

I have code that makes the queues generic: https://github.com/Symantec/Dominator/tree/master/lib/queue
but that then requires runtime type casting which throws away the benefits of compile-time type checking.

As I said at the start of this thread, I have a solution, but it's a bit ugly. With a simple tweak to the API I (and anyone else who wants queues) can throw away a bunch of boilerplate. Russ asked for examples, I provided one. Others can chime in with their examples :-)

faiface commented Aug 5, 2017

Btw, not sure if it was mentioned earlier, but unlimited capacity channels are already possible, like this:

// make an unbuffered channel
ch := make(chan int)
// and here's how we send
go func() {
    ch <- 2
}()

rgooch commented Aug 5, 2017

That's not an unlimited capacity channel. That's a work-around that allows you to buffer writes to a channel. Syntactically it's quite different. In my opening post I explain the problems with work-arounds.

faiface commented Aug 5, 2017

Well, your downsides are that it's a lot of code and it's not statically type-checked, none of which applies to what I wrote. Additionally, channels just happen to behave like queues, but they were never intended for implementing the queue data structure.

rgooch commented Aug 5, 2017

It seems then you've missed the point I was making. Yes, it's a lot of code, and yes, the generic version doesn't have static type checking. I originally stated that I didn't like these work-arounds and that the clean solution is to allow for creating unlimited capacity channels.

A specific disadvantage of your approach is that it creates a goroutine for each object you put on the queue. That costs far more memory, as the memory consumption of a goroutine is typically far greater than the size of an object.

faiface commented Aug 5, 2017

I understand that, but I still say, that channels are not for creating queues, they're for communicating between goroutines. Here's a simple and actually fast code to have a queue in Go:

var queue []T

// push
queue = append(queue, x)

// pop
x, queue = queue[0], queue[1:]

It seems like this code would cause too many allocations, but it's really fast from my experience (I use it for real-time audio processing).


rgooch commented Aug 5, 2017

Well, I disagree about channels not being appropriate for creating queues.

Perhaps in your application the approach you outline is not too expensive. In my application, the queue would continue to grow until the stream burst is fully processed and the job terminates. With my work-around or with unlimited capacity channels (which I expect would be implemented using a linked list), the queue is only ever as large as the number of unconsumed objects. While the worst-case behaviour is similar, the typical behaviour is better with one of my approaches.


faiface commented Aug 5, 2017

I implemented an optimized linked-list queue too (because I was worried that simply using a slice would be too slow), but it turned out not to be faster than the slice approach. I believe it could work for your use-case too, you never know the performance unless you try ;)

(EDIT: just in case of misunderstanding, the slice queue correctly garbage collects the popped elements after reaching capacity and growing, which happens quite often.)


rgooch commented Aug 5, 2017

As I said at the start, I have a work-around that works for me. I have a generic package (using reflect) and I have concrete typed versions of the queue. It is memory efficient and the computational overhead of managing the (concrete typed) queue is negligible; event processing and network traffic dominate. My motivation at this point is code cleanliness and eliminating boilerplate. The approach you appear to actually be using is quite similar to the concrete typed queue I have, although with my approach I have real channels to send/receive to/from, so I can use them in select statements. I consider this preferable as it allows me to follow the preferred patterns in Go for event processing.


networkimprov commented Aug 5, 2017

@faiface

channels just happen to behave like queues, but they were never intended for implementing the queue data structure

A channel is a simple, thread-safe interface to a queue, and that is exactly what @rgooch coded. I don't agree that it's a lot of code, nor a workaround. However if you have a LOT of queues, this goroutine-plus-two-channels method is memory hungry relative to an ordinary channel with a relatively large static buffer. Hence proposal #20868.


swizzley commented Nov 9, 2017

I +1 this not because I want unlimited channel buffer length, but because I want unlimited size to the data passed through the channel. I'm currently at a point where I cannot add 1 more field to the JSON I'm passing through the channel, with a buffer length of 1, without a panic. This may be the wrong discussion for this, but it's the closest one I've found to addressing this issue.

fatal error: newproc: function arguments too large for new goroutine


networkimprov commented Nov 9, 2017

I want unlimited size to the data passed through the channel

There must be a reason you're not sending pointers to json strings on the channel?


randall77 commented Nov 9, 2017

@swizzley: Channel elements are restricted to 64KB. Looking at the code I see no real reason for this, we could change it to 4GB with no trouble.

That said, you probably don't want to be sending large items by value. As @networkimprov said, passing by pointer is much more efficient: fewer copies, and less work to do while holding the channel lock.

@swizzley, If you'd like to pursue raising the max element size, please open a separate issue. Let's keep this one about number of elements, not their size.


ianlancetaylor commented Feb 13, 2018

If we had generic types, unlimited channels could be implemented in a library with full type safety. A library would also make it possible to improve the implementation easily over time as we learn more. As various people said above, putting unlimited channels into the language seems like an attractive nuisance; most programs do need backpressure.

Closing on the assumption that we will get some type of generics. We can reopen if we decide that that is definitely not happening.


networkimprov commented Feb 13, 2018

@ianlancetaylor "channels ... implemented in a library" implies a forthcoming mechanism to allow third-party channel implementations that are accessible with <- etc.

That would be a most welcome addition... is it true?


ianlancetaylor commented Feb 14, 2018

@networkimprov I think that kind of feature, which I usually call operator methods a la C++, would be fairly unlikely. I'm not aware of any current proposals for that.


networkimprov commented Feb 14, 2018

Then how could a library, even given generics, "implement" unlimited channels? It would either have to reinvent the channel API and supporting runtime mechanism, or simply encapsulate the expensive a-goroutine-plus-two-channels scheme discussed above.

A scheme to allow third-party channel implementations would not look like C++ operator overloading per se. Such a channel implementation would have to provide a specific set of methods, as we do for an interface.


griesemer commented Feb 14, 2018

@networkimprov It seems to me that any system that needs to handle a lot of incoming messages to a channel faster than they can be processed directly would need a dedicated goroutine to drain that channel quickly; and possibly store the data for further processing (e.g., sending to a slower channel). Using an unlimited channel for this scenario is simply shifting the problem elsewhere (to the channel's implementation in the runtime). I'm not convinced that making the channel implementation more complex for this (I suspect) rare scenario is justified. It seems better to handle that via a dedicated library. If there is a form of genericity, that library can also be type-safe. I don't see why there's a need for operator overloading.

Such a library should do well in cases of very fast, "bursty" messages. A large enough buffered channel should be able to absorb bursts while a fast dedicated goroutine drains the channel into a ring buffer from which the messages are delivered at a slower pace to the final consumer of the messages. That ring buffer will need to be efficiently implemented, and will need to be able to grow efficiently (irrespective of size) and that will require some careful engineering. Better to leave that code to a library that can be tuned as needed than baking it into the runtime (and then possibly being at the mercy of release cycles).

If you disagree, it would be useful for us (and everybody else) to have a concrete scenario (experience report) showing how such an approach is not sufficient.


networkimprov commented Feb 14, 2018

@griesemer I'm not actually a proponent of unlimited channels. The use case that concerns me is a large group of channels, any of which can see heavy traffic for limited periods. For that, I proposed #20868.

Re third-party channel implementations, which we might call "channel plugins," they would be useful where you wish to select on a non-channel I/O source along with some channels. However one can accomplish this today with a goroutine that links the I/O with a channel, and I assume that's not very expensive since no one has proposed channel plugins yet :-)

BTW, my great gratitude to you and your colleagues for this wonderful language! <3


ghasemloo commented Feb 27, 2018

For queuing this might be of interest:
https://apenwarr.ca/log/?m=201708#14


rgooch commented Mar 5, 2018

That's not really relevant to the use-case that I described at the start of this thread.


magiceye commented Mar 12, 2018

Like exposing async I/O operations, an unlimited channel also contradicts the go keyword. A limited channel buffer is fine because the user can create as many goroutines as desired. Blocking on a channel send does no harm to the process; CPU resources are soon consumed by other goroutines without incurring a context-switch performance penalty.

If you want async I/O or unlimited channels, maybe you should consider another language. Adding them would make Go's key feature (writing code in a synchronous style that has the same effect as async code in languages without language-level coroutines) meaningless.

golang locked as resolved and limited conversation to collaborators Mar 12, 2018
