proposal: spec: add channel rclose #44407

Open
josharian opened this issue Feb 19, 2021 · 22 comments

@josharian commented Feb 19, 2021

Abstract

Go's built-in close function is designed for sender-centric channels. close means "no more sends are coming".

Some problems are well-solved by designs with receiver-centric channels, such as a work queue that serializes or coordinates access to a resource.

I propose the addition of a new built-in rclose function. rclose means "no more receives are coming".

Background

Work queues are a common way to coordinate access to a resource. Producers of work send work on a channel, and the worker(s) receive from the channel and do the work.

func worker(c chan T) {
    for work := range c {
        process(work)
    }
}

func produceWork(c chan T, work T) {
    c <- work
}

What happens when we want to close the resource and associated worker(s)?

We have several options now, none of them great.

  • Request that all producers shut down. Block until all producers have stopped. This introduces extra coupling and coordination between parts of the system. Tight coupling makes such systems harder to reason about, harder to test, harder to extend, and easier to deadlock.
  • Add synchronization around every send on the channel to ensure that the resource is still open. This introduces extra contention around sends. And it seems strange to have to add synchronization around a synchronized resource.
  • Use a select for every send on the channel, with one of the other cases proceeding whenever the resource is closed. Selects are expensive and make code harder to reason about.
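For concreteness, here is a minimal sketch of the third option; the done channel and its wiring are illustrative, not part of this proposal:

func produceWork(c chan<- T, done <-chan struct{}, work T) (sent bool) {
    select {
    case c <- work:
        return true
    case <-done:
        // The resource has shut down; drop the work.
        return false
    }
}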

I believe that it is this problem, or some variant of it, that has led to a long history of people asking for changes to the semantics of close: that close be idempotent, that sends on a closed channel not panic, that there be some way to detect whether a channel is closed, and so on.

Those requests for changes are met with the answer: close means "no more sends are coming".

I propose a different answer, that Go also provide a receiver-oriented close, or rclose.

Proposal

Paralleling the current spec for close, I propose:

For a channel c, the built-in function rclose(c) records that no more values will be received from the channel. It is an error if c is a send-only channel. Receiving from or rclosing an rclosed channel causes a run-time panic. rclosing a nil channel also causes a run-time panic. After calling rclose, any buffered values are discarded, and any further send operations will proceed without blocking by discarding the sent value.

c := make(chan int, 2)
c <- 1
rclose(c) // the value 1 is discarded
c <- 2    // the send succeeds, discarding the value
<-c       // panics
rclose(c) // panics

I further propose that we add a send expression analogous to the multi-valued receive expression. The send expression returns a single untyped boolean value that indicates whether the channel is rclosed.

c := make(chan int, 1)
closed := c <- 1 // closed == false
rclose(c)
closed = c <- 1 // closed == true

The addition of a send expression will require scattered changes to the spec, but I believe that they are all straightforward.

To the Send statements section of the spec, I propose we insert the sentence:

A send on an rclosed channel can always proceed immediately, discarding the value.

To the Receive operator section of the spec, I propose we insert the sentence:

A receive on an rclosed channel proceeds by causing a run-time panic.

One more detail remains. What happens when you rclose a closed channel or close an rclosed channel? I propose that both panic. They both represent a confusion over who is in charge of the lifetime of the channel, the sender or the receiver.

Rationale

rclose makes managing our shutdown straightforward. We ensure that all workers have stopped. We rclose all channels. Producers may detect that the resource is now unavailable by using a send expression. Or they may be signalled to close in some other way and allowed to close on their own schedule; stragglers may write into the work channel without harm and without synchronization.
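A minimal sketch of that shutdown flow in the proposed (hypothetical) syntax; the function names are illustrative:

func shutdownResource(c chan T) {
    // All workers have already stopped receiving at this point.
    rclose(c) // any buffered values are discarded
}

func produceWork(c chan T, work T) {
    if rclosed := c <- work; rclosed {
        // The resource is gone; the value was silently discarded,
        // so this producer can wind down on its own schedule.
    }
}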

The behavior is entirely analogous to close, but geared for a different program design.

Compatibility

This change is fully backwards compatible.

Language complexity impact

rclose is a bit weird. However, the strong analogy to close should make it easy to teach and learn about. In every case, "what will rclose do?" is answered "the same as close, but swapping send and receive".

Implementation

I am sketchy on many of the details here. At a minimum this will impact:

  • the spec: add all relevant concepts
  • the compiler: add rclose, add send expressions, adjust channel operations and optimizations
  • the runtime: add a bit to hchan to represent whether a channel is rclosed, adjust all channel operations
  • go/* packages: add send expressions, add typechecker support
  • docs: the tour, all educational resources

Other

Thanks to @bcmills for sparking the idea and helping to clarify it.

cc @zx2c4 who will recognize here a way to avoid the use of runtime.SetFinalizer in wireguard-go

@gopherbot added this to the Proposal milestone Feb 19, 2021
@ianlancetaylor added this to Incoming in Proposals Feb 19, 2021

@ianlancetaylor commented Feb 19, 2021

Interesting idea. I note that a natural way to use this will be

    c := make(chan error)
    go func() {
        defer close(c)
        for _, w := range work() {
            if !(c <- w()) {
                return
            }
        }
    }()
    defer rclose(c)
    for err := range c {
        if err != nil && stopImmediately(err) {
            return err
        }
    }

This seems fairly natural to me but it means that the channel will be passed to both close and rclose. So I'm not persuaded that that case should panic.

@peterbourgon commented Feb 19, 2021

Channels are simple primitives: one-way conduits of information by design. That intuitive constraint is both reflected in and reinforced by the fact that only their senders can close them. This proposal makes some already-possible things easier, at the expense of subverting an essential property of a core language primitive. IMO the costs far outweigh the benefit.

edit: removed unnecessary extra stuff

@mvdan commented Feb 19, 2021

I fully agree that writing receiver-centric channels right now is painful. I think I've had to debug deadlocks or other concurrency bugs every time I've tried to use channels that way.

That said, I worry that channels are already hard to grasp today, and that this change would only make them trickier to fully understand. I imagine that's what @peterbourgon means above.

They both represent a confusion over who is in charge of the lifetime of the channel, the sender or the receiver.

Thinking out loud, I wonder if this could be a property of the channel itself, similar to how a channel can be receive-only or send-only. Perhaps forbid close on a receive-only channel and rclose on a send-only channel via vet?

Another, more aggressive change could be to specify at make time who is in charge of closing the channel - the senders, or the receivers. An example, and please ignore the syntax and type system changes as I haven't given them much thought:

c1 := make(chan int, 1) // only writers can close it, via 'close'
c2 := make(rchan int, 1) // only readers can close it, via 'rclose'

@zx2c4 commented Feb 19, 2021

cc @zx2c4 who will recognize here a way to avoid the use of runtime.SetFinalizer in wireguard-go

I'm trying to see in more detail how this actually is supposed to replace your SetFinalizer trick in wireguard-go. You came up with this so of course you know how it works, but for the others in this thread, I'll first describe this trick.

An AutodrainingQueue is one that, upon GC, gives all of its items back to a sync.Pool, after waiting to acquire some lock:

type AutodrainingQueue struct {
	c chan *Element
}

func (device *Device) NewAutodrainingInboundQueue() *AutodrainingQueue {
	q := &AutodrainingQueue{
		c: make(chan *Element, QueueSize),
	}
	runtime.SetFinalizer(q, device.flushQueue)
	return q
}

func (device *Device) flushQueue(q *AutodrainingQueue) {
	for {
		select {
		case elem := <-q.c:
			elem.mutex.Lock()
			device.bufferPool.Put(elem.buffer)
			device.elementPool.Put(elem)
		default:
			return
		}
	}
}

A worker does things with this autodraining pool, using a range statement, and terminates when it gets a nil item:

func (peer *Peer) SomeGoRoutineWorker() {
	for elem := range peer.autodrainingQueue.c {
		if elem == nil {
			return
		}
		// Wait for the lock
		elem.mutex.Lock()
		
		// Do some expensive piece of work
		// ...
		
		// Cleanup
		peer.device.bufferPool.Put(elem.buffer)
		peer.device.elementPool.Put(elem)
	}
}

Then, in order to actually use the auto-draining part, a shutdown routine looks like:

func (peer *Peer) ShutItAllDown() {
	peer.autodrainingQueue.c <- nil
	peer.autodrainingQueue.c = nil
}

The problem with applying rclose() to the above scenario is that, while it may make it marginally easier to shut down the receiver thread, it means that all of the existing items in there are orphaned, with no way of recovering them in order to cleanup, take locks, return to pools, and so forth. It makes the contents of the channel inaccessible.

It seems like the conclusion one might draw from that exercise is that while the rule of thumb for close() is "the sender should always be the one calling close()", the rule of thumb for rclose() is "the receiver should always be the one calling rclose()".

But if we try to apply that rule of thumb to this scenario, we quickly run into more problems. Because we need a draining step, then it means the receiver must do the draining. That means we still need the nil terminator sentinel. So, the loop changes to something like this:

func (peer *Peer) SomeGoRoutineWorker() {
	defer func() {
		for {
			select {
			case elem := <-peer.autodrainingQueue.c:
				elem.mutex.Lock()
				peer.device.bufferPool.Put(elem.buffer)
				peer.device.elementPool.Put(elem)
			default:
				rclose(peer.autodrainingQueue.c)
				return
			}
		}
	}()
	for elem := range peer.autodrainingQueue.c {
		if elem == nil {
			return
		}
		// Wait for the lock
		elem.mutex.Lock()
		
		// Do some expensive piece of work
		// ...
		
		// Cleanup
		peer.device.bufferPool.Put(elem.buffer)
		peer.device.elementPool.Put(elem)
	}
}

Now the draining part that used to be in the SetFinalizer function fires before the reader returns and before this reader calls rclose() (following the new rule of thumb).

That would seem to "solve" the problem I raised above with not being able to recover the rclose()'d elements, but now it introduces yet another problem: all the while we're trying to drain the pool, senders can still write into the channel, potentially keeping that reader draining the queue indefinitely. This happens because the reader must only call rclose() when it's done with all possible reads, which includes the reads that happen while draining.

So I don't quite see how rclose() helps replace your SetFinalizer trick. Maybe you or others here have ideas for new paradigms and patterns that rclose() can unlock that would help with this.

On the other hand, a change to your proposal that would make this situation a million times easier and more useful (for this use case, at least) would be if rclose() still allowed reads on old items, but didn't allow writes on new items. Put that way, maybe it'd be better called wclose() or shutdown() or something more relaxed, because at that point, we're basically talking about the semantics of the existing close() function, except rather than panic on write it'd just return a boolean on read/write success, per this spec. Or, rather than introduce a new keyword and new closing semantics, writing into a channel could be made to return a boolean rather than panicking when it's used in the new syntax construct you suggested -- didSend := c <- e. That sounds a lot simpler than what you're proposing, and potentially more useful per the example above.

(This is actually already sort of possible with evil tricks like:)

func WriteToChan(c chan int, v int) (didWrite bool) {
	defer func() {
		if e := recover(); e != nil {
			err, isRuntimeError := e.(interface {
				error
				RuntimeError()
			})
			if isRuntimeError && err.Error() == "send on closed channel" {
				return
			}
			panic(e)
		}
	}()
	c <- v
	return true
}

(Which can be made generic: https://go2goplay.golang.org/p/snp2yxoDouY )

@bcmills commented Feb 19, 2021

One more detail remains. What happens when you rclose a closed channel or close an rclosed channel?

close and rclose represent two orthogonal invariants in the code: one about senders, and one about receivers. So they should be orthogonal: you can close an rclosed channel (just as you can send to one), and you can rclose a closed channel (just as you can receive from one).

@bcmills commented Feb 19, 2021

We have several options now, none of them great.

  • Request that all producers shut down. Block until all producers have stopped. This introduces extra coupling and coordination between parts of the system. Tight coupling makes such systems harder to reason about, harder to test, harder to extend, and easier to deadlock.

FWIW, in my experience with concurrent Go code I have pretty much always found this to be the correct approach, and I don't see how the presence of rclose would change that.

If we block until all producers have stopped, then we can test the API without interference: at the end of the test, we shut down the API and wait for the producers and consumers to finish. Then we know that those goroutines will not interfere with the next test or benchmark — they won't consume unexpected cycles, won't panic in a way that attributes the failure to the wrong test case, and won't even be around in the goroutine traces if the test fails.

With that approach we can also check for leaked goroutines: before we start a new set of producers and consumers for the next test, we can dump the goroutines in the process and either use them as a baseline for comparison or ensure that none of the running goroutines are our producers or consumers. If there is a leak, the goroutines will still be around, and if there is no leak then they will already be gone.
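(For concreteness, a minimal sketch of that kind of leak check; the helper and the worker symbol it looks for are illustrative, not from this thread:)

import (
	"runtime"
	"strings"
	"testing"
)

// checkNoWorkers fails the test if any of our worker goroutines are
// still running after shutdown.
func checkNoWorkers(t *testing.T) {
	t.Helper()
	buf := make([]byte, 1<<20)
	n := runtime.Stack(buf, true) // true: dump stacks of all goroutines
	if strings.Contains(string(buf[:n]), "mypkg.worker") {
		t.Fatal("worker goroutine still running after shutdown")
	}
}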

However, if the API relies on rclose, we cannot write tests that do not interfere: because we didn't block until the producers stopped, we have no bound on how long they will continue to exist. The best our leak-test can do is to keep looping (and keep checking for leaks) until the test hits its deadline, at which point the user will have to decide whether the bug was due to a leak or an inappropriate deadline.

This same problem occurs in production code: if the API blocks until producers have stopped, then the caller can set firm bounds on its live-memory footprint. After the producers have stopped, their memory can be garbage-collected ~immediately, so O(N) sequential calls result in an O(1) memory footprint, not O(N). On the other hand, if the producers are allowed to persist indefinitely, the overall memory footprint can scale as high as O(N) (depending on the scheduler and the garbage collector).

@bcmills commented Feb 19, 2021

In summary: I think rclose is coherent as a language feature, but I'm not sure when I would ever actually want to use or recommend it.

I think we already have too many patterns for users to consider when writing concurrent Go code. They already have to decide among 1-buffered mutex-channels and N-buffered semaphore-channels (my preferred approach), state-owning goroutines and N-worker pools, sync.Mutex and sync.Cond, etc.
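(For readers unfamiliar with the first two, minimal sketches; the names and sizes are illustrative:)

mu := make(chan struct{}, 1)  // 1-buffered channel used as a mutex
mu <- struct{}{}              // lock
// ... critical section ...
<-mu                          // unlock

sem := make(chan struct{}, 8) // N-buffered channel used as a semaphore (N = 8)
sem <- struct{}{}             // acquire a slot
// ... do bounded work ...
<-sem                         // release the slot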

Since the best practices with what we have so far are still evolving, I would rather not throw another primitive operation into the mix at this point.

@mvdan commented Feb 19, 2021

@bcmills are you collecting those concurrency best practices, including good patterns for using channels, somewhere that's canonical and maintained? I've seen talks and occasional posts, but nothing that I can reliably point people towards.

There's https://golang.org/doc/effective_go#concurrency, but it seems a bit short and the page might be frozen altogether. It has barely had a dozen minor updates in the past three years, so it's clearly not evolving significantly.

I know this is somewhat off-topic for this proposal, but if we decide to reject it in favor of "continue evolving the best practices before we add more primitives", I argue that we should be documenting those best practices very well as they evolve.

@bcmills commented Feb 19, 2021

@mvdan, I haven't published anything since my GC18 talk, although @empijei did some good blog posts on the more modern idioms: https://blogtitle.github.io/categories/concurrency/

(But I don't know of any “canonical” or “maintained” repository of best-practices for concurrency, no.)

@josharian commented Feb 19, 2021

@ianlancetaylor wrote:

it means that the channel will be passed to both close and rclose. So I'm not persuaded that that case should panic.

@bcmills wrote:

close and rclose represent two orthogonal invariants in the code: one about senders, and one about receivers. So they should be orthogonal.

Both of these seem plausible, but allowing a channel to be both closed and rclosed raises a question that I'm unsure how to answer. The spec would contain both of these statements:

A send on a closed channel proceeds by causing a run-time panic.

A send on an rclosed channel can always proceed immediately, discarding the value.

What should a send on a closed and rclosed channel do? (Ditto for receive.)

@josharian commented Feb 19, 2021

To summarize the concern from @zx2c4, I proposed:

After calling rclose, any buffered values are discarded, and any further send operations will proceed without blocking by discarding the sent value.

But discarding any buffered values means that per-value invariants cannot be easily preserved. (The send expression allows preserving invariants for later send operations.)

One possibility is for rclose to return a []T containing all the buffered values:

c := make(chan int, 2)
c <- 1
c <- 2
s := rclose(c) // s == []int{1, 2}

@bcmills commented Feb 19, 2021

allowing a channel to be both closed and rclosed raises a question that I'm unsure how to answer. The spec would contain both of these statements:

A send on a closed channel proceeds by causing a run-time panic.

A send on an rclosed channel can always proceed immediately, discarding the value.

What should a send on a closed and rclosed channel do? (Ditto for receive.)

A send on a closed channel should panic regardless of whether it is rclosed. If you promise that there will be no more sends and then attempt to send anyway, you have broken that promise regardless of whether there are any receivers around to observe it.

A receive on an rclosed channel should panic regardless of whether it is closed, for the same reason: if you promise that you won't receive any further, then you must not attempt a further receive after that point.

(Both of these panics detect broken invariants in the program, and the invariant is broken regardless of what other invariants exist.)
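(To illustrate, in the proposed hypothetical syntax:)

c := make(chan int, 1)
close(c)  // promise: no more sends
rclose(c) // promise: no more receives; allowed even though c is closed
// Per the rules above:
//   c <- 1 would panic: send on closed channel, regardless of rclose
//   <-c    would panic: receive on rclosed channel, regardless of close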

@josharian commented Feb 19, 2021

Request that all producers shut down. Block until all producers have stopped.

FWIW, in my experience with concurrent Go code I have pretty much always found this to be the correct approach

I guess I respectfully disagree right now. I appreciate the reasons you offered for the upsides of this tight coupling.

One downside is an increase in deadlocks. Maybe we just need better examples to work from, or documentation, or better libraries and abstractions to use (hello, generics). But experienced folk do get tripped up.

Another downside is that the work queue must be aware of all producers, in order to be able to coordinate shutdown. Either this means hard-coding them or having some kind of registration system with callbacks, which is not the nicest API to code against. (And lends itself to deadlocks.)

Requiring tight coupling makes it hard to retrofit in a work queue without changing an existing API.

As a general engineering/systems principle, systems that are tightly coupled tend not to degrade gracefully. One could argue that that is beneficial, because you can test for and detect those failures early. (I believe you did argue that, but I'm being careful not to put words in your mouth.) But I'm generally pessimistic about our ability to detect all failures, and would prefer graceful degradation when available.

I am concerned that this is the kind of conversation that's difficult to have over text, though, so I don't anticipate pushing back much more on this front.

@bcmills commented Feb 19, 2021

As a general engineering/systems principle, systems that are tightly coupled tend not to degrade gracefully. One could argue that that is beneficial, because you can test for and detect those failures early. (I believe you did argue that, but I'm being careful not to put words in your mouth.)

I think you interpreted my point as I intended. 🙂 (I do believe that in general it is better to design systems to fail visibly than to degrade gracefully, precisely because you can detect and fix those failures.)

At the very least, I think it's important for software (especially command-line and server software) operating in a degraded mode to visibly report any problems that result in degraded operation, and I don't see a way for a program that relies on rclose to detect and report unexpected latency or leaks in the shutdown paths. I suspect that a mechanism for that sort of reporting would need a much more complex API, and would thus be better structured as a generic library than a primitive channel operation.

@zx2c4 commented Feb 19, 2021

Any thoughts on breaking part of this proposal out into a new one, adding only the

didWrite := c <- v

syntax, in order to determine whether c is closed? The idea would be:

	c := make(chan int, 20)
	didWrite := c <- 8
	fmt.Println(didWrite) // true
	close(c)
	didWrite = c <- 9
	fmt.Println(didWrite) // false

As I pointed out in #44407 (comment), this is already possible with ugly hacks: https://go2goplay.golang.org/p/snp2yxoDouY. But this would provide a performant way to do that, using the elegant syntax suggestion of this proposal.

I think this would give most of the benefits of rclose() without the drawbacks I mentioned, and without adding too much additional complexity to the language.

@DmitriyMV commented Feb 19, 2021

FWIW I think

didWrite := c <- v

would be much more consistent with current behavior.

As much as I don't like writing wrappers around channels, the semantics around rclose look weird to me. Since the writer has no way of knowing that the channel was "read" closed, it would introduce an additional check before closing the channel in the writer goroutine.
Disregard this, I missed the part about

c := make(chan int, 1)
closed := c <- 1 // closed == false
rclose(c)
closed = c <- 1 // closed == true

Still - can't we just add "checked write"?

@ianlancetaylor commented Feb 19, 2021

A while back, before Go 1, we used to permit code like didWrite := c <- 1. The spec change removing that was https://golang.org/cl/4013045. But it didn't test whether the channel was closed, it just did a non-blocking send, equivalent to using a select statement with a default clause.
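That is, the old form behaved like this non-blocking send:

didWrite := false
select {
case c <- 1:
	didWrite = true
default:
}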

I agree that if we were to adopt rclose it would make sense for the sender to have some way to test whether the channel was r-closed. But I don't think that argument applies to close. The sender should always know whether a channel has been closed. Otherwise there is confusion on the sending side.

@zx2c4 commented Feb 19, 2021

I agree that if we were to adopt rclose it would make sense for the sender to have some way to test whether the channel was r-closed. But I don't think that argument applies to close. The sender should always know whether a channel has been closed. Otherwise there is confusion on the sending side.

Not quite... With the proposed syntax for seeing atomically whether the channel is closed, a sender need not always know that the channel is already closed, which enables the channel to be closed by the receiver! That leads to nice designs without strong coupling. Building on #44407 (comment), here's what that amounts to:

The sender (multiple in multiple goroutines) tries to put an item, and if it's already been closed, it cleans up the element:

func (peer *Peer) SendIt(elem *Element) {
	if !(peer.c <- elem) {
		peer.device.bufferPool.Put(elem.buffer)
		peer.device.elementPool.Put(elem)
	}
}

The receiver then processes everything in its queue:

func (peer *Peer) SomeGoRoutineWorker() {
	for elem := range peer.c {
		// Wait for the lock
		elem.mutex.Lock()
		
		// Do some expensive piece of work
		// ...
		
		// Cleanup
		peer.device.bufferPool.Put(elem.buffer)
		peer.device.elementPool.Put(elem)
	}
}

And finally, some control event elsewhere can then shut down this entire thing with a simple call to close, with no panics:

func (peer *Peer) ShutItDown() {
	close(peer.c)
}

Alternatively, if it's desirable to never do more work than necessary, the shutdown can write a sentinel value before shutting down:

func (peer *Peer) ShutItDown() {
	peer.c <- nil
	close(peer.c)
}

The receiver then drains when it hits the sentinel:

func (peer *Peer) SomeGoRoutineWorker() {
	defer func() {
		for elem := range peer.c {
			peer.device.bufferPool.Put(elem.buffer)
			peer.device.elementPool.Put(elem)
		}
	}()
	for elem := range peer.c {
		if elem == nil {
			return
		}
		// Wait for the lock
		elem.mutex.Lock()
		
		// Do some expensive piece of work
		// ...
		
		// Cleanup
		peer.device.bufferPool.Put(elem.buffer)
		peer.device.elementPool.Put(elem)
	}
}

Or in the sentinel case, that close(peer.c) could be moved from the shutdown function to the worker function, without hazard:

func (peer *Peer) ShutItDown() {
	peer.c <- nil
}

The receiver then drains when it hits the sentinel:

func (peer *Peer) SomeGoRoutineWorker() {
	defer func() {
		for elem := range peer.c {
			peer.device.bufferPool.Put(elem.buffer)
			peer.device.elementPool.Put(elem)
		}
	}()
	for elem := range peer.c {
		if elem == nil {
			close(peer.c)
			return
		}
		// Wait for the lock
		elem.mutex.Lock()
		
		// Do some expensive piece of work
		// ...
		
		// Cleanup
		peer.device.bufferPool.Put(elem.buffer)
		peer.device.elementPool.Put(elem)
	}
}

The fact that this simple addition to sending to closed channels -- returning false when they're already closed -- opens up multiple ways to safely solve a problem that was previously extremely verbose and difficult is a telling sign. Basically, channel hazards disappear this way.

@peterbourgon commented Feb 24, 2021

If a chan is a field of a struct, and its senders and receivers are methods of that struct, then it seems to me that everything is already totally coupled, and shutdown can already be initiated from any direction with more-or-less equivalent verbosity and complexity. Or am I missing something in this example?

@peterbourgon commented Feb 24, 2021

@josharian

Another downside is that the work queue must be aware of all producers, in order to be able to coordinate shutdown.

I think it just means that channels are too low-level to serve this use case directly. That's fine! Wrapping a channel in a type that provides an API supporting arbitrary producers is straightforward.
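A minimal sketch of such a wrapper (the type, its methods, and the use of the generics draft are mine, not from this thread):

import "sync"

type Queue[T any] struct {
	mu     sync.Mutex
	c      chan T
	closed bool
}

// TrySend reports whether the value was accepted; it is safe to call
// concurrently with Close.
func (q *Queue[T]) TrySend(v T) bool {
	q.mu.Lock()
	defer q.mu.Unlock()
	if q.closed {
		return false
	}
	q.c <- v // note: the mutex is held while the send blocks
	return true
}

// Close marks the queue closed so later TrySend calls return false,
// then closes the underlying channel for the receivers.
func (q *Queue[T]) Close() {
	q.mu.Lock()
	defer q.mu.Unlock()
	if !q.closed {
		q.closed = true
		close(q.c)
	}
}

The tradeoff of holding the mutex across a blocking send is exactly the one raised in the replies below.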

@DeedleFake commented Feb 24, 2021

Wrapping a channel in a type that provides an API supporting arbitrary producers is straightforward.

And will hopefully become more straightforward with generics, although it will have some limitations without the ability to hook into select.

@josharian commented Feb 24, 2021

Wrapping a channel in a type that provides an API supporting arbitrary producers is straightforward.

...if you either hold a mutex during the channel send (source of contention, risks deadlocks, doubles scheduler interaction when channel is full) or send as part of a select statement (expensive).

See the Background section of the proposal, beginning: "We have several options now..."
