
proposal: context: add Merge #36503

navytux opened this issue Jan 10, 2020 · 22 comments


@navytux navytux commented Jan 10, 2020

(This proposal is an alternative to #36448. It proposes to add context.Merge instead of exposing a general context API for linking third-party contexts into the parent-children tree for efficiency.)

The current context package API provides primitives to derive a new context from one parent - WithCancel, WithDeadline and WithValue. This functionality covers many practical needs, but not merging - the case where it is necessary to derive a new context from multiple parents. While it is possible to implement merge functionality in a third-party library, with the current state of the context package such implementations are inefficient, as they need to spawn an extra goroutine to propagate cancellation from parents to child.

To solve the inefficiency I propose to add Merge functionality to the context package. The other possibility would be to expose a general mechanism to glue arbitrary third-party contexts into the context tree. However, since a) Merge is a well-defined concept, and b) there are (currently) no other well-known cases where a third-party context would need to allocate its own done channel (see #28728; this is the case where an extra goroutine for cancel propagation currently needs to be spawned), I tend to think it makes more sense to add Merge support to the context package directly instead of exposing a general mechanism for gluing arbitrary third-party contexts.

Below is a description of the proposed API and rationale:

---- 8< ----

Merging contexts

Merge could be handy in situations where a spawned job needs to be canceled whenever either of two contexts becomes done. This frequently arises with service methods that accept a context as argument, while the service itself, on another control line, can be instructed to become non-operational. For example:

func (srv *Service) DoSomething(ctx context.Context) (err error) {
	defer xerr.Contextf(&err, "%s: do something", srv)

	// srv.serveCtx is the context that becomes canceled when srv is
	// instructed to stop providing service.
	origCtx := ctx
	ctx, cancel := xcontext.Merge(ctx, srv.serveCtx)
	defer cancel()

	err = srv.doJob(ctx)
	if err != nil {
		if ctx.Err() != nil && origCtx.Err() == nil {
			// error due to service shutdown
			err = ErrServiceDown
		}
	}
	return err
}


func Merge

func Merge(parent1, parent2 context.Context) (context.Context, context.CancelFunc)

Merge merges 2 contexts into 1.

The result context:

  • is done when parent1 or parent2 is done, or cancel called, whichever happens first,
  • has deadline = min(parent1.Deadline, parent2.Deadline),
  • has associated values merged from parent1 and parent2, with parent1 taking precedence.

Canceling this context releases resources associated with it, so code should call cancel as soon as the operations running in this Context complete.

---- 8< ----

To merge the done channels of ctx and srv.serveCtx, a current implementation has to allocate its own done channel and spawn a corresponding goroutine:

context.WithCancel, when called on the resulting merged context, will have to spawn its own propagation goroutine too.

For reference, here is the context.Merge implementation in Pygolang that does the parents-child binding via data structures alone:

/cc @Sajmani, @rsc, @bcmills

EDIT 2020-07-01: The proposal was amended to split cancellation and values concerns: #36503 (comment).


@navytux navytux commented Feb 6, 2020

Judging by #33502, this proposal seems to have been missed. Could someone please add it to the Proposals/Incoming project? Thanks.

@bcmills bcmills added this to Incoming in Proposals Feb 6, 2020

@dweomer dweomer commented May 20, 2020

	ctx, cancel := xcontext.Merge(ctx, srv.serveCtx)

Is not the struct-held reference to a context a smell (regardless of how long-lived it is)? If your server must be cancellable, isn't it better practice to establish a "done" channel for it (and select on that + ctx in the main thread) and write to it when the server should be done? This does not incur an extra goroutine.


@navytux navytux commented May 21, 2020

@dweomer, as I already explained in the original proposal description, there are two cancellation sources: 1) the server can be requested to shut down by its operator, and 2) a request can be canceled by the client who issued it. This means that any request handler spawned to serve a request must be canceled whenever either "1" or "2" triggers. How does "select on done + ctx in main thread" help here? Which context should one pass into a request handler when spawning it? Or do you propose we pass both ctx and done into all handlers and add done into every select where previously only ctx was there? If that is indeed what you are proposing, I perceive Merge as a much cleaner solution, because handlers still receive only one ctx and the complexity of merging cancellation channels is not exposed to users.

Re smell: I think it is not. Go itself uses this approach in database/sql, net/http (2, 3, 4, 5, 6) and os/exec. I suggest reading Go and Dogma as well.


@seebs seebs commented May 22, 2020

I just reinvented this independently. The situation is that I have two long-lived things, a shared work queue which several things could be using, and the individual things, and it's conceptually possible to want to close the shared work queue and make a new one for the things to use... And then there's an operation where the work queue scans through one of the things to look for extra work. That operation should shut down if either the thing it's scanning shuts down, or the shared work queue in general gets shut down.

Of course, context.Merge wouldn't quite help as one of them currently exposes a chan struct{}, not a Context.

navytux added a commit to navytux/go123 that referenced this issue May 27, 2020

They are there for 6 months already without being considered for real
and so the progress is very unlikely, but still...

@rsc rsc commented Jun 10, 2020

I think I understand the proposal.

I'm curious how often this comes up. The Merge operation is significantly more complex to explain than any existing context constructor we have. On the other hand, the example of "server has a shutdown context and each request has its own, and have to watch both" does sound pretty common. I guess I'm a little confused about why the request context wouldn't already have the server context as a parent to begin with.

I'm also wondering whether Merge should take a ...context.Context instead of hard-coding two (think io.MultiReader, io.MultiWriter).

@rsc rsc moved this from Incoming to Active in Proposals Jun 10, 2020

@seebs seebs commented Jun 10, 2020

I do like the MultiReader/MultiWriter parallel; that seems like it's closer to the intent.

In our case, we have a disk-intensive workload that wants to be mediated, and we might have multiple network servers running independently, all of which might want to do some of that kind of work. So we have a worker that sits around waiting for requests, which come from those servers. And then we want to queue up background scans for "work that got skipped while we were too busy but we wanted to get back to it". The background scan of any given individual network server's workload is coming in parented by the network server, but now it also wants to abort if the worker decides it's closing. But the worker's not really contingent on the network server, and in some cases could be stopped or restarted without changing the network servers.

It's sort of messy, and I'm not totally convinced that this design is right. I think it may only actually matter during tests, because otherwise we wouldn't normally be running multiple network servers like this at once in a single process, or even on a single machine.


@bcmills bcmills commented Jun 10, 2020

@seebs, if the background work is continuing after the network handler returns, it's generally not appropriate to hold on to arbitrary values from the handler's Context anyway. (It may include stale values, such as tracing or logging keys, and could end up preventing a lot of other data reachable via ctx.Value() from being garbage-collected.)


@seebs seebs commented Jun 10, 2020

... I think I mangled my description. That's true, but it doesn't happen in this case.

Things initiated from the network requests don't keep the network request context if any of their work has to happen outside of that context. They drop something in a queue and wander off.

The only thing that has a weird hybrid context is the "background scanning", because the background scanning associated with a given network server should stop if that server wants to shut down, but it should also stop if the entire worker queue wants to shut down even when the network server is running. But the background scanning isn't triggered by network requests, it's something the network server sets up when it starts. It's just that it's contingent on both that server and the shared background queue which is independent from all the servers.


@navytux navytux commented Jun 16, 2020

@rsc, thanks for feedback.

Yes, as you say, the need for merge should be pretty common - practically in almost all client-server cases, on both client and server sides.

I guess I'm a little confused about why the request context wouldn't already have the server context as a parent to begin with.

For the networked case - when client and server interoperate via some connection where messages go serialized - it is relatively easy to derive the handler context from the server's base context and manually merge it with the context of the request:

  • the client serializes context values into the message on the wire;
  • the client sends a corresponding message when the client-side context is canceled;
  • the server creates a context for the handler, deriving it from the base server context;
  • the server applies decoded values from the wire message to the derived context;
  • the server stores the derived context's cancel in a data structure associated with the stream through which the request was received;
  • the server calls handler.cancel() when it receives, through the stream, a message corresponding to request cancellation.

Here merging can happen manually because the client request arrives at the server in serialized form.
The cancellation linking for the client-server branch is implemented via message passing and the serve loop. The data structures used for gluing resemble what Merge would do internally.

In other cases - where requests are not serialized/deserialized - the merge is needed for real, for example:

  1. on the server, a handler might need to call another internal in-process service run with its own context;
  2. client and server are in the same process, each run with their own contexts;
  3. on the client, every RPC stub that is invoked with a client-provided context needs to make sure to send RPC-cancellation whenever either that user-provided context is canceled, or the underlying stream is closed;
  4. etc...

Since "1" and "2", even though they are found in practice, might be viewed as a bit artificial, let's consider "3", which happens in practice all the time:

Consider any client method for e.g. an RPC call - it usually looks like this:

func (cli *Client) DoSomething(ctx context.Context, ...) {
	cli.conn.Invoke(ctx, "DoSomething", ...)
}
conn.Invoke needs to make sure the request to the server is issued under a context that is canceled whenever ctx is canceled, or whenever cli.conn is closed. For e.g. gRPC, cli.conn is a stream multiplexed over an HTTP/2 transport, and the stream itself must be closed whenever the transport link is closed or brought down. This is usually implemented by associating corresponding contexts with the stream and the link, and canceling stream.ctx <- link.ctx on link close/down. cli.conn.Invoke(ctx, ...) should thus do exactly what Merge(ctx, cli.conn.ctx) does.

Now, since there is no Merge, everyone implements this functionality by hand, with either an extra goroutine, or by doing something like

reqCtx, reqCancel = context.WithCancel(ctx)

, keeping a registry of issued requests with their cancels in link / stream data structures, and explicitly invoking all those cancels when the link / stream goes down.

Here is e.g. how gRPC implements it:

And even though such explicit gluing can be implemented by users, people get tired of it and start to use the "extra goroutine" approach at some point:

In other words, the logic and complexity that Merge could handle internally, once and for everyone, is - without Merge - scattered across every user and intermixed with the rest of the application-level logic.

On my side I would need Merge e.g. on the client,

and on the server, where the context of spawned handlers is controlled by messages from another server, which can tell the first server to stop being operational (it can as well later be told, by a similar message from the second server, to resume providing operational service):

and in many other places...

I often see simplicity as complexity put under control and wrapped into a simple interface.
From this point of view Merge is a perfect candidate because 1) it is a well-defined concept, 2) it offloads users from spreading that complexity throughout their libraries/applications, and 3) it in a sense completes the closure of the group of context operations, which was incomplete without it.

On "3" I think the following analogies are appropriate:

Without Merge context package is like

  • Git with commit and branches, but no merge;
  • Go with go and channels, but no select;
  • SSA without φ nodes,
  • ...

In other words Merge is a fundamental context operation.

Yes, Merge requires willingness from Go team to take that complexity and absorb it inside under Go API. Given that we often see reluctance to do so in other cases, I, sadly, realize that it is very unlikely to happen. On the other hand there is still a tiny bit of hope on my side, so I would be glad to be actually wrong on this...


P.S. I tend to agree about converting Merge to accept (parentv ...context.Context) instead of (parent1, parent2 context.Context).

P.P.S. merging was also discussed a bit in #30694 where @taralx wrote: "While it is possible to do this by wrapping the handler and merging the contexts, this is error-prone and requires an additional goroutine to properly merge the Done channels."

@rsc rsc changed the title from "proposal: context: Add Merge" to "proposal: context: add Merge" Jun 17, 2020

@rsc rsc commented Jun 17, 2020

@Sajmani and @bcmills, any thoughts on whether we should add context.Merge as described here? (See in particular the top comment.)


@rsc rsc commented Jun 17, 2020

/cc @neild @dsnet as well for more context opinions


@neild neild commented Jun 17, 2020

Within Google's codebase, where the context package originated, we follow the rule that a context.Context should only be passed around via the call stack.


Don't add a Context member to a struct type; instead add a ctx parameter to each method on that type that needs to pass it along. The one exception is for methods whose signature must match an interface in the standard library or in a third party library.

This rule means that at any point in the call stack, there should be exactly one applicable Context, received as a function parameter. When following this pattern, the merge operation never makes sense.

While merging context cancellation signals is straightforward, merging context values is not. Contexts can contain trace IDs and other information; which value would we pick when merging two contexts?

I also don't see how to implement this efficiently without runtime magic, since it seems like we'd need to spawn a goroutine to wait on each parent context. Perhaps I'm missing something.


@bcmills bcmills commented Jun 18, 2020

For values, Merge would presumably bias toward one parent context or the other. I don't see that as a big problem.

I don't think runtime magic is needed to avoid goroutines, but we would at least need some (subtle) global lock-ordering for the cancellation locks, since we could no longer rely on the cancellation graph being tree-structured. It would at least be subtle to implement and test, and might carry some run-time overhead.


@Sajmani Sajmani commented Jun 19, 2020

Context combines two somewhat-separable concerns: cancelation (via the Deadline, Done, and Err methods) and values. The proposed Merge function combines these concerns again, defining how cancelation and values are merged. But the example use case only relies on cancelation, not values:

I would feel more comfortable with this proposal if we separated these concerns by providing two functions, one for merging two cancelation signals, another for merging two sets of values. The latter came up in a 2017 discussion on detached contexts: #19643 (comment)

For the former, we'd want something like:

ctx = context.WithCancelContext(ctx, cancelCtx)

which would arrange for ctx.Done to be closed when cancelCtx.Done is closed and ctx.Err to be set from cancelCtx.Err, if it's not set already. The returned ctx would have the earlier Deadline of ctx and cancelCtx.

We can bikeshed the name of WithCancelContext, of course. Other possibilities include WithCanceler, WithCancelFrom, CancelWhen, etc. None of these capture Deadline, too, though.


@rsc rsc commented Jun 24, 2020

@navytux, what do you think about Sameer's suggestion to split the two operations of WithContextCancellation and WithContextValues (with better names, probably)?


@navytux navytux commented Jul 1, 2020

@Sajmani, @rsc, everyone, thanks for feedback.

First of all I apologize for the delay with replying, as I'm very busy these days and it is hard to find time to reply properly. This issue was filed 7 months ago, when things were very different on my side. Anyway, I quickly looked into what @Sajmani referenced in #36503 (comment), and into what others say; my reply is below:

Indeed, Context combines two things in one interface: cancellation and values. Those things, however, are orthogonal. While merging cancellation is straightforward, merging values is not - in general, merging values requires a merging strategy to decide how to combine values from multiple sources. And in the general case the merging strategy is custom and application dependent.

My initial proposal uses a simple merging strategy, with values from parent1 taking precedence over values from parent2. It is a simple strategy that I came up with while trying to make Merge work universally. However, the values part of my proposal, as others have noted, is indeed the weakest, as that merging strategy is not always appropriate.

Looking into what @Sajmani has said in #19643 (comment) and #19643 (comment), and with the idea of separating the cancellation and values concerns, I propose to split the Context interface into a cancellation part and a values part, and rework the proposal as something like the following:

// CancelCtx carries deadline and cancellation signal across API boundaries.
type CancelCtx interface {
        Deadline() (deadline time.Time, ok bool)
        Done() <-chan struct{}
        Err() error
}

// CancelFunc cancels a CancelCtx, telling an operation to abandon its work.
type CancelFunc func()

// Values carries a set of key->value pairs across API boundaries.
type Values interface {
        Value(key interface{}) interface{}
}

// Context carries deadline, cancellation signal, and other values across API boundaries.
type Context interface {
        CancelCtx
        Values
}

// ... (unchanged)
func WithCancel   (parent Context) (ctx Context, cancel CancelFunc)
func WithDeadline (parent Context, d  time.Time) (ctx Context, cancel CancelFunc)
func WithTimeout  (parent Context, dt time.Duration) (ctx Context, cancel CancelFunc)
func WithValue    (parent Context, key, val interface{}) Context

// MergeCancel merges cancellation from parent and a set of cancel contexts.
// It returns a copy of parent with a new Done channel that is closed whenever
//      - parent.Done is closed, or
//      - any CancelCtx from cancelv is canceled, or
//      - cancel is called,
// whichever happens first.
// The returned context has Deadline equal to the earliest of parent's and the cancels'.
// The returned context inherits values from parent only.
func MergeCancel(parent Context, cancelv ...CancelCtx) (ctx Context, cancel CancelFunc)

// WithNewValues returns a Context with a fresh set of Values. 
// It returns a Context that satisfies Value calls using vs.Value instead of parent.Value.
// If vs is nil, the returned Context has no values. 
// Returned context inherits deadline and cancellation only from parent. 
// Note: WithNewValues can be used to extract "only-cancellation" and
// "only-values" parts of a Context via
//      ctxNoValues := WithNewValues(ctx, nil)           // only cancellation
//      ctxNoCancel := WithNewValues(Background(), ctx)  // only values
func WithNewValues(parent Context, vs Values) Context 

Values and WithNewValues essentially come from #19643. Merge is reworked into MergeCancel, merging only the cancellation signal, not values. This separates the values and cancellation concerns, is general (does not hardcode any merging strategy for values), and can be implemented without an extra goroutine.

For reference, here is how the originally-proposed Merge could be implemented in terms of MergeCancel and WithNewValues:

// Merge shows how to implement the originally-proposed Merge
// in terms of MergeCancel and WithNewValues.
func Merge(parent1, parent2 Context) (Context, CancelFunc) {
        ctx, cancel := MergeCancel(parent1, parent2)
        v12 := &vMerge{[]Values{parent1, parent2}}
        ctx = WithNewValues(ctx, v12)
        return ctx, cancel
}

// vMerge implements a simple merging strategy: values from vv[i] take
// precedence over values from vv[j] for i < j.
type vMerge struct {
        vv []Values
}

func (m *vMerge) Value(key interface{}) interface{} {
        for _, v := range m.vv {
                val := v.Value(key)
                if val != nil {
                        return val
                }
        }
        return nil
}
Regarding the implementation: it was already linked-to in my original message, but, as people still raise concerns on whether the "avoid extra goroutine" property is achievable, and on lock ordering, here it is once again - how libgolang implements cancellation merging without an extra goroutine and without any complex lock ordering:

Maybe I'm missing something, and of course it will have to be adapted to MergeCancel and WithNewValues, but to me the implementation is relatively straightforward.


/cc @zombiezen, @jba, @ianlancetaylor, @rogpeppe for #19643


@rsc rsc commented Jul 8, 2020

Thanks for the reply. We're probably not going to split the Context interface as a whole at this point.
Note that even the ...CancelCtx would not accept a []Context, so that would be a stumbling block for users.

The value set merge can be done entirely outside the context package without any inefficiency. And as @neild points out, it's the part that is the most problematic.

The cancellation merge needs to be inside context, at least with the current API, or else you'd have to spend a goroutine on each merged context. (And we probably don't want to expose the API that would be required to avoid that.)

So maybe we should focus only on the cancellation merge and ignore the value merge entirely.

It still doesn't seem like we've converged on the right new API to add, though.

@bradfitz points out that not just the cancellation but also the timeouts get merged, right?
(And the error that distinguishes between those two cases gets propagated?)
So it's not really only merging cancellation.

It does seem like the signature is

func SOMETHING(parent Context, cancelv ...CancelCtx) (ctx Context, cancel CancelFunc)

Or maybe the op to expose is being able to cancel one context when another becomes done, like:

// Link arranges for context x to become done when context y does.
func Link(x, y Context) 

(with better names).


It seems like we're not yet at an obviously right answer.


@navytux navytux commented Jul 15, 2020

@rsc, thanks for feedback. I think I need to clarify my previous message:

  • last time I did not propose to include Merge - I only showed how it could be possible to implement Merge functionality in a third-party place with what comes in the proposal;

  • my proposal consists only of context.MergeCancel and context.WithNewValues;

  • context.WithNewValues comes directly from #19643 (comment) and #19643 (comment) - it is exactly what @Sajmani proposed there;

  • context.WithNewValues, and thinking about context.Context as being composed of two parts (cancellation + values), comes from @Sajmani's request to do so in #36503 (comment), where he says:

    "Context combines two somewhat-separable concerns: cancelation (via the Deadline, Done, and Err methods) and values
    I would feel more comfortable with this proposal if we separated these concerns by providing two functions, one for merging two cancelation signals, another for merging two sets of values..."

  • as @Sajmani explains, WithNewValues cannot be efficiently implemented outside of the context package; that's why I brought it in, to see the full picture.

  • regarding merging of cancellation it is

    func MergeCancel(parent Context, cancelv ...CancelCtx) (ctx Context, cancel CancelFunc)

    the only potential drawback here is that automatic conversion of []Context to []CancelCtx does not work.

    However, there is the same issue with e.g. io.MultiReader(readers ...io.Reader): if, for example, someone has []io.ReadCloser, or []OTHERTYPE with OTHERTYPE implementing io.Reader, it won't be possible to pass that slice to io.MultiReader directly without explicit conversion:

    package main

    import "io"

    func something(rv []io.Reader) {}

    func f2() {
            var xv []io.ReadCloser
            something(xv)
    }

    ./argv.go:9:11: cannot use xv (type []io.ReadCloser) as type []io.Reader in argument to something

    From this point of view, personally, I think it is ok for cancelv to be ...CancelCtx, because

    • explicit usage with direct arguments - even of Context type - without ... will work;
    • for the rare cases where users will want to use ..., they will have to explicitly convert to []CancelCtx, which is currently generally unavoidable, as the io.MultiReader example shows.

    That said, I'm ok if we change MergeCancel to accept ...Context, or even just one extra explicit argument MergeCancel(parent Context, cancelCtx Context). However I think that would be worse overall because it loses generality.

  • timeouts are indeed merged as part of cancellation, because timeouts are directly related to cancellation and are included in CancelCtx for that reason. They are not values. We cannot skip merging timeouts when merging cancellation.

    For example after

    func (srv) doSomething(ctx) {
        ctx, cancel := context.Merge(ctx, srv.serveCtx)
        ...
    }

    I, as a user, expect

    • ctx to be cancelled in particular when srv.serveCtx is cancelled;
    • ctx to have deadline not longer than deadline for srv.serveCtx.

    if we merge only the cancel signal, but not the deadline, the resulting context can be cancelled early - due to the srv.serveCtx timeout - but its deadline could be infinity, if the original ctx is e.g. background. To me it is not good to report deadline=∞ when it is known in advance that the operation will be canceled due to a timeout.

    That's my rationale for treating and merging deadlines together with cancellation.

    I think it coincides with @Sajmani's treatment that cancellation is constituted by Deadline, Done and Err, instead of only Done and Err without Deadline.

Regarding Link - I think it is indeed better to avoid exposing this general functionality in the API. Link can create cycles, and besides that it is not possible to implement Link for an arbitrary third-party context, because with only the Context interface there is no way to cancel it, even via an extra goroutine or whatever - at least without introducing extra interfaces a context must expose to be linkable. Contrary to that, MergeCancel is a well-defined operation and can be implemented generally - efficiently if all arguments are native to the context package, and via an extra goroutine propagating cancellation for contexts coming from third-party places.

What do you think? Does my feedback clarify anything?
It would be good to also see what @Sajmani thinks.



@rsc rsc commented Jul 15, 2020


FWIW, @Sajmani's comment from 2017 #19643 (comment) is out of date. WithNewValues can be implemented efficiently outside the context package, after changes we made recently.

Re: MergeCancel(parent Context, cancelCtx ...Context) being "worse overall because it looses generality", what generality does it lose? No one has any values of type CancelCtx today, so there is nothing to generalize. Even if we added the CancelCtx type, wouldn't every instance be a Context anyway? Certainly the only ones we can handle efficiently would be contexts.

It does sound like we're converging on

// MergeCancel returns a copy of parent with additional deadlines and cancellations applied
// from the list of extra contexts. The returned context's Done channel is closed
// when the returned cancel function is called, or when parent's Done channel is closed,
// or when any of the extra contexts' Done channels are closed.
// Canceling this context releases resources associated with it, so code should
// call cancel as soon as the operations running in this Context complete.
func MergeCancel(parent Context, extra ...Context) (ctx Context, cancel CancelFunc)

Does anyone object to those semantics? Maybe it should be MergeDone? Some better name?


@DmitriyMV DmitriyMV commented Jul 17, 2020


WithNewValues can be implemented efficiently outside the context package, after changes we made recently.

Can you elaborate on those changes?


@bcmills bcmills commented Jul 17, 2020

I think if we want to use only a subset of the Context methods, we should require only the necessary subset of those methods, not the full Context interface. Otherwise we still have the same awkward asymmetry from the straight-up Merge with Value fallback — it's just that that asymmetry happens after the first argument instead of uniformly across all arguments. (Eliminating the Value method also doesn't provide much benefit in terms of implementation complexity: it addresses the straightforward Value-chaining problem, but not the more difficult lock-ordering problem.)

The issue of assignability from []Context could be addressed using the current generics draft instead, although I'm not sure whether that's better or worse:

type DoneCtx interface {
	Done() <-chan struct{}
	Err() error
}

func MergeDone[type DC DoneCtx](parent Context, extra ...DC) (ctx Context, cancel CancelFunc)

@navytux navytux commented Jul 22, 2020

@rsc, everyone, thanks for feedback.

FWIW, @Sajmani's comment from 2017 #19643 (comment) is out of date. WithNewValues can be implemented efficiently outside the context package, after changes we made recently.

@rsc, here you probably mean commit 0ad3686 (CL196521), which implemented done propagation through foreign contexts by introducing a dedicated cancelCtxKey value type:


Lines 288 to 302 in 11f92e9

// &cancelCtxKey is the key that a cancelCtx returns itself for.
var cancelCtxKey int

// parentCancelCtx returns the underlying *cancelCtx for parent.
// It does this by looking up parent.Value(&cancelCtxKey) to find
// the innermost enclosing *cancelCtx and then checking whether
// parent.Done() matches that *cancelCtx. (If not, the *cancelCtx
// has been wrapped in a custom implementation providing a
// different done channel, in which case we should not bypass it.)
func parentCancelCtx(parent Context) (*cancelCtx, bool) {
	done := parent.Done()
	if done == closedchan || done == nil {
		return nil, false
	}
	p, ok := parent.Value(&cancelCtxKey).(*cancelCtx)
	...


Lines 353 to 358 in 11f92e9

func (c *cancelCtx) Value(key interface{}) interface{} {
	if key == &cancelCtxKey {
		return c
	}
	return c.Context.Value(key)
}


Lines 264 to 285 in 11f92e9

	if p, ok := parentCancelCtx(parent); ok {
		p.mu.Lock()
		if p.err != nil {
			// parent has already been canceled
			child.cancel(false, p.err)
		} else {
			if p.children == nil {
				p.children = make(map[canceler]struct{})
			}
			p.children[child] = struct{}{}
		}
		p.mu.Unlock()
	} else {
		atomic.AddInt32(&goroutines, +1)
		go func() {
			select {
			case <-parent.Done():
				child.cancel(false, parent.Err())
			case <-child.Done():

In other words, in today's implementation, for cancellation to work efficiently, the cancelCtxKey value has to be reachable through Value; otherwise, the next time e.g. WithCancel is called, it has to spawn a goroutine to propagate cancellation.

If we imagine WithNewValues implemented outside the context package, how would that third-party implementation a) preserve cancelCtxKey when replacing the values with a new set, and b) avoid injecting a cancelCtxKey from the new set that would corrupt cancellation? All this given that cancelCtxKey is private to the context package.

Maybe I'm missing something, but to me this shows that even today WithNewValues cannot be implemented outside the context package efficiently, or even correctly.

Regarding cancellation: it is good we start to converge to common understanding, thanks.

For naming, I think MergeCancel is a good name. As we discussed above, cancellation consists of more than the done channel: it also carries a deadline and an associated error. The name also aligns well with how the word "cancel" is used elsewhere in the package, for example in the package overview, in the cancellation description of the Context interface, and in WithCancel. MergeDone strikes me as a weaker alternative.

Regarding ...Context vs ...CancelCtx in the MergeCancel argument: the problem is that once we establish the signature of MergeCancel, backward compatibility will likely prevent us from changing it later if/when we decide to introduce a CancelCtx type. In other words, if the full Context interface is never split into a cancellation-only part (CancelCtx) and a values-only part (Values), people will still have to propagate and use the whole Context even where a function uses only one part. This stands in the way of a cleaner API and misleads programmers into thinking that whenever a context is passed in, the corresponding operation can be cancelled and errored out, or that it can use values associated with the context, where in fact it must not. This coincides with what @bcmills says in #36503 (comment), and is also exactly the reason @Sajmani pointed out in #19643 (comment):

In an earlier comment, I proposed defining this interface:

type Values interface {
	Value(key interface{}) interface{}
}
For use with a context.WithNewValues function.

It occurred to me that the ability to separate a context's values from the rest of the context (notably its deadline and cancelation) is also useful in logging and tracing. Some logging APIs want values from a context, but it is somewhat confusing to pass a Context to a Log call, since this suggests that the Log call might be canceled. With a Values interface, we can define:

func Log(vals context.Values, ...)

Which makes it clear that the logger is only consuming the values from the Context.

I hope this clarifies what kind of generality we lose if we establish cancelv ...Context instead of cancelv ...CancelCtx now.

With all that feedback in mind, I would still like to see, and would appreciate, feedback from @Sajmani.

It was he who raised the "separate cancellation and values" concern in #36503 (comment), and the way I reworked my proposal in #36503 (comment) was a direct response to that request.

I feel we are likely to miss the bigger picture without feedback from Sameer; that's why I'm asking for it.

Thanks beforehand,
