doc: define how sync/atomic interacts with memory model #5045

Open
rsc opened this issue Mar 13, 2013 · 71 comments
@rsc (Contributor) commented Mar 13, 2013

Neither golang.org/ref/mem nor golang.org/pkg/sync/atomic says anything about what
guarantees are made by the atomic operations wrt the memory model. They should be as
weak as possible, of course, but right now they are non-existent, which is a little too
weak.

We might say, for example, that an atomic.Store writing a value to a memory location
happens before an atomic.Load that reads that value from the memory location. Is that
something we want to say? If not, what do we want to say?

What about Add?

What about CompareAndSwap?
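For concreteness, here is a minimal sketch (not part of the original comment) of the publication pattern such a Store/Load guarantee would support; the variable names (data, ready) and the value 42 are illustrative:

package main

import "sync/atomic"

var (
	data  int
	ready int32
)

func publisher() {
	data = 42                    // plain write
	atomic.StoreInt32(&ready, 1) // atomic "publish"
}

func consumer() {
	if atomic.LoadInt32(&ready) == 1 {
		// If the Store happens before the Load that observes it,
		// this plain read is guaranteed to see data == 42.
		_ = data
	}
}

func main() {
	go publisher()
	consumer()
}
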
@dvyukov (Member) commented Mar 14, 2013

Comment 1:

Why do you want them to be as weak as possible? Weak atomics are difficult to program.
@dvyukov (Member) commented Mar 14, 2013

Comment 2:

> We might say, for example, that an atomic.Store writing a value to a memory location
> happens before an atomic.Load that reads that value from the memory location. Is that
> something we want to say? If not, what do we want to say?

Yes, we want to say that.
Regarding Add/CAS, it should be formulated in more general terms, along the lines of:
an atomic operation that stores a value (incl. Add/CAS) happens before an atomic operation
that reads that value from the memory location (incl. Add/CAS).
However, this does not cover the Dekker synchronization pattern:
X = Y = 0
// goroutine 1
X = 1   // atomic
r1 = Y  // atomic
// goroutine 2
Y = 1   // atomic
r2 = X  // atomic
The rule above allows r1 == r2 == 0; however, such an outcome is impossible under sequential
consistency (total order).
The Dekker pattern is used in tricky mutual exclusion algorithms and in safe object
reclamation schemes. On one hand it's used very infrequently, but on the other hand
there would be no way to implement it at all. That's why I am asking about "as weak as
possible".
@rsc (Contributor, Author) commented Mar 14, 2013

Comment 3:

Let me amend my earlier statement: I want them to be as weak as possible
but still useful, just as the current memory model is very weak compared to
what other languages have to say about the topic but is still useful.
@dvyukov (Member) commented Mar 15, 2013

Comment 4:

The current chan semantics are complete wrt the problems they solve.
Atomics won't be complete if they provide weak synchronization guarantees, i.e. some
problems will be unsolvable.
Moreover, sequential consistency is the simplest to specify (C/C++ complexity comes
exactly from weak atomics -- possible reorderings, data dependencies, etc).
Moreover, sequential consistency is easy to understand and explain (remember the recent
discussion and confusion about chan-based semaphores, and that the Effective Go example was
incorrect for several years).
@rsc (Contributor, Author) commented Mar 15, 2013

Comment 5:

I think we are using different meanings for the word weak. You have a very
precise meaning in mind. I do not. I just mean "let's not guarantee more
than we need to guarantee to make things useful for people." That's a
general goal, not a concrete proposal.
Dmitriy, if you have time, could you please make a proposal about what you
think the atomics should guarantee? A few sentences here in the issue is
fine.
Thanks.
Russ
@robpike (Contributor) commented May 18, 2013

Comment 6:

Labels changed: added go1.2maybe, removed go1.1maybe.

@dvyukov (Member) commented Jul 30, 2013

Comment 7:

Please clarify what you mean by "weak".
The problem with atomic operations is that they are lower level than chans. There are
lots of practically useful things that are possible to build using atomics.
So what do you want to specify:
1. Semantics for the majority of simpler use cases (say 95%), leaving the remaining cases
unspecified for now.
or 2. Semantics for all practically useful cases.
I would vote for 2, because sooner or later somebody will ask about the remaining 5%, and
the answer "you can rely on X guarantee, but we do not want to officially guarantee it" does
not look good. (btw we use that remaining 5% in e.g. WaitGroup.)
And 2 is extremely strong; it's not weak in any possible sense of the word.
@rsc (Contributor, Author) commented Jul 31, 2013

Comment 8:

I mean 1, especially if the semantics can be kept to a minimum.
No, that is not the answer. The answer is "if it is not in the memory model
you must not depend on it." If that's still true once we have defined the
semantics, we should rewrite WaitGroup.
I asked you to write a few sentences sketching the semantics you want, but
you haven't done that.
@dvyukov (Member) commented Aug 8, 2013

Comment 9:

The minimal semantics must be along the lines of:
"If an atomic operation A observes a side effect of an atomic operation B, then A
happens before B".
That's basically it.
Note that not only Load can observe the side effect. The return value from Add and
CompareAndSwap also allows us to infer which side effect we observe. Read-modify-write
operations (Add, CAS) first observe the side effect of a previous operation on the same var,
and then produce a new side effect. I assume that there is a total order Mv over all
atomic operations that mutate an atomic variable V.
Such a definition supports use cases like producer-consumer, object publication, etc.
However, such a definition does not support trickier synchronization patterns. And frankly
I would not want to rewrite any existing synchronization primitives due to this. In the
runtime we have a dozen such "unsupported" cases; I understand that those are different
atomics, but I just want to show that such use cases exist.
Semantics that cover all synchronization patterns would be along the lines of:
"There is a total order S over all atomic operations (that is consistent with the
modification orders M of individual atomic variables, happens-before relations,
bla-bla-bla). An atomic operation A happens after all atomic operations that precede A
in S".
The trick here is that you usually cannot infer S (w/o any pre-existing happens-before
relations). The only (?) cases where you can infer useful information from S are:
1. When atomic operations A and B operate on the same var, which makes this
definition a superset of the first definition (S is consistent with all Mv).
2. When it's enough to know that either A happens-before B or vice versa (this is true
for any pair of atomic operations due to the total order S).
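As an illustration of the first (minimal) definition, here is a rough refcount sketch (not from the original comment; the object type and decRef name are made up) that relies on a read-modify-write Add observing the effects of earlier decrements:

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type object struct {
	refs int32
	data []byte
}

// decRef relies on the rule that the final Add observes the side effects
// of all earlier Adds on refs, so writes made before those earlier
// decrements are visible to whoever drops the count to zero.
func decRef(o *object) {
	if atomic.AddInt32(&o.refs, -1) == 0 {
		o.data = nil // safe: no other goroutine still holds a reference
	}
}

func main() {
	o := &object{refs: 2, data: make([]byte, 8)}
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); decRef(o) }()
	}
	wg.Wait()
	fmt.Println(o.data == nil) // true
}
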
@dvyukov (Member) commented Aug 8, 2013

Comment 10:

>"If an atomic operation A observes a side effect of an atomic operation B, then A
happens before B"
s/A happens before B/B happens before A/
@rsc (Contributor, Author) commented Aug 13, 2013

Comment 11:

How about this:
"Package sync/atomic provides access to individual atomic operations. These atomic
operations never happen simultaneously. That is, for any two atomic operations e1 and
e2, either e1 happens before e2 or e2 happens before e1, even if e1 and e2 operate on
different memory locations."
Is that a good idea? Is it too strong? Is it more than we need, less than we need? Is it
going to be too hard to guarantee on systems like Alpha? I don't know. But at least it
is simple and I understand what it is saying. That's different than understanding all
the implications.
@dvyukov (Member) commented Aug 14, 2013

Comment 12:

As per offline discussion, your "either e1 happens before e2 or e2 happens before e1"
definition looks good if data races are prohibited. Otherwise, racy accesses allow one to
infer weird relations, e.g. that a Load happens-before a Store:
// thread 1
x = 1
atomic.Load(&whatever)
y = 1
// thread 2
if y == 1 {
  atomic.Store(&whatever2)
  println(x) // must print 1
}
This means that a Load must execute a release memory barrier and a Store an acquire memory
barrier. Most likely this will make implementations of atomic operations costlier.
@rsc (Contributor, Author) commented Aug 14, 2013

Comment 13:

Okay, maybe that's a bad definition then (I was just rephrasing yours, I
believe). It sounds like it is too strong. Are loads and stores the only
problem? Is this any better?
"""
Package sync/atomic provides access to individual atomic operations. For
any two atomic operations e1 and e2 operating on the same address:
  - if e1 is not a Load, e2 is a Load, and e2 observes the effect of e1, e1
happens before e2.
  - if e1 is a Store, e2 is not a Store, and e2 observes the effect of e1,
e1 happens before e2.
  - if neither operation is a Load or Store, either e1 happens before e2 or
e2 happens before e1.
"""
@dvyukov (Member) commented Aug 15, 2013

Comment 14:

Why don't you want to give up on data races?
We probably can ensure atomicity of word accesses in gc w/o sacrificing important
optimizations. But:
1. We can not ensure visibility guarantees, e.g. if a var is kept in a register in a loop,
and at that point racy accesses become almost useless.
2. Races are definitely not safe for maps and slices.
3. Most likely we can not ensure any guarantees for races in gccgo (not sure what gcc
java does here).
4. I do not see any benefits of allowing data races. Currently there is a runtime cost for
calling atomic.Load instead of doing a plain load. But this must be addressed by providing
better atomic operations with compiler support (if that becomes the bottleneck).
Allowing data races instead to solve this looks completely wrong.
If we prohibit data races, it would make reasoning about atomic operations much, much
simpler.
@dvyukov (Member) commented Aug 15, 2013

Comment 15:

There are 2 litmus tests for atomic operations:
1.
// goroutine 1
data = 42
atomic.Store(&ready, 1)
// goroutine 2
if atomic.Load(&ready) != 0 {
  if data != 42 {
    panic("broken")
  }
}
2.
// goroutine 1
atomic.Store(&X, 1)
r1 = atomic.Load(&Y)
// goroutine 2
atomic.Store(&Y, 1)
r2 = atomic.Load(&X)
// afterwards
if r1 == 0 && r2 == 0 {
  panic("broken")
}
As far as I can see, your definition does not work for 2.
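A rough, runnable rendering of litmus test 2 using sync/atomic (assuming sequentially consistent atomics; the variable names mirror the pseudocode above). Under a rule that only orders operations on the same variable, the final panic would not be ruled out:

package main

import (
	"sync"
	"sync/atomic"
)

func main() {
	var X, Y int32
	var r1, r2 int32
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { // goroutine 1
		defer wg.Done()
		atomic.StoreInt32(&X, 1)
		r1 = atomic.LoadInt32(&Y)
	}()
	go func() { // goroutine 2
		defer wg.Done()
		atomic.StoreInt32(&Y, 1)
		r2 = atomic.LoadInt32(&X)
	}()
	wg.Wait()
	if r1 == 0 && r2 == 0 { // impossible only if atomics form a total order
		panic("broken")
	}
}
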
@dvyukov (Member) commented Aug 15, 2013

Comment 16:

For 2 to work, atomic operations (including loads and stores) must form a total order.
Probably the following two-clause definition will do:
(1) If an atomic operation A observes an effect of an atomic operation B, then B happens
before A.
(2) All atomic operations form a total order that is consistent with happens-before
relations, the modification orders of individual atomic variables, and the intra-goroutine
order of operations.
(2) implies that the values returned by atomic operations and their side effects are
dictated by the total order. I am not sure whether that's obvious or not.
Note that (2) does not introduce new happens-before relations. Even if you somehow infer
that A precedes B in the total order (e.g. by using racy memory accesses), this gives you
nothing.
@rsc (Contributor, Author) commented Aug 15, 2013

Comment 17:

You wrote "why don't you want to give up on data races?". That's not what I
am trying to do. I am trying to avoid making atomic.Load and atomic.Store
unnecessarily expensive.
@dvyukov (Member) commented Aug 15, 2013

Comment 18:

It is more difficult to do in the presence of data races.
Without data races the following looks OK:
"Package sync/atomic provides access to individual atomic operations. These atomic
operations never happen simultaneously. That is, for any two atomic operations e1 and
e2, either e1 happens before e2 or e2 happens before e1, even if e1 and e2 operate on
different memory locations."
@rsc (Contributor, Author) commented Aug 15, 2013

Comment 19:

Okay, then define how to prohibit data races.
@dvyukov (Member) commented Aug 15, 2013

Comment 20:

It's simple. We need to replace:
--------------------------
To guarantee that a read r of a variable v observes a particular write w to v, ensure
that w is the only write r is allowed to observe. That is, r is guaranteed to observe w
if both of the following hold:
w happens before r.
Any other write to the shared variable v either happens before w or after r.
This pair of conditions is stronger than the first pair; it requires that there are no
other writes happening concurrently with w or r.
Within a single goroutine, there is no concurrency, so the two definitions are
equivalent: a read r observes the value written by the most recent write w to v. When
multiple goroutines access a shared variable v, they must use synchronization events to
establish happens-before conditions that ensure reads observe the desired writes.
The initialization of variable v with the zero value for v's type behaves as a write in
the memory model.
Reads and writes of values larger than a single machine word behave as multiple
machine-word-sized operations in an unspecified order.
--------------------------
with:
--------------------------
If there is more than one such w, the behavior is undefined.
The initialization of variable v with the zero value for v's type behaves as a write in
the memory model.
--------------------------
@robpike (Contributor) commented Aug 20, 2013

Comment 21:

Is this converging?
@dvyukov (Member) commented Aug 20, 2013

Comment 22:

Difficult to say. I would not hurry with this. Moving to 1.3.

Labels changed: added go1.3, removed go1.2maybe.

@robpike (Contributor) commented Aug 20, 2013

Comment 23:

Labels changed: removed go1.3.

@rsc (Contributor, Author) commented Nov 27, 2013

Comment 24:

Labels changed: added go1.3maybe.

@rsc (Contributor, Author) commented Dec 4, 2013

Comment 25:

Labels changed: added release-none, removed go1.3maybe.

@rsc (Contributor, Author) commented Dec 4, 2013

Comment 26:

Labels changed: added repo-main.

@rsc added accepted labels Dec 4, 2013
@fmstephe commented Jan 31, 2019

At two recent Go conferences I have heard very high quality talks on low-level concurrency in Go.

One was about a specific lock-free algorithm used by Prometheus. The other looked into how Go's race detector works. Both talks were of very high quality, and the audience showed a strong interest in the topics presented.

This demonstrates that there is an eager audience for documenting the semantics of the atomic package.

It is also clear that while low-level atomics are niche and difficult to use, they are already being used in production. The Prometheus library is used extensively in a lot of Go systems.

If I do a search for source code importing atomic in just the packages I have on my work laptop I get 60 hits; only 3 of those are in code we authored ourselves (and we don't use Prometheus).

I only write this to provide anecdotal data which suggests that documenting the semantics of the atomic package is probably very valuable. I'm not sure if that's a helpful comment or not.

@nhooyr (Contributor) commented Mar 2, 2019

> specific lock free algorithm used by prometheus

I would be interested in watching this talk, do you know if there is a video anywhere?

@folays commented Mar 27, 2019

Please take this statement with a grain of salt, since my knowledge is more theory than practice.

I think that the lock-free algorithm used in the Prometheus talk linked by @nhooyr may exhibit undefined behaviour (a data race) on non-x86 hardware, specifically ARM with multiple processors and goroutines.

https://en.wikipedia.org/wiki/Memory_ordering specifies that x86 (but not ARM) prevents reordering of loads/stores after atomic operations (sync/atomic Load/Store/Add/etc...)

Linux even seems to provide semantics to issue atomic_*_{acquire,release}() ops: https://www.kernel.org/doc/Documentation/atomic_t.txt

I wouldn't be surprised if in Linux the rationale was to let users of atomic_* operations avoid issuing a (useless) memory barrier after atomic ops (useless only on platforms where atomic ops are already serializing).

Golang source code seems to not explicitly issue memory barrier instructions on ARM after atomic-Xadd, but my asm knowledge for ARM is zero. On x86, Golang seems to only issue a "lock"-prefixed xaddl (without barriers, but locked instructions should already be synchronising on x86).

If there is confirmation that ARM can reorder atomics with loads/stores, and that Golang does not explicitly add memory barrier instructions on atomic ops, maybe the histogram.go file from Prometheus is indeed subject to a data race on ARM.

This would be a hint that the Go language is missing either

  1. a user-issuable memory barrier
  2. atomic_*_{relaxed,acquire,release} instructions

Maybe having those two missing from the language is for the better, since misuse can cause sporadic data races.

In any case, maybe the Go documentation should state explicitly whether atomic_*() ops have either:

  1. undefined behaviour when used across goroutines, on the specific matter of happens-before relationships, i.e. "it depends on the platform" (x86 will work; ARM will bug)
  2. defined behaviour: "we explicitly issue memory barrier instructions behind your back" (heavy impact?)
  3. defined behaviour if used with acquire/release semantics (missing from Go)

On the specific subject of a user-issuable memory barrier, I guess that for any specific goroutine, a successive lock+unlock of a useless sync.Mutex, only accessible from that goroutine, uncontended, and alone on its cache line, could maybe be a hacky way of issuing a full memory barrier in Go.

On the other hand, I understand that the rationale behind Go is to not try to synchronize your goroutines yourself, and instead use sync and sync/atomic, which may or may not define the happens-before relationship.

For the "histogram.go" file, I did not benchmark alternatives, but I'm a little sceptical about the rationale behind "we do not want a channel to synchronize tons of observations from multiple goroutines".
To me it seems that if multiple goroutines have tons of observations, which they atomic_add() themselves (the solution proposed by the talk speaker), it would incur some heavy cache-line bouncing.
I guess that atomic_*() instructions, even wait-free, are not cheap if your CPU core is not already the owner of the cache line and there is heavy contention.
I guess that a single goroutine in charge of those atomic_inc() calls could benefit from an uncontested cache line on the histogram, and could receive all observations from one channel (or even multiple channels to prevent contention, if it can happen, on the unique channel).

For the part I call "how to get a snapshot of the histogram to Write() it elsewhere, without blocking all observations", I guess that the speaker of the talk could keep his technique of having two versions of the same data structure (a hot and a cold one), and synchronize the swap (upon Write-to-elsewhere) and access (upon observations) with a sync.RWMutex (a rough sketch follows the list below):

  • The Write-to-elsewhere would Lock()+swap+Unlock()
  • The observations would RLock()+atomic add+RUnlock()
    And the atomic add could be downgraded to a non-atomic one if there were only one observing goroutine.
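A minimal sketch of that hot/cold idea, with made-up names (hist, Observe, Snapshot); it only illustrates the Lock-for-swap / RLock-for-observe split, not the actual Prometheus implementation:

package main

import (
	"sync"
	"sync/atomic"
)

type hist struct {
	mu        sync.RWMutex
	hot, cold []uint64 // bucket counters
}

// Observe records a value into bucket i of the hot half.
// RLock allows many observers in parallel; the atomic add keeps
// concurrent increments of the same bucket safe.
func (h *hist) Observe(i int) {
	h.mu.RLock()
	atomic.AddUint64(&h.hot[i], 1)
	h.mu.RUnlock()
}

// Snapshot swaps hot and cold under the write lock and returns the
// now-quiescent former hot half, ready to be written out elsewhere.
func (h *hist) Snapshot() []uint64 {
	h.mu.Lock()
	h.hot, h.cold = h.cold, h.hot
	out := h.cold
	h.mu.Unlock()
	return out
}

func main() {
	h := &hist{hot: make([]uint64, 4), cold: make([]uint64, 4)}
	h.Observe(1)
	_ = h.Snapshot()
}
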

As a personal preference, I would like to have access to low-level semantics from Go (atomic operations, relaxed or not, and barriers).
It would give options to try and benchmark code using them against the higher-level synchronisation primitives Go provides (essentially just mutexes and channels).

@ianlancetaylor (Contributor) commented Mar 27, 2019

@folays The purpose of this issue is to document exactly what the Go functions in the sync/atomic package do.

> Golang source code seems to not explicitly issue memory barrier instructions on ARM after atomic-Xadd, but my asm knowledge for ARM is zero.

The implementation of Xadd on ARM can be found at https://golang.org/src/runtime/internal/atomic/atomic_arm.go#L41 . As you can see, it relies on Cas. Cas is implemented either by the Linux kernel, or by https://golang.org/src/runtime/internal/atomic/asm_arm.s#L7 . In both cases the appropriate DMB (data memory barrier) instruction is used.
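For illustration, here is a rough sketch of how an atomic add can be layered on compare-and-swap, written against the public sync/atomic API rather than the runtime internals (the addInt32 name is made up):

package main

import (
	"fmt"
	"sync/atomic"
)

// addInt32 sketches an atomic add built from CompareAndSwap,
// loosely mirroring how the runtime's Xadd is layered on Cas on ARM.
func addInt32(addr *int32, delta int32) int32 {
	for {
		old := atomic.LoadInt32(addr)
		if atomic.CompareAndSwapInt32(addr, old, old+delta) {
			return old + delta
		}
	}
}

func main() {
	var n int32
	fmt.Println(addInt32(&n, 3)) // 3
}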

> As a personal preference, I would like to have access to low-level semantics from Go (atomic operations, relaxed or not, and barriers).

That is a fine thing to discuss, but that discussion is not appropriate for this issue. Please use the golang-nuts group for that. Thanks.

@zhangfannie (Contributor) commented Mar 28, 2019

I am trying to do some optimizations using ARMv8.1 instructions but face difficulty choosing the right instructions due to the missing memory ordering definition.

There might be other issues caused by the missing memory ordering definition. Potentially, it can cause misalignment between Go application developers and Go toolchain developers. And the behavior may not be portable across architectures. For example:

// goroutine 1
atomic.Store(&X, 1)
r1 = Y
// goroutine 2
atomic.Store(&Y, 1)
r2 = X
// afterwards
if r1 == 0 && r2 == 0 {
  panic("broken")
}

Please refer to the runnable test code: https://github.com/zhangfannie/go/blob/testatomic/test.go
The above code won't panic on x86, but may panic on arm64. Without an explicit memory ordering definition, it is hard to tell whether the application code is wrong, Golang x86 is wrong, or Golang arm64 is wrong.
Can we put some explicit definition in the document? So that people can have a unified understanding of the memory order. And it will be helpful for us to select the right instructions when implementing the Golang backend.
If we think the current Golang implementation is correct and satisfies most use scenarios, it might be reasonable to just document the most relaxed behavior among the different backends for each operation. So that we can have a memory order definition without breaking existing applications.

@tv42 commented Mar 28, 2019

@zhangfannie You have a race between f1's plain read of Y and f2's atomic store to Y (and vice versa for X); the code is wrong on all platforms, even if some platform might not manage to trigger the race.

@robaho commented Mar 28, 2019

The reads into r1 and r2 need to use atomic.Load(). But with a more explicit memory model that might not be needed; it depends on whether the compiler understands the semantics of the atomic methods.

@zhangfannie (Contributor) commented Mar 29, 2019

@tv42 @robaho @rsc The data race in the example code is intended. Actually, this is a modified version of the Dekker pattern from #5045 (comment). What I want to say here is that we do not know whether this code is wrong or not without a clear ordering definition.

The code is wrong if atomic.Store follows store-release semantics and atomic.Load follows load-acquire semantics; the plain loads should be changed to atomic.Load.
The code can be correct if the memory ordering definition requires full barriers inserted before and after atomic.Store.

Having a memory model defined is always better than guessing the behavior of the toolchain.

@robaho commented Mar 29, 2019

Actually, since the program must be sequentially consistent in the absence of threads, the program should never panic.

@tv42 commented Mar 29, 2019

@zhangfannie The only happens-before relationship between the last iteration r1[i] = Y[i] and main is through wg.Wait(). There are no atomic operations after that last (non-atomic) assignment. There is no happens-before relationship at all between the last r1[i] = Y[i] and any part of f2. Hence, f2 reading the last r1[i] is a race. This follows from current-day contents of https://golang.org/ref/mem . The comment you tried to link to uses atomic loads; your code does not.

@robaho commented Mar 29, 2019

Even with atomic loads there is a data race, in that the only thing you can reason about is that the values will never be 0 (unless they wrap around).

@gopherbot commented Jul 11, 2019

Change https://golang.org/cl/185737 mentions this issue: [RFC]sync/atomic: specify the memory order guarantee provided by Load/Store

@gopherbot commented Aug 8, 2019

Change https://golang.org/cl/189417 mentions this issue: sync/atomic define sync/atomic memory models

@eloff commented Nov 16, 2019

Can we just add: "Go's atomics guarantee sequential consistency among the atomic variables (they behave like C/C++'s seq_cst atomics). You shouldn't mix atomic and non-atomic accesses for a given memory word, unless some other full memory barrier, like a Mutex, guarantees exclusive access. You shouldn't mix atomics of different memory sizes for the same address."

I think it's bad that this trivial issue has been open for 6 years. At this point there's no way to specify anything more relaxed, even if that were desirable, without breaking code in the wild in hard-to-detect ways.

I'll sign the contributor agreement and figure out how to submit the pull request / change request for it, if somebody on the Go team will sign off on the final wording and commit to reviewing/merging it.

@robaho commented Nov 16, 2019

@zhiqiangxu (Contributor) commented Dec 14, 2019

I agree with @eloff: we should let all users know that all exposed functions in sync/atomic guarantee sequential consistency, rather than this being known only within the Go team (e.g. #32428 (comment) and https://stackoverflow.com/a/58892365/3382012), unless one day we decide to expose atomic.LoadAcq/atomic.StoreRel or the like; by then we can add additional documentation for those functions. It has been pending for 6 years, time to make a change :)

@eloff commented Dec 22, 2019

@robaho I'm referring to the C++ atomics happens-before ordering when talking about consistency. The docs can either point one to the C++ docs for the happens-before wording, or copy it.

Since there's no movement on this, I'm going to sign the contrib agreement and submit a pull-request.

@bcmills (Member) commented May 5, 2020

Here's a neat question that needs to be resolved: are programs allowed to use 32-bit atomic ops concurrently with 64-bit atomic ops on the same memory?

For example, is this program racy?

package main

import (
	"fmt"
	"sync/atomic"
	"unsafe"
)

func main() {
	var x uint64
	xa := (*[2]uint32)(unsafe.Pointer(&x))
	xl := &xa[0]
	xh := &xa[1]

	done := make(chan struct{})
	go func() {
		atomic.StoreUint64(&x, 0xbadc0ffee)
		close(done)
	}()

	x0 := atomic.LoadUint32(xl)
	x1 := atomic.LoadUint32(xh)

	<-done
	fmt.Println(x0, x1)
}

My instinct is that this sort of size-mixing must not be allowed, because the implementations for 32-bit and 64-bit atomic ops may differ on some 32-bit hardware. However, the race detector does not currently flag it.

@yangwenmai commented Jul 10, 2020

> specific lock free algorithm used by prometheus
>
> Would be interested in watching this talk, do you know if there is a video anywhere?

@nhooyr GopherCon UK 2019: Björn Rabenstein - Lock-free Observations for Prometheus Histograms

https://youtu.be/VmrEG-3bWyM
