doc: define how sync/atomic interacts with memory model #5045

Open
rsc opened this Issue Mar 13, 2013 · 51 comments

@rsc
Contributor

rsc commented Mar 13, 2013

Neither golang.org/ref/mem nor golang.org/pkg/sync/atomic says anything about what
guarantees are made by the atomic operations wrt the memory model. They should be as
weak as possible, of course, but right now they are non-existent, which is a little too
weak.

We might say, for example, that an atomic.Store writing a value to a memory location
happens before an atomic.Load that reads that value from the memory location. Is that
something we want to say? If not, what do we want to say?

What about Add?

What about CompareAndSwap?
@dvyukov

Member

dvyukov commented Mar 14, 2013

Comment 1:

Why do you want them to be as weak as possible? Weak atomics are difficult to program.
@dvyukov

Member

dvyukov commented Mar 14, 2013

Comment 2:

>We might say, for example, that an atomic.Store writing a value to a memory location
happens before an atomic.Load that reads that value from the memory location. Is that
something we want to say? If not, what do we want to say?
Yes, we want to say that.
Regarding Add/CAS, it should be formulated in more general terms, along the lines of:
an atomic operation that stores a value (incl. Add/CAS) happens before an atomic
operation that reads that value from the memory location (incl. Add/CAS).
However, this does not cover the Dekker synchronization pattern:
X = Y = 0
// goroutine 1
X = 1  // atomic
r1 = Y  // atomic
// goroutine 2
Y = 1  // atomic
r2 = X  // atomic
The rule above allows r1 == r2 == 0; however, such an outcome is impossible under
sequential consistency (total order).
The Dekker pattern is used in tricky mutual exclusion algorithms and in safe object
reclamation schemes. On one hand it's used very infrequently, but on the other hand,
without a stronger guarantee there will be no way to implement it at all. That's why I
am asking about "as weak as possible".
@rsc

Contributor

rsc commented Mar 14, 2013

Comment 3:

Let me amend my earlier statement: I want them to be as weak as possible
but still useful, just as the current memory model is very weak compared to
what other languages have to say about the topic, yet is still useful.
@dvyukov

Member

dvyukov commented Mar 15, 2013

Comment 4:

Current chan semantics are complete wrt the problems they solve.
Atomics won't be complete if they provide weak synchronization guarantees, i.e. some
problems will be unsolvable.
Moreover, sequential consistency is the simplest to specify (C/C++ complexity comes from
exactly weak atomics -- possible reorderings, data dependencies, etc).
Moreover, sequential consistency is easy to understand and explain (remember the recent
discussion and confusion about chan-based semaphores, and the Effective Go example was
incorrect for several years).
@rsc

Contributor

rsc commented Mar 15, 2013

Comment 5:

I think we are using different meanings for the word weak. You have a very
precise meaning in mind. I do not. I just mean "let's not guarantee more
than we need to guarantee to make things useful for people." That's a
general goal, not a concrete proposal.
Dmitriy, if you have time, could you please make a proposal about what you
think the atomics should guarantee? A few sentences here in the issue is
fine.
Thanks.
Russ
@robpike

Contributor

robpike commented May 18, 2013

Comment 6:

Labels changed: added go1.2maybe, removed go1.1maybe.

@dvyukov

Member

dvyukov commented Jul 30, 2013

Comment 7:

Please clarify what you mean by "weak".
The problem with atomic operations is that they are lower level than chans. There are
lots of practically useful things that can be built using atomics.
So what do you want to specify:
1. Semantics for the majority of simpler use cases (say 95%), leaving the remaining
cases unspecified for now.
or 2. Semantics for all practically useful cases.
I would vote for 2, because sooner or later somebody will ask about the remaining 5%,
and the answer "you can rely on X guarantee, but we do not want to officially guarantee
it" does not look good. (btw, we use that remaining 5% in e.g. WaitGroup.)
And 2 is extremely strong, it's not weak in any possible sense of this word.
@rsc

Contributor

rsc commented Jul 31, 2013

Comment 8:

I mean 1, especially if the semantics can be kept to a minimum.
No, that is not the answer. The answer is "if it is not in the memory model
you must not depend on it." If that's still true once we have defined the
semantics, we should rewrite WaitGroup.
I asked you to write a few sentences sketching the semantics you want, but
you haven't done that.
@dvyukov

Member

dvyukov commented Aug 8, 2013

Comment 9:

The minimal semantics must be along the lines of:
"If an atomic operation A observes a side effect of an atomic operation B, then A
happens before B".
That's basically it.
Note that not only Load can observe the side effect. The return value from Add and
CompareAndSwap also allows one to infer which side effect we observe. Read-modify-write
operations (Add, CAS) first observe the side effect of a previous operation on the same
var, and then produce a new side effect. I imply that there is a total order Mv over all
atomic operations that mutate atomic variable V.
Such definition supports use cases like producer-consumer, object publication, etc.
However, such a definition does not support trickier synchronization patterns. And
frankly I would not want to rewrite any existing synchronization primitives due to this.
In the runtime we have a dozen such "unsupported" cases; I understand that those are
different atomics, but I just want to show that such use cases exist.
Semantics that cover all synchronization patterns would be along the lines of:
"There is a total order S over all atomic operations (that is consistent with
modification orders M of individual atomic variables, happen-before relations,
bla-bla-bla). An atomic operation A happens after all atomic operations that precede A
in S".
The trick here is that you usually cannot infer S (w/o any pre-existing happens-before
relations). The only (?) cases where you can infer useful information from S are:
1. When atomic operations A and B operate on the same var; this makes this definition a
superset of the first definition (S is consistent with all Mv).
2. When it's enough to know that either A happens-before B or vice versa (this is true
for any pair of atomic operations due to total order S).
@dvyukov

Member

dvyukov commented Aug 8, 2013

Comment 10:

>"If an atomic operation A observes a side effect of an atomic operation B, then A
happens before B"
s/A happens before B/B happens before A/
@rsc

Contributor

rsc commented Aug 13, 2013

Comment 11:

How about this:
"Package sync/atomic provides access to individual atomic operations. These atomic
operations never happen simultaneously. That is, for any two atomic operations e1 and
e2, either e1 happens before e2 or e2 happens before e1, even if e1 and e2 operate on
different memory locations."
Is that a good idea? Is it too strong? Is it more than we need, less than we need? Is it
going to be too hard to guarantee on systems like Alpha? I don't know. But at least it
is simple and I understand what it is saying. That's different than understanding all
the implications.
@dvyukov

Member

dvyukov commented Aug 14, 2013

Comment 12:

As per offline discussion, your "either e1 happens before e2 or e2 happens before e1"
definition looks good if data races are prohibited. Otherwise, racy accesses allow one
to infer weird relations, e.g. that a Load happens-before a Store:
// thread 1
x = 1
atomic.Load(&whatever)
y = 1
// thread 2
if y == 1 {
  atomic.Store(&whatever2, 1)
  println(x) // must print 1
}
This means that Load must execute a release memory barrier and Store an acquire memory
barrier. Most likely this will make implementations of atomic operations costlier.
@rsc

Contributor

rsc commented Aug 14, 2013

Comment 13:

Okay, maybe that's a bad definition then (I was just rephrasing yours, I
believe). It sounds like it is too strong. Are loads and stores the only
problem? Is this any better?
"""
Package sync/atomic provides access to individual atomic operations. For
any two atomic operations e1 and e2 operating on the same address:
  - if e1 is not a Load, e2 is a Load, and e2 observes the effect of e1, e1
happens before e2.
  - if e1 is a Store, e2 is not a Store, and e2 observes the effect of e1,
e1 happens before e2.
  - if neither operation is a Load or Store, either e1 happens before e2 or
e2 happens before e1.
"""
@dvyukov

Member

dvyukov commented Aug 15, 2013

Comment 14:

Why don't you want to give up on data races?
We probably can ensure atomicity of word accesses in gc w/o sacrificing important
optimizations. But:
1. We cannot ensure visibility guarantees, e.g. if a var is kept in a register in a
loop, and at this point racy accesses become almost useless.
2. Races are definitely not safe for maps and slices.
3. Most likely we cannot ensure any guarantees for races in gccgo (not sure what gcc
java does here).
4. I do not see any benefits of allowing data races. Currently there is a runtime cost
for calling atomic.Load instead of doing a plain load. But this must be addressed by
providing better atomic operations with compiler support (if that becomes the
bottleneck). Allowing data races instead to solve this looks completely wrong.
If we prohibit data races, it would make reasoning about atomic operations much, much
simpler.
If we prohibit data races, it would make reasoning about atomic operations much much
simpler.
@dvyukov

Member

dvyukov commented Aug 15, 2013

Comment 15:

There are 2 litmus tests for atomic operations:
1.
// goroutine 1
data = 42
atomic.Store(&ready, 1)
// goroutine 2
if atomic.Load(&ready) != 0 {
  if data != 42 {
    panic("broken")
  }
}
2.
// goroutine 1
atomic.Store(&X, 1)
r1 = atomic.Load(&Y)
// goroutine 2
atomic.Store(&Y, 1)
r2 = atomic.Load(&X)
// afterwards
if r1 == 0 && r2 == 0 {
  panic("broken")
}
As far as I see, your definition does not work for 2.
@dvyukov

Member

dvyukov commented Aug 15, 2013

Comment 16:

For 2 to work, atomic operations (including loads and stores) must form a total order.
Probably the following two-clause definition can do:
(1) If an atomic operation A observes an effect of an atomic operation B, then B happens
before A.
(2) All atomic operations form a total order that is consistent with happens-before
relations, modification orders of individual atomic variables and intra-goroutine order
of operations.
(2) implies that values returned by atomic operations and their side effects are
dictated by the total order. I am not sure whether it's obvious or not.
Note that (2) does not introduce new happens-before relations. Even if you somehow infer
that A precedes B in total order (e.g. by using racy memory accesses), this gives you
nothing.
@rsc

Contributor

rsc commented Aug 15, 2013

Comment 17:

You wrote "why don't you want to give up on data races?". That's not what I
am trying to do. I am trying to avoid making atomic.Load and atomic.Store
unnecessarily expensive.
@dvyukov

Member

dvyukov commented Aug 15, 2013

Comment 18:

It is more difficult to do in the presence of data races.
W/o data races the following looks OK:
"Package sync/atomic provides access to individual atomic operations. These atomic
operations never happen simultaneously. That is, for any two atomic operations e1 and
e2, either e1 happens before e2 or e2 happens before e1, even if e1 and e2 operate on
different memory locations."
@rsc

Contributor

rsc commented Aug 15, 2013

Comment 19:

Okay, then define how to prohibit data races.
@dvyukov

Member

dvyukov commented Aug 15, 2013

Comment 20:

It's simple. We need to replace:
--------------------------
To guarantee that a read r of a variable v observes a particular write w to v, ensure
that w is the only write r is allowed to observe. That is, r is guaranteed to observe w
if both of the following hold:
w happens before r.
Any other write to the shared variable v either happens before w or after r.
This pair of conditions is stronger than the first pair; it requires that there are no
other writes happening concurrently with w or r.
Within a single goroutine, there is no concurrency, so the two definitions are
equivalent: a read r observes the value written by the most recent write w to v. When
multiple goroutines access a shared variable v, they must use synchronization events to
establish happens-before conditions that ensure reads observe the desired writes.
The initialization of variable v with the zero value for v's type behaves as a write in
the memory model.
Reads and writes of values larger than a single machine word behave as multiple
machine-word-sized operations in an unspecified order.
--------------------------
with:
--------------------------
If there is more than one such w, the behavior is undefined.
The initialization of variable v with the zero value for v's type behaves as a write in
the memory model.
--------------------------
@robpike

Contributor

robpike commented Aug 20, 2013

Comment 21:

Is this converging?
@dvyukov

Member

dvyukov commented Aug 20, 2013

Comment 22:

Difficult to say. I would not hurry with this. Moving to 1.3.

Labels changed: added go1.3, removed go1.2maybe.

@robpike

Contributor

robpike commented Aug 20, 2013

Comment 23:

Labels changed: removed go1.3.

@rsc

Contributor

rsc commented Nov 27, 2013

Comment 24:

Labels changed: added go1.3maybe.

@rsc

Contributor

rsc commented Dec 4, 2013

Comment 25:

Labels changed: added release-none, removed go1.3maybe.

@rsc

Contributor

rsc commented Dec 4, 2013

Comment 26:

Labels changed: added repo-main.

@rsc rsc added accepted labels Dec 4, 2013

@quentinmit

Contributor

quentinmit commented Oct 10, 2016

@rsc, have you had time to study this?

@rsc

Contributor

rsc commented Oct 10, 2016

Yes, I spent a while on this last winter but didn't get a chance to write it up properly yet. The short version is that I'm fairly certain the rules will be that Go's atomics guarantee sequential consistency among the atomic variables (behave like C/C++'s seq_cst atomics), and that you shouldn't mix atomic and non-atomic accesses for a given memory word.

@carl-mastrangelo

Member

carl-mastrangelo commented Feb 9, 2017

@rsc Would it be possible to push this to 1.9 Early?

@rsc

Contributor

rsc commented Feb 10, 2017

Possibly, although it shouldn't matter given the statement above. If the statement above misses some important detail you need to know, please let me know what that detail is. :-)

@TheCount

TheCount commented Apr 4, 2017

If you're desperate enough to break out the atomics, you may as well be very explicit about ordering issues. So: would it be possible to leave the current sync/atomic functions out of ref/mem entirely (i.e., leave them per se at the equivalent of C relaxed atomics) and instead add explicit barrier "functions" with special semantics to sync/atomic?

This would pick up on @tombergan's explicitness suggestion, except that, rather than annotating the current atomics, explicit/verbose ordering barriers would be added. From the client's perspective, there would be a couple of new functions:

LoadBarrier(addrs ...interface{})
StoreBarrier(addrs ...interface{})

which don't let loads from and stores to, respectively, the specified addresses go past them.

The (B) and (C) examples of @dvyukov would then become

// goroutine 1 (producer)
data = 42
atomic.StoreBarrier(&data, &ready)
atomic.Store(&ready, 1)

// goroutine 2 (consumer)
if atomic.Load(&ready) != 0 {
    atomic.LoadBarrier(&data, &ready)
    assert(data == 42)
}

and

// goroutine 1
atomic.Store(&X, 1)
atomic.StoreBarrier(&X)
atomic.LoadBarrier(&Y)
r1 = atomic.Load(&Y)

// goroutine 2
atomic.Store(&Y, 1)
atomic.StoreBarrier(&Y)
atomic.LoadBarrier(&X)
r2 = atomic.Load(&X)

// afterwards
if r1 == 0 && r2 == 0 {
  panic("broken")
}

respectively.

From the implementor's point of view, it would have to be ensured that all goroutines sharing barriers for some memory location observe each other's side effects in an order consistent with the program order of the barriers, and that within each goroutine, no reordering across barriers occurs for the specified locations.

This approach should be strong enough to distinguish (in C terminology) between relaxed and non-relaxed orders, plus maybe a bit more (it's late and I'd have to think about it for some time longer), and would, at the cost of some verbosity, be more flexible than @rsc's approach of specifying all atomics to be the equivalent of C sequentially consistent.

Of course, the compiler support part of this approach might be rather involved.

@rsc

Contributor

rsc commented Apr 5, 2017

Sorry, but no.

@poy

Contributor

poy commented Apr 7, 2017

Is the memory model "static"? If any changes were made to it, would that not be considered a breaking change?

Put differently, if I write code using atomics, will the behavior differ between Go versions?

@ianlancetaylor

Contributor

ianlancetaylor commented Apr 7, 2017

@apoydence There is no intent for the behavior of the sync/atomic package to change between Go versions. However, since that behavior is not precisely defined--that is what this issue is about--I don't see how that helps you.

@Sinacam

Sinacam commented Apr 9, 2017

If guaranteeing sequential consistency proves to be too expensive, should weaker atomic operations be provided in a sub-repository, given that users of atomics are likely sensitive to performance?

Benchmark on a single producer single consumer queue. "Low-level constraints" correspond to the absence of sequential consistency, which breaks the second example of Comment 15.

Picture from CppCon 2014: Jeff Preshing "How Ubisoft Develops Games for Multicore - Before and After C++11"

@ianlancetaylor

Contributor

ianlancetaylor commented Apr 11, 2017

To be honest, I don't think that people who need to worry about the slow down inherent in using sequential consistency for atomic operations are going to choose Go as their programming language.

@TheCount

TheCount commented Apr 12, 2017

@ianlancetaylor This may be true today, but 10 years from now the CPU in your tablet will have fewer than 100 opcodes, 1024 hardware threads, and minimal implicit consistency. At that point, almost everyone will need to worry about slowdowns inherent in unnecessary syncing.

Fortunately, this does not mean everybody has to understand some complicated memory model. But there has to be some way to provide inter-thread communication efficient in this sense to the mainstream programmer (doesn't have to be via atomics, could also be via easier-to-understand user-defined weakenings of the channel concept, etc., etc.). As of today, this is not possible within Go. If it remains impossible in Go, at some point mainstream programmers are indeed not going to choose Go as their programming language.

@ianlancetaylor

Contributor

ianlancetaylor commented Apr 12, 2017

Of course I agree that we need efficient inter-goroutine communication. That is one of the driving forces behind the development of Go in the first place. But as even you suggest, that does not imply that the right approach is to have atomic operations with relaxed memory models in the standard library.

For example, it is already the case that channel operations do not require sequential consistency.

@TheCount

TheCount commented Apr 14, 2017

Agreed. It totally escaped me that the channel implementation does not need sequential consistency. In this light, giving sync/atomic seq-cst semantics makes perfect sense: they complement channels.

@bradfitz bradfitz modified the milestones: Go1.10, Go1.9 Jun 13, 2017

@mrwonko

mrwonko commented Jun 22, 2017

Given this issue, I don't think the implementations of sync.RWMutex, sync.WaitGroup and atomic.Value are well-defined as they heavily rely on atomic operations.

@rsc

Contributor

rsc commented Jun 22, 2017

The semantics of RWMutex are well-defined since they are documented in the Memory Model doc. The implementation's job is to provide those semantics, and we're confident that it does. We can't justify that assertion from the memory model alone, but that's OK: for reasoning about that specific implementation, we know how the underlying pieces work.

You're right that sync.WaitGroup is missing from the memory model doc, but it establishes all the happens-before edges one would expect.

@robaho

robaho commented Aug 13, 2018

It seems that without "happens before" semantics it is very hard if not impossible to write user defined lock-free structures that have any real value...
The Java memory model defines "happens before" for all of the strong atomic operations, and this allows for advanced lock-free structures/queues/etc.

@rsc

Contributor

rsc commented Aug 14, 2018

Quoting #5045 (comment) above:

Yes, I spent a while on this last winter but didn't get a chance to write it up properly yet. The short version is that I'm fairly certain the rules will be that Go's atomics guarantee sequential consistency among the atomic variables (behave like C/C++'s seq_cst atomics), and that you shouldn't mix atomic and non-atomic accesses for a given memory word.

It's been two years and I still haven't gotten a chance to write this up properly, but it's getting close to the top of my work queue. The two years of on-again, off-again discussions have essentially confirmed this approach. People already write code assuming basically these semantics - I can't imagine doing something weaker than this from the "what you can rely on" point of view. So I don't think anyone should be blocked on this.