Don't make atomic loads and stores volatile #30962
Conversation
rust-highfive assigned alexcrichton on Jan 16, 2016
(rust_highfive has picked a reviewer for you, use r? to override)
I do not think we can make this change anymore. It could (and probably will) silently, and horribly, break certain programs (e.g. those that rely on atomics as a stable way to do volatile reads/stores). We should add new intrinsics and functions for non-volatile operations instead. (I think we also had an issue/RFC somewhere about adding a volatile/non-volatile distinction for atomic operations.)
Does any code actually rely on this? The volatile semantics of atomic types were never documented anywhere, and any code that would need them for MMIO is using nightly, which has volatile read/write intrinsics. Note that the volatile semantics only applied to atomic load and store, not to the other atomic operations. Making atomic types have volatile semantics on some operations is a terrible default behavior, especially since it hurts compiler optimizations.
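For context, a minimal sketch of the MMIO pattern being referred to, written with today's stable `ptr::read_volatile`/`ptr::write_volatile` (the register address and function names are made up for illustration; at the time of this discussion the equivalent operations were the nightly `volatile_load`/`volatile_store` intrinsics):

```rust
use core::ptr;

// Hypothetical MMIO register address, purely for illustration.
const UART_DATA: *mut u8 = 0x1000_0000 as *mut u8;

// Volatile accesses are the right tool for MMIO: the compiler must emit
// exactly one hardware access per call and may not merge or eliminate them.
unsafe fn mmio_write_byte(b: u8) {
    ptr::write_volatile(UART_DATA, b);
}

unsafe fn mmio_read_byte() -> u8 {
    ptr::read_volatile(UART_DATA)
}
```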
The only way to really find out is to break a stable release and wait for complaints.

I've seen atomic store/load recommended, I think by @huonw (?), as a stable replacement for volatile loads and stores at least once.

You needn't use volatile CAS/RMW at all if all you're doing is writing and reading bytes from a hardware port; one might rely on the volatility behaviour rather than the atomicity behaviour and ignore all of the more complex atomic operations for that use case.

I do not disagree; I'm simply pointing out the potential hazard of making these ops non-volatile, now that their volatility, albeit undocumented, is stable.
Any code relying on atomics to be volatile is very misguided and is likely broken in other ways too. It is pretty well known that volatile is orthogonal to atomic operations.
Nope, although I did recommend against it in the thread on reddit, so you may've just mixed up the user names. :) On that note, this change seems like the right choice to me. cc @rust-lang/libs and especially @aturon.
This seems reasonable to me, but I have to admit I don't fully grasp the subtleties around volatile accesses in LLVM. I think @rust-lang/compiler may be interested as well.
Definition of LLVM's volatile, for reference: http://llvm.org/docs/LangRef.html#volatile-memory-accesses
I have a feeling this is why Arc couldn't be trivially pwned by a mem::forget loop. A Sufficiently Smart compiler should be able to optimize an Arc mem::forget loop into a single atomic add (I think), but it never did.
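A sketch of the kind of loop being described (my reconstruction, not code from the thread): each iteration bumps the reference count with an atomic add and then leaks the clone, so in principle a sufficiently smart compiler could fold the whole loop into one large `fetch_add`.

```rust
use std::mem;
use std::sync::Arc;

// Each clone() performs an atomic increment of the strong count; mem::forget
// skips the matching decrement, so the count only ever grows. If these
// increments are ordinary (non-volatile) atomics, a compiler is in principle
// allowed to merge them into a single `fetch_add(n)`, although in practice
// compilers rarely perform this optimization on atomics.
fn leak_refcounts(x: &Arc<u32>, n: usize) {
    for _ in 0..n {
        mem::forget(Arc::clone(x));
    }
}
```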
To me this seems "correct" in a vacuum (i.e. atomics should not be volatile by default), and along those lines I think we should pursue what we need to do to make this change. Unfortunately crater would not be great at evaluating a change such as this, but this is also why we have a nightly and beta period (i.e. the change will bake for ~12 weeks). I would personally be in favor of merging shortly after we branch the next beta to give this maximal time to bake, and we should be ready to back it out if necessary, but like @Amanieu I would expect very little breakage.
An alternative, if it does cause breakage, would be to simultaneously add a stable way to perform volatile loads and stores.
Note that sequentially consistent atomic operations provide guarantees that are a strict superset of volatile ones, so there should be no functional change.
That's not actually true. For example, a compiler is allowed to merge two consecutive atomic additions into a single one (see the sketch below), but only if the operations are not volatile. Volatile semantics are only really useful for memory-mapped I/O, where the compiler must preserve the exact sequence of loads and stores in the program without merging or eliminating them.
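The code blocks from this comment did not survive in this transcript; the following is a reconstruction of the kind of example being discussed, based on the C++11 equivalent quoted later in the thread:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Two back-to-back atomic adds. Because they are not volatile, an optimizer
// is permitted (though not required) to merge them into a single
// `fetch_add(4, Ordering::SeqCst)`.
fn double_add(x: &AtomicUsize) {
    x.fetch_add(2, Ordering::SeqCst);
    x.fetch_add(2, Ordering::SeqCst);
}
```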
Hm, you're right, "no functional change" was too strong of an assertion. Still, I think a look at downstream code would be in order.
This is a tricky case. Given that the docs did not promise volatile semantics, though, I think we can treat this as a bugfix.
In practice, I expect there to be no functional change at the moment: compilers are generally pretty hands-off with atomics. For example, rustc doesn't optimise @Amanieu's example down to a single instruction, and neither do any of gcc, clang or icc (for the C++11 equivalent below); it's always two separate atomic operations:

```cpp
#include <atomic>

void foo(std::atomic<long>& x) {
    x.fetch_add(2, std::memory_order_seq_cst);
    x.fetch_add(2, std::memory_order_seq_cst);
}
```

This is true even after pushing the ordering all the way down to relaxed. And, e.g., LLVM's performance tips make a similar point about how conservatively atomics are currently optimized.

(That's not to say there aren't examples that are optimised, but I haven't seen anything non-trivial.)
brson added the relnotes label on Jan 20, 2016
I also think we can do this as a bugfix, but we should advertise it.
Given @huonw's comment, it seems like there's no reason not to make the change.
sfackler added I-nominated, T-libs, and T-compiler labels on Jan 21, 2016
Nominating for discussion by the libs and lang teams. Sounds like people are on board with this, but it's probably a good idea to make sure everyone is aware of what's going on.
sfackler added the T-lang label and removed the T-compiler label on Jan 21, 2016
Didn't the libs team already agree that this is fine?
aturon removed the I-nominated label on Jan 30, 2016
The lang team is on board with this change. The main remaining question is whether to try to offer this functionality by some other means before landing the change (as suggested by @alexcrichton).
briansmith commented on Feb 2, 2016

FWIW, I agree with the above.

I don't think it is worth the effort. It is early days for Rust. People need to write code based on what is guaranteed by the language spec/docs, not based on what the compiler accidentally does. This change isn't going to break anybody who wrote correct code in the first place.
The libs team discussed this during triage today, and the conclusion was that everyone seems on board; the only question is whether we need to also add a stable way to perform volatile loads and stores. Thanks @Amanieu!
Amanieu commented Jan 16, 2016
Rust currently emits atomic loads and stores with the LLVM volatile qualifier. This is unnecessary and prevents LLVM from performing optimizations on these atomic operations.
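For illustration (my sketch, not part of the original PR description): the code affected is any ordinary atomic load or store, which under the old behaviour lowered to a volatile-qualified LLVM atomic instruction.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

static FLAG: AtomicUsize = AtomicUsize::new(0);

// Before this change, both operations lowered to volatile-qualified LLVM
// atomics (e.g. `store atomic volatile ... seq_cst`), which LLVM must never
// merge or eliminate. After the change they lower to plain atomic
// instructions, leaving LLVM free to apply the optimizations the memory
// model allows.
fn example() -> usize {
    FLAG.store(1, Ordering::SeqCst);
    FLAG.load(Ordering::SeqCst)
}
```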