Add a generic Atomic<T> type #1505

Closed
wants to merge 5 commits into master

Conversation

@Amanieu
Contributor

Amanieu commented Feb 21, 2016

@ticki

Contributor

ticki commented Feb 21, 2016

Dup of #1477

@Amanieu

Contributor

Amanieu commented Feb 21, 2016

It's not exactly a duplicate. The proposed approach is completely different. #1477 only talks about changing unstable intrinsics while this RFC proposes adding a new stable atomic API.

@ticki

Contributor

ticki commented Feb 21, 2016

Right, that's true. It might be worth mentioning #1477?

Nonetheless, I 👍 this, since this will make everything much easier.

@Valloric

Valloric commented Feb 22, 2016

For people coming from C++ (which is Rust's target audience), there needs to be an equivalent of std::atomic. So I welcome this RFC.

@alexcrichton

Member

alexcrichton commented Feb 22, 2016

Thanks for the RFC @Amanieu! I've also wanted this from time to time, although there's a number of questions I've had that I unfortunately haven't been able to answer in the past. I'm curious as to your thoughts on:

  • What does the compiler do on Atomic::<[u8; 4096]>::load()? This can either happen directly or through a very long chain of monomorphizations, and this RFC means that we'll be introducing a vector for monomorphization-related errors. I could, for example, expose an abstraction where just by linking a crate and instantiating some types I get a super obscure error. We've done our best in the compiler to avoid this so far.
  • I'm not certain that all platforms even support atomic operations with a size different from the pointer size. For example I wouldn't be confident in saying that all platforms support byte-level atomics (they might though!). This RFC indicates that we will guarantee this operation up to the pointer size, but do you know if this applies to all possible architectures we could port to?
  • From an API perspective, the differences in API surface area means that this won't be a very usable type if you only take Atomic<T>. If any consumer is taking Atomic<isize> then it seems not that different from taking AtomicIsize? Along these lines, I found it tough to motivate to myself having a generic type. You mention that this RFC is motivated by double-word-size atomics as well as known-32-bit ones, but is there a benefit to a <T> vs AtomicT? (note that not being generic would solve the first question I had).
  • Currently the standard library is principled in the sense of all exported types are those supported on all platforms. We have a system for platform-specific types (the std::os::* modules), but we don't currently have a system for architecture specific modules. I think that we'd want to try to avoid using platform-specific types without any sort of opt-in (like importing a std::os module). This would help audit a code base for what's using platform-specific operations. It sounds like, though, some of these types (or at least some monomorphizations) would be platform-specific, which may be unfortunate.

Curious to hear what you think about these!

@huonw

Member

huonw commented Feb 23, 2016

I feel like the bitwise comparison is a more significant issue than what's written here (which only mentions differing struct padding). In particular, Rust generally avoids thinking about object identity, instead focusing on actual values/semantic comparison. Thus, compare_exchange doing a bitwise/identity comparison could lead to surprising results since the API isn't matching broader Rust expectations. For example, the compare_exchange in the following example may fail:

let atomic = Atomic::new(&1);
let old = &1;
assert!(atomic.load() == old); // passes
atomic.compare_exchange(old, &2); // fails

(The orderings don't matter, and this may not compile as written due to lifetimes, and optimisations may use the same pointer for the two &1s, but one can easily come up with a small perturbation of that core example that will demonstrate the problem in practice. I'm using that simple form to focus on the underlying problem.)
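huonw's sketch can be made concretely runnable with today's stable AtomicPtr (a stand-in for the hypothetical Atomic<&i32>; the two boxed values guarantee distinct addresses, unlike the two &1s): `==` on the loaded values passes, while compare_exchange compares raw pointer bits and fails.

```rust
use std::sync::atomic::{AtomicPtr, Ordering};

fn main() {
    // Two separately boxed 1s: equal values, distinct addresses.
    let a = Box::into_raw(Box::new(1));
    let b = Box::into_raw(Box::new(1));
    let atomic = AtomicPtr::new(a);

    // Semantic comparison of the pointed-to values passes...
    assert_eq!(unsafe { *atomic.load(Ordering::SeqCst) }, unsafe { *b });
    // ...but the bitwise (address) comparison in compare_exchange fails.
    assert!(atomic
        .compare_exchange(b, b, Ordering::SeqCst, Ordering::SeqCst)
        .is_err());

    // Reclaim the boxes to avoid leaking.
    unsafe {
        drop(Box::from_raw(a));
        drop(Box::from_raw(b));
    }
}
```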

I idly wonder if it might be better to have T in Atomic<T> (or maybe just a bound on T for the compare_exchanges) bounded by an additional trait representing that bitwise comparisons are OK, which would cover all the existing atomic types, and more, but not types like references. This trait would basically refine PartialEq or Eq, like Copy refines Clone, and we could have struct BitwiseCmp<T>(T); to force bitwise comparisons. The trait bound means the above example won't compile directly. If one definitely wants that behaviour, one uses the BitwiseCmp struct hence being explicit that something unusual might be happening:

let atomic = Atomic::new(BitwiseCmp(&1));
let old = BitwiseCmp(&1); 
atomic.compare_exchange(old, BitwiseCmp(&2));

I'm on the fence about this:

  • on one hand, we haven't necessarily had the best experience with fancier stuff like this (e.g. Eq refining PartialEq can be confusing for newcomers, and many people aren't big fans of the recover-safety stuff); but
  • on the other, the more specialised nature of this situation likely means use of a wrapper like BitwiseCmp is reasonably rare, because one is either using types which naturally do bitwise comparisons (integers, raw pointers) or is using special types (like crossbeam's Shared) for which one can easily implement the trait.

Also, some thoughts I had while writing this:

  • mismatched comparison behaviours could lead to unsafety, i.e. one sees the compare_exchange succeed, and uses that to assume some property of a value (i.e. it's equal to something else) that is then used in unsafe code. However, this seems like it would generally occur in a positive sense (i.e. relying on equality, not inequality), and the reasoning works fine in these cases if bitwise equality implies semantic equality.
  • bitwise equality doesn't imply semantic equality in all cases, both in theory (one can implement PartialEq in any way, including rand()) and in practice (float NaNs with identical bit patterns don't compare equal). This means that a condition like if atomic.compare_exchange(old, new) == old could possibly never succeed if old somehow gets set to NaN (note that this can't/shouldn't be optimised down to use the direct success/fail indicators the instructions have, like the zero flag on x86, because that would change behaviour). However, I imagine using atomic floating point values is rare, getting a NaN atomic float is rarer, and, in terms of the worst consequences, relying on a float value for unsafe seems like it would be rather unusual.

(These are both addressed by having a bound more specific than Copy.)
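The NaN point is easy to check with plain (non-atomic) floats: a NaN is bitwise-identical to itself yet never compares equal, which is exactly the gap between a memcmp-style compare_exchange and `==`.

```rust
fn main() {
    let nan = f32::NAN;
    // Semantic comparison: NaN is never equal to anything, itself included.
    assert!(nan != nan);
    // Bitwise comparison (what a memcmp-style compare_exchange would do):
    assert_eq!(nan.to_bits(), nan.to_bits());
}
```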


Lastly, diving into pedant-land, some types have undefined values (bit patterns that should never occur, or are only valid sometimes) such as references and enums. These types need to be careful that it is only possible to read values that have been written to them, and certain uses of Relaxed memory orders don't prohibit out-of-thin-air reads that return a "random" value e.g. 1, 2 (this second one implies that C++ is outlawing OoTA reads, but I'm led to believe that the approach for doing this without inhibiting optimisations makes the standard incoherent/inconsistent). However, this doesn't occur in practice, and so can probably be ignored.

@huonw

Member

huonw commented Feb 23, 2016

I'm not certain that all platforms even support atomic operations with a size different from the pointer size. For example I wouldn't be confident in saying that all platforms support byte-level atomics (they might though!). This RFC indicates that we will guarantee this operation up to the pointer size, but do you know if this applies to all possible architectures we could port to?

Do we need to guarantee that Atomic<u8> is literally operating on bytes? I.e. can we pad it out to Atomic<usize>? I suppose this may make an operation like atomic.fetch_add(1) more complicated/expensive to handle the overflow case, and may be a foot-gun in that *const u8 can literally not be converted to a *const Atomic<u8> in place (although this seems like a bad idea anyway).
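The fetch_add complication mentioned above can be sketched with a hypothetical helper on today's AtomicUsize: if Atomic<u8> were padded out to a word, the addition could no longer rely on the instruction's natural 8-bit wrap-around and would have to mask the low byte by hand in a compare_exchange loop.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical helper: fetch_add on a u8 stored in the low byte of a
// word-sized atomic. The wrap at 8 bits must be done manually.
fn padded_u8_fetch_add(word: &AtomicUsize, n: u8) -> u8 {
    let mut cur = word.load(Ordering::Relaxed);
    loop {
        let old_byte = cur as u8;
        // Wrap at 8 bits, then splice the new byte back into the word.
        let new = (cur & !0xff) | old_byte.wrapping_add(n) as usize;
        match word.compare_exchange_weak(cur, new, Ordering::SeqCst, Ordering::Relaxed) {
            Ok(_) => return old_byte,
            Err(actual) => cur = actual,
        }
    }
}
```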

From an API perspective, the differences in API surface area means that this won't be a very usable type if you only take Atomic<T>. If any consumer is taking Atomic<isize> then it seems not that different from taking AtomicIsize? Along these lines, I found it tough to motivate to myself having a generic type. You mention that this RFC is motivated by double-word-size atomics as well as known-32-bit ones, but is there a benefit to a <T> vs AtomicT? (note that not being generic would solve the first question I had).

One might want to be working with some "random" type, e.g. a state-machine enum, or a wrapper for type safety. For instance, crossbeam's internal Atomic is a wrapper around AtomicPtrs, but all the raw pointers it stores are really Shareds, and hence being able to store an Atomic<Shared<...>> instead would reduce the amount of "unnecessary" unsafe that's just doing conversions between Shared-stored-as-raw-pointers and Shared itself (edit: actually, this isn't true, Shared has a lifetime that isn't linked to the data structure itself, so the situation is more complicated/Shared isn't the in-memory format.)

Also, for double-word atomics, maybe one has (*mut _, usize) or maybe (isize, usize) or some other combination of values, and we definitely won't want to have Atomic*s even for the 4 combinations of *mut and usize let alone including *const and isize.

@Amanieu

Contributor

Amanieu commented Feb 23, 2016

  • Platform support for non-pointer-sized atomics isn't an issue. The only number that matters is the largest size that a platform can perform a compare_exchange on. All smaller operations can be emulated using that. For example a byte atomic can be emulated by using a compare_exchange loop that only modifies a single byte of the value. This is actually how LLVM implements byte-level atomics on MIPS, which only supports word-sized atomics natively. Note that the out-of-bounds read is fine here because atomics are aligned and will never cross a page boundary. Therefore it is perfectly safe to have sizeof(Atomic<u8>) == 1 on MIPS.
  • Regarding monomorphization: the specific constraint that we are looking for is sizeof(T) <= target_max_atomic_size. I don't think there is any good way of integrating this with the traits system. Consider struct S<T>(T, T): how can you describe whether S<T> can fit inside an atomic? Unfortunately I don't know of any good solution for this. We could go with the C++ approach of automatically falling back to a lock-based implementation of atomics, which is implemented in compiler-rt, however we then need to add an API to determine whether a certain atomic type is 'lock-free'. This property is important for dealing with interrupts or signal handlers where the use of locks can result in a deadlock. Also note that the compiler-rt implementation will not work in an embedded/kernel environment.
  • I disagree that Atomic<T> isn't a very usable type. The load, store, swap and compare_exchange methods are available for all atomic types. All of the extra methods on the specialized bool and integer atomic types are available for convenience and/or performance. Any atomic operation can be emulated using a compare_exchange loop. The specialized implementations of Atomic<T> for bool and integer types mirror the implementation of std::atomic<T> in C++, which has specializations for those types.
  • I don't think bitwise comparisons are a big issue as long as we make this clear in the documentation. compare_exchange in C++ is defined as performing a comparison in terms of memcmp, even if T has an overloaded comparison operator. However I propose that we offer slightly stronger guarantees than C++ regarding padding bytes. This is because Rust has an implementation-defined struct layout, which means we could in theory be inserting padding behind the user's back, which would break comparisons.
  • Another note regarding the monomorphization and bitwise comparison issues: keep in mind that atomic types are generally used as a low-level implementation detail, which means that T will usually be defined within the same module, and almost certainly from the same crate. Creating an Atomic<T> with a T that comes from outside the crate will be an extremely rare occurrence.
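The "any atomic operation can be emulated using a compare_exchange loop" point can be illustrated with a sketch of fetch_max (an operation the 2016 atomic types had no dedicated method for), written against today's stable compare_exchange_weak API on AtomicUsize:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Emulate fetch_max with a compare_exchange loop: retry until the CAS
// lands, returning the previous value like the fetch_* methods do.
fn fetch_max(a: &AtomicUsize, val: usize) -> usize {
    let mut cur = a.load(Ordering::Relaxed);
    loop {
        match a.compare_exchange_weak(cur, cur.max(val), Ordering::SeqCst, Ordering::Relaxed) {
            Ok(prev) => return prev,
            Err(actual) => cur = actual, // someone raced us; retry with the fresh value
        }
    }
}
```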
@Amanieu

Contributor

Amanieu commented Feb 23, 2016

Hmm, actually it seems that we can't always guarantee that operations on values up to the pointer size are atomic. This is only an issue for platforms that don't support atomic operations natively, such as ARMv5.

On ARMv5, AtomicUsize::fetch_add is compiled to a call to __atomic_fetch_add_4, which is implemented in compiler-rt using spinlocks. This may violate the guarantees we have for AtomicUsize.

But my point here is, what should we do about architectures like ARMv5 that don't support native atomic operations at all? Should we disallow atomics entirely? Should we silently fall back to lock-based implementations?

@jmesmon

jmesmon commented Mar 6, 2016

Expanding a little on @Amanieu's notes about locks above:

In considering this, it's probably important to understand how C++ handles a similar question in its std::atomic<T>.

C++ considers atomic and lockfree 2 separate questions: all T can be used in std::atomic<T>, but not all std::atomic<T> are lockfree. One can determine if a given T will be lockfree in an std::atomic by asking std::atomic_is_lock_free.

This separation is done to allow users to write code that uses std::atomic without needing customization on a per-platform basis (platform customization is handled in libstdc++ rather than user code). Essentially, it allows taking advantage of platform features where available, but avoids needing to specialize code on a per platform basis (an admirable goal).

For a generic atomic to make sense in rust (much like in C++), we need it to work for all types. Platform based restrictions here would greatly hamper usability of the standard library.

It might be useful to have lockfree-only variants for use in problematic areas as noted by @Amanieu above (interrupt handlers, signal handlers) but I'm not sure rust is very well specified in those areas to begin with. To take the case of signal handlers, POSIX has a different (and incredibly more restrictive) set of rules there than it does for separate threads. At some point we'll need to handle them, but right now we lack any rules about them (AFAICT), so I don't think it would be a good idea to allow them to restrict Atomic<T>.

@alexcrichton

Member

alexcrichton commented Mar 7, 2016

@Amanieu

Ah sorry for being a little slow to respond, but thanks for the responses! I hadn't actually considered implementing Atomic<u8> in terms of word-sized reads/writes before, but that does seem to be a reasonable implementation to me. To me at least it seems like quite an important semantic of the implementation, however, so could you add that to the RFC?

I would personally not want the standard library to back atomics with mutexes, it's too sweet of a name to silently switch to something so heavyweight behind the scenes (easy to forget). I also agree that threading this through the trait system is probably not possible, but it's also a serious drawback in my mind in the sense that I would personally consider it a non-starter.

I guess it's true, yeah, that compare_exchange plus some trait bounds on T could indeed be useful in terms of implementing a general algorithm. Seems good to me!

@alexcrichton alexcrichton added the T-libs label Mar 7, 2016

@Amanieu

Contributor

Amanieu commented Mar 8, 2016

@alexcrichton I've added a note about this to the RFC. Note that this emulation is done transparently by LLVM, so there is nothing we need to do on the Rust side to support it.

I would personally not want the standard library to back atomics with mutexes, it's too sweet of a name to silently switch to something so heavyweight behind the scenes (easy to forget). I also agree that threading this through the trait system is probably not possible, but it's also a serious drawback in my mind in the sense that I would personally consider it a non-starter.

I only see two approaches for supporting generic atomic types:

  • Fail at monomorphization time for over-sized types, since the size constraint can't be represented using the trait system. Users will need to use #[cfg] to determine if the appropriate atomic type is supported.
  • Automatically fall back to a mutex-based implementation for over-sized types. Note that even in this situation we can still guarantee sizeof(Atomic<T>) == sizeof(T) by having the mutex in an external array (which is what compiler-rt currently does).

Other approaches (in this thread) only support implementing atomic operations for specific types, such as i32/i64/i128. This still doesn't solve the problem of some types only being available on certain architectures (or even different versions of the same architecture).
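The "mutex in an external array" layout Amanieu describes can be sketched roughly as follows (all names hypothetical, modeled loosely on compiler-rt's shared lock table): the lock is looked up by the value's address, so the atomic itself stays exactly sizeof(T) bytes.

```rust
use std::cell::UnsafeCell;
use std::sync::Mutex;

// Fixed global table of locks, keyed by address, so the atomic value
// itself carries no extra state.
const NUM_LOCKS: usize = 64;
static LOCKS: [Mutex<()>; NUM_LOCKS] = [const { Mutex::new(()) }; NUM_LOCKS];

fn lock_for(addr: usize) -> &'static Mutex<()> {
    &LOCKS[(addr / 16) % NUM_LOCKS]
}

// Hypothetical fallback type: size_of::<FallbackAtomic<T>>() == size_of::<T>().
struct FallbackAtomic<T> {
    value: UnsafeCell<T>,
}

// Safe because every access goes through the address-keyed lock.
unsafe impl<T: Copy + Send> Sync for FallbackAtomic<T> {}

impl<T: Copy> FallbackAtomic<T> {
    fn new(v: T) -> Self {
        FallbackAtomic { value: UnsafeCell::new(v) }
    }
    fn load(&self) -> T {
        let _guard = lock_for(self as *const _ as usize).lock().unwrap();
        unsafe { *self.value.get() }
    }
    fn store(&self, v: T) {
        let _guard = lock_for(self as *const _ as usize).lock().unwrap();
        unsafe { *self.value.get() = v }
    }
}
```

Since UnsafeCell<T> is repr(transparent), size_of::<FallbackAtomic<T>>() really does equal size_of::<T>(), which is the property Amanieu points out.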

@Valloric

Valloric commented Mar 8, 2016

I would personally not want the standard library to back atomics with mutexes, it's too sweet of a name to silently switch to something so heavyweight behind the scenes (easy to forget).

I can't agree with this. "Atomic" has a semantic meaning that doesn't mean "native CPU instructions for atomic operations." Those just happen to be used if available.

There's also a solid argument to be made that if C++'s std::atomic falls back to mutexes, that behavior is likely to be good enough for Rust too.

@soltanmm

soltanmm commented Mar 8, 2016

[...] it's too sweet of a name to silently switch to something so heavyweight behind the scenes (easy to forget).

@alexcrichton Wouldn't LockFree be an even sweeter name for lock-free data types? For just two characters more you get disambiguation and unblock yet other sweetness. 🍯 🍰 🍭

Really, though, I think the semantics for the identifiers (A|a)tomic have already been determined by the present zeitgeist in the vein of C++. Rust may not be C++, but here it's borrowing an identifier already in vogue.

@jmesmon

jmesmon commented Mar 8, 2016

too sweet of a name

Atomic describes how accesses behave, but not how they are implemented. It is the intersection between Mutex and LockFree, specialized based on the type requested and the system on which it is running.

I think it would be more of a waste to make it equivalent to LockFree when it could be so much more.

Edit: it's also good to note that we'll need something that can fall back, to allow people to write cross-platform code that takes advantage of platform capabilities (CAS for some sized types, etc.). If Atomic isn't it, we'll need to think of another name for it.

@Valloric

Valloric commented Mar 8, 2016

The JVM atomic types (AtomicBoolean, AtomicInteger etc.) also fall back to locking on some platforms:

The specifications of these methods enable implementations to employ efficient machine-level atomic instructions that are available on contemporary processors. However on some platforms, support may entail some form of internal locking. Thus the methods are not strictly guaranteed to be non-blocking

There's quite a bit of precedent behind making "atomic" mean "lock-free if possible, but falls back to locks if needed."

@alexcrichton

Member

alexcrichton commented Mar 8, 2016

I'm personally very weirded out if we ever pull in a huge amount of support for compiler-rt. That's a dependency pulled into all rust projects in existence so we need to control it and understand it as much as possible. I would very much prefer to write any support we need in Rust itself so we understand the pain we're going through to do that. To me having an array of global locks is distasteful enough that we shouldn't export it as a cross-platform interface in the standard library, for example.

It's basically just my own personal opinion that atomics should map to instructions, not mutexes. I haven't done much with atomics in C++, but I would personally avoid them in situations where they fell back to mutexes.

To me at least it seems like a cross-platform Atomic<T> story is bleak enough that we may not want to pursue it at this time. We may want to start thinking harder about how to have architecture specific types which would allow us to add a suite of types like this.

@Valloric

Valloric commented Mar 8, 2016

I'm personally very weirded out if we ever pull in a huge amount of support for compiler-rt. That's a dependency pulled into all rust projects in existence so we need to control it and understand it as much as possible.

Fair point, but using compiler-rt is entirely orthogonal to the question "should Rust have an Atomic<T> type that falls back to a mutex on some platforms."

Like you, I too would prefer a Rust solution.

It's basically just my own personal opinion that atomics should map to instructions, not mutexes. I haven't done much with atomics in C++, but I would personally avoid them in situations where they fell back to mutexes.

That seems like a fine personal/per-project decision, but not a great "global" one. Lots of people want a type that means "atomic access that's as fast as possible, but always correct" which is what std::atomic and the Java atomics provide.

It makes complete sense that some projects want something else, which would be direct use of a LockFree<T> type that never falls back to a mutex but isn't available everywhere (and which would limit the portability of their code, but that's their choice).

To me at least it seems like a cross-platform Atomic<T> story is bleak enough that we may not want to pursue it at this time.

This runs counter to the conclusions of the C++ standards committee and the engineers who work on the JVM. There is a direct need for a type like this (as evidenced by C++, Java and this RFC), and Rust should have it. There's a stark difference between "I don't need this" and "nobody should ever need this."

Like I mentioned above, having an Atomic<T> type that behaves the same way similar constructs work in Java & C++ does not preclude having a LockFree<T> type as well.

@comex

comex commented Mar 10, 2016

The global mutex thing feels very strongly like something that should live in a library to me. In fact, I think it could be done in regular Rust code today. Each atomic method would at "runtime" switch on the return value of size_of, transmute to the appropriate pointer type, and do the atomic operation on that; LLVM would remove the unnecessary branches. For unsupported sizes, you'd fall back to mutexes. (You’d have to bake in assumptions about what sizes of integer can be subject to atomic operations, but there are many ways Rust could convey that static information to code in the future.)

The case this approach falls down in is if you don't want the global mutexes and thus want an unsupported size to produce some sort of error at trans time. I guess you could hack it with asm!...
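comex's size_of-dispatch idea might look roughly like the sketch below (hypothetical function name, and using today's AtomicU8/AtomicU32, which did not exist when this comment was written): the match on size_of::<T>() is resolved at monomorphization time, so LLVM folds away the dead branches.

```rust
use std::mem::{size_of, transmute_copy};
use std::sync::atomic::{AtomicU8, AtomicU32, Ordering};

// Dispatch on the size of T and reinterpret the pointer as the matching
// atomic integer type. Caller must guarantee ptr is valid, suitably
// aligned, and that T has no invalid bit patterns.
unsafe fn generic_load<T: Copy>(ptr: *const T) -> T {
    match size_of::<T>() {
        1 => transmute_copy(&(*(ptr as *const AtomicU8)).load(Ordering::SeqCst)),
        4 => transmute_copy(&(*(ptr as *const AtomicU32)).load(Ordering::SeqCst)),
        // Unsupported sizes would take the global-mutex path instead.
        _ => unimplemented!("mutex fallback"),
    }
}
```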

@ticki

Contributor

ticki commented Mar 10, 2016

I feel like having Mutex as fallback introduces a hidden cost. It takes away the very purpose of atomic values.

@main--

main-- commented Mar 10, 2016

It takes away the very purpose of atomic values.

I strongly object. While every "atomic" operation that falls back to the mutex now involves multiple hardware operations, it's still very definitely atomic in the sense of the word as intermediate states are not observable from safe rust. Similarly, you could argue that x86's atomic instructions aren't really atomic because they may cause bus locking.

The alternative of erroring out may look like a speedbump against accidental mutexes, but I'd argue that this works poorly in practice: Nowadays, with x86 everywhere, Rust projects are rarely going to be developed on multiple platforms, so many developers probably wouldn't hit the error that their code is incompatible with e.g. ARM. This introduces a huge portability hazard: You'd be unable to use crates unless the author specifically thought of your architecture (or you're lucky, of course). It essentially blows up the ecosystem into many tiny fragments. Falling back to a mutex here makes things slower, but it works. A potential performance footgun is an easy sacrifice to make when it allows us to maintain something as big as cross-platform compatibility - especially as I'd expect atomic variables to mostly be used by experienced programmers anyways who understand the pitfalls.

Now when a crate's developer does care about platforms where the fallback would occur, they can just branch on Atomic<Foo>::is_hardware_accelerated() (or whatever) to do something faster instead and LLVM would optimize the other path away. Now that I think of it, the proposed cfg conditionals are probably better as they allow customized data structures as well - although a classic if-else like here always feels a little awkward to me with cfg (because you're forced to repeat the condition).

tl;dr Mutex fallback is unfortunate but I consider it unavoidable.
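The cfg-based branching @main-- describes might look like this, using the target_has_atomic cfg that Amanieu sketches later in the thread (bump is a hypothetical name; on a target with native 64-bit atomics only the first definition is compiled, and note the whole data structure changes, not just the function body):

```rust
// Lock-free path, compiled only where native 64-bit atomics exist.
#[cfg(target_has_atomic = "64")]
fn bump(counter: &std::sync::atomic::AtomicU64) -> u64 {
    counter.fetch_add(1, std::sync::atomic::Ordering::SeqCst)
}

// Mutex-based path for targets without native 64-bit atomics.
#[cfg(not(target_has_atomic = "64"))]
fn bump(counter: &std::sync::Mutex<u64>) -> u64 {
    let mut guard = counter.lock().unwrap();
    let old = *guard;
    *guard += 1;
    old
}
```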

@glaebhoerl

Contributor

glaebhoerl commented Mar 10, 2016

Is there any reason we couldn't put the Mutex inside the Atomic in the cases where the mutex fallback is required, instead of the global mutex array thing? That seems like the normal and obvious thing to do. I assume there must be some reason why the existing implementations had to go with the less-obvious solution instead but I'm not aware of what it is.

@Amanieu

Contributor

Amanieu commented Mar 10, 2016

The downside of including a Mutex inside the atomic is that you won't be able to transmute &Atomic<T> to &T, which might be necessary for passing it to the futex system call for example.

Another reason is that, in C++, an atomic can sometimes be lock-free if it depends on certain runtime features from the kernel (for example on ARMv5 the kernel provides a callable cmpxchg function at a certain address in memory). In that case you can't know in advance whether the atomic is going to be lock-free or not. libgcc takes advantage of this, but compiler-rt doesn't and always falls back to mutexes.

@alexcrichton

Member

alexcrichton commented Mar 11, 2016

@Valloric

There is a direct need for a type like this (as evidenced by C++, Java and this RFC), and Rust should have it. There's a stark difference between "I don't need this" and "nobody should ever need this."

Ah sorry, I think I'm not communicating my preference here clearly. The precedent of other languages doesn't mean much to me in terms of what the API of the standard library should look like, but rather what it should have. I agree that concrete types like AtomicU{8,16,32,64} seem quite useful, but to me they belong in an architecture-specific module rather than std::sync::atomic.

I'm all for providing primitives, but having a heavyweight runtime implementation with global mutexes or out of bounds reads to me sounds like a library implementation detail, not something exported as an atomic primitive (as those implementations indeed aren't primitive).


@main--

It essentially blows up the ecosystem into many tiny fragments

To me this is true, but no more so than the Unix/Windows support in the standard library. We do a "best effort" to provide cross platform APIs, but whenever you need something specific we provide the ability, it just needs to be forcibly opted into.

For example maybe these types are only available in std::arch::x86 or std::arch::x86_64. A library could then provide atomics which fall back to mutexes (with all the crazy logic for doing the right thing on each architecture), but in my opinion the standard library shouldn't be doing it right now.

@jmesmon

jmesmon commented Mar 11, 2016

global mutexes or out of bounds reads to me sounds like a library implementation detail

That is a library impl detail. A user of Atomic wants atomic operations as fast as the platform can provide them. Sometimes mutexes (or some other lock-like item) will be the fastest way.

I'm all for providing primitives, but [...]

I think the problem is thinking about this as a guaranteed primitive. It isn't (or at least, shouldn't need to be). It is a generic mechanism that may take advantage of those primitives (where available) in a convenient and portable way.

A library could then provide atomics which fall back to mutexes

Potentially, though this is made less attractive by the type system not knowing the size of a type (which probably forces a switch over the supported sizes).

And we could say "should be in a library" about nearly everything in rust. I'd expect that std should at least attempt to provide a platform abstraction (it does for Mutex and friends, after all). What makes Atomic<T> not appropriate where Mutex<T> is?

@reem

reem commented Mar 11, 2016

I agree strongly with @alexcrichton here - a heavyweight fallback seems distinctly unrusty to me. It masks the performance implications of atomic operations, doesn't map clearly to a system primitive, and is likely to have odd differences in behavior.

I also feel very nervous about adding the possibility for a compile failure at monomorphization time! Checked generics (generics that are checked at the definition site, not at the usage site as with C++ templates) are one of rust's best features in my opinion. Violating this property is a very slippery slope to go down, and we've resisted adding other more useful features, like type-level numbers (which would allow implementing this whole scheme outside of std), without a solution to this problem.

I also agree with @alexcrichton that we should just provide zero-cost architecture specific atomics in architecture specific modules and allow library authors to play with heavyweight fallbacks.

EDIT: Just to add an example of where this sort of magic would be bad for users, imagine you are writing a signal handler that accesses a global atomic. If we were to provide a mutex fallback on some platforms, this code could deadlock on those architectures, which is extremely unexpected behavior for an atomic primitive.

EDIT2: I would also like to echo @alexcrichton again in saying that the availability of this type with similar semantics in other languages is not a convincing argument on its own - there's plenty of stuff in both Java and C++ std libraries that are not at all appropriate for rust std.

@Amanieu

Contributor

Amanieu commented Mar 11, 2016

@alexcrichton so I take it you would prefer something similar to this:

#[cfg(target_has_atomic = "8")]
struct AtomicI8 {}
#[cfg(target_has_atomic = "8")]
struct AtomicU8 {}
#[cfg(target_has_atomic = "16")]
struct AtomicI16 {}
#[cfg(target_has_atomic = "16")]
struct AtomicU16 {}
#[cfg(target_has_atomic = "32")]
struct AtomicI32 {}
#[cfg(target_has_atomic = "32")]
struct AtomicU32 {}
#[cfg(target_has_atomic = "64")]
struct AtomicI64 {}
#[cfg(target_has_atomic = "64")]
struct AtomicU64 {}
#[cfg(target_has_atomic = "128")]
struct AtomicI128 {}
#[cfg(target_has_atomic = "128")]
struct AtomicU128 {}

Note that the existing atomic types are already dependent on the target architecture supporting atomic operations. It just so happens that all currently supported targets (tier 1, 2 and 3) support atomic operations natively.

@jmesmon

jmesmon commented Mar 11, 2016

@reem

which is extremely unexpected behavior for an atomic primitive.

Yes, this is why LockFree was brought up way back in the comment thread. To have proper support for signal handlers, we'd want something restricted to LockFree operations. That doesn't mean we can't also have a completely generic Atomic.

availability of this type with similar semantics in other languages is not a convincing argument on its own

Which is why there have been other arguments. On the other side of this, we should be careful not to ignore why other languages chose to do things in some manner. We've got a lot of resources we can take advantage of by looking at other language designs.

@main--

main-- commented Mar 11, 2016

@alexcrichton Note that my worries are specifically about a generic atomic type. With the approach you have in mind where concrete types are available under architecture-specific modules, I do agree that the issue is much less severe as programmers would then explicitly opt in to cross-platform incompatibility.

@alexcrichton

Member

alexcrichton commented Mar 11, 2016

@jmesmon

And we could say "should be in a library" about nearly everything in rust.

Certainly! Here we're talking about the standard library, however, rather than just any old library. The standard library, in my opinion, is where you go for types that have clear and concise implementations and semantics. For example a Mutex<T> has an obvious implementation, and it's quite clear what the semantics are going to be.

An Atomic<T> backed by a mutex has very different semantics than if it's an atomic operation in terms of instructions. Signal handlers are a good example here; you could imagine others (concerns with green threading, perhaps) where a mutex is unfortunately inappropriate.

I certainly agree that there are many users who "just want the fastest thing" or "just want an AtomicU64", but this is what I think crates.io is for. The library would probably have something about locks in the name and be super clearly documented as having fallback implementations on various architectures.


@Amanieu

so I take it you would prefer something similar to this:

Along those lines, yeah! I'd specifically be thinking of having architecture-specific modules (similar to std::os::{macos, linux, windows, ...}) to hold all the concrete types:

std::arch::x86::{AtomicU8, ..., AtomicU32};
std::arch::x86_64::{AtomicU8, ..., AtomicU64};
std::arch::arm::{AtomicU32};

And we may be able to extend that so we have a std::os::unix-like module which encompasses a bunch of platforms:

std::arch::pointer_width_32::AtomicU32;
std::arch::pointer_width_64::AtomicU64;

Note that the existing atomic types are already dependent on the target architecture supporting atomic operations. It just so happens that all currently supported targets (tier 1, 2 and 3) all support atomic operations natively.

This is certainly quite interesting to me! This is part of where I feel like our story on other architectures is a little lacking. I feel like we don't have a great idea what's going on here. Do you have some specifics we could dig into though? On one hand the ship has sailed here (the Atomic{U,I}size types are stable), but it's important to keep in mind!

Some questions I might have are:

  • What platforms do we support that don't have atomic operations at all? Is it just ARMv5?
  • Do platforms we support not have atomic pointer-sized operations?
  • What platforms fall back to compiler-rt for support of atomic operations?

I'm getting more and more uncomfortable about what operations are silently translated to calls into compiler-rt. We basically have no idea when that library is used or what we need from it. Not to mention it's a huge PITA to build almost everywhere...

@comex

comex commented Mar 11, 2016

Alternately, I don't see why it couldn't be implemented through the trait system:

Have magic traits trait Sizeof<const size: usize> and ditto Alignof. This could either be automatically implemented for all types (à la auto traits) or, preferably to me, require some explicit attribute to enable. Either way, for generic types it wouldn't require explicitly writing out under which conditions you have which size, since that's platform-dependent and hard to write precisely; the compiler would just implement the traits based on the real calculated size.

In other words, you're basically making the exact contents of the type a potentially breaking API detail, modulo small changes that will never change the machine layout, like changing one pointer type to another. This is okay, at least with an explicit attribute, because most of the small types you'd expect atomics to work on are never going to change layout, e.g. all smart pointers are always going to be pointer-sized. You'd have to be careful with the extent to which Option layout optimizations and such are stabilized, though.

(That said, atomics are not the only use case for explicit sizing - with specialization it may occasionally be beneficial to have optimized container layouts for specific sizes of inner type, but that too would usually be relevant for types at most, say, twice pointer-sized.)

Implementation-wise, this would require doing type memory layout much earlier than it's currently done, which probably makes it difficult, but I think that should be done anyway, for the sake of const fn and such.

@reem

reem commented Mar 11, 2016

@comex this doesn't solve the problem of types being generic over Sizeof breaking at monomorphization time vs. declaration time.

@comex

comex commented Mar 12, 2016

It does, for some definition of "declaration time". The Atomic<T> type would have a bound on Sizeof*, so specifying an inappropriate type would be an error in the trait system; but since size calculation, which is now part of monomorphization, would have to be done earlier, you could say it's just moving the problem. However, it's distinct in two ways:

  • Since Sizeof would be a trait bound, you would have to explicitly specify it in generic structs/enums containing the Atomic. You couldn't just have an opaque library type Foo<T> with no bounds which will sneakily fail to compile if T is too big, as with monomorphization-time checks today.
  • In the version I preferred where Sizeof is only derived for types opting in via attribute, library authors can be confident that changing the layout of an exported type won't break clients trying to use atomics on it, because it won't be allowed unless they added the attribute.

*actually, it might have to be something like SizeAtMost. I was thinking of being able to write, in some future Rust with generic integers, something like where T: Sizeof<size>, size <= 8. But I guess being able to put expressions in bounds like that is iffy, and even if you could, without Haskell-style existentially quantified types, you'd have to make size an inferred generic parameter on Atomic, which is ugly... Anyway, those are details.

Edit: Actually, that basically makes no sense at all. So SizeAtMost it is.

@Amanieu

Contributor

Amanieu commented Mar 12, 2016

@alexcrichton

This is certainly quite interesting to me! This is part of where I feel like our story on other architectures is a little lacking. I feel like we don't have a great idea what's going on here. Do you have some specifics we could dig into though?

Here is a quick survey of atomic support for all existing Clang/LLVM targets:
Key: 0 = doesn't support any atomics, X = supports all atomic operations with <= X bits

  • x86: i386(0), i486(32), i586+(64)
  • x86_64: 128 if cmpxchg16b is supported, 64 otherwise
  • ARM: ARMv5 & older(0), ARMv6(32, ARM mode only), ARMv6K(64, ARM mode only), ARMv6T2 (64, ARM & Thumb modes)
  • ARM-M (microcontroller profile, Thumb only): ARMv6-M(0), ARMv7-M(32)
  • AArch64: 128
  • SPARC: SPARCv8(0), SPARCv9(64)
  • PowerPC: PPC32(32), PPC64(64)
  • SystemZ: 64
  • MIPS: MIPS32(32), MIPS64(64)
  • LE64: 64
  • WebAssembly: 32-bit(32), 64-bit(64)
  • NVPTX, AMDGPU, Hexagon, MSP430, TCE, PNaCl, SPIR, XCore, AVR: no atomic support

I'd specifically be thinking of having architecture-specific modules (similar to std::os::{macos, linux, windows, ...} to have all the concrete types:

Note from the list above that there is quite a bit of variation even within a single architecture, so it doesn't really make sense to have a separate set of atomic operations per architecture.

On one hand the ship has sailed here (the Atomic{U,I}size types are stable), but it's important to keep in mind!

Not necessarily. We could argue that Atomic{U,I}size are stable for all currently supported architectures (which all support atomics), but these types may not be available for other (mostly embedded) architectures. We can state that these architectures are only usable with libcore and that libstd requires support for pointer-sized atomic operations.

This is similar to the proposal for float-free libcore which involves removing f32 and f64 from libcore, and only supporting such a configuration with no_std.

I'm all for providing primitives, but having a heavyweight runtime implementation with global mutexes or out of bounds reads to me sounds like a library implementation detail, not something exported as an atomic primitive (as those implementations indeed aren't primitive).

While I agree that a mutex-based implementation may be an issue, the code that uses out-of-bounds reads to simulate atomic sizes smaller than those natively supported is implemented directly in the LLVM backends. LLVM will generate the necessary masking and compare-exchange loop inline, without the need to call into compiler-rt.

Do platforms we support not have atomic pointer-sized operations?

All of the platforms we support have pointer-sized atomic operations. The oldest ARM platform that is supported is arm-unknown-linux-gnueabi, which uses ARMv6 (in ARM mode) and thus supports pointer-sized atomic operations. The lack of atomics is only an issue if a custom target spec is used, which is not an "officially" supported target.

What platforms fall back to compiler-rt for support of atomic operations?

The decision to fall back to compiler-rt is done in the LLVM backend based on whether the target architecture has native support for atomics of the required size. If a native operation is not available then it will automatically fall back to a call to compiler-rt.

@alexcrichton

Member

alexcrichton commented Mar 14, 2016

@Amanieu thanks for the impressive investigation! I think your conclusion about avoiding architecture-specific modules is correct; it seems like that wouldn't benefit much here, as there's lots of variance within the architecture itself.

I wonder, however, if we could possibly use the same principle still, though? Perhaps:

std::sync::atomic::{AtomicUsize, AtomicIsize};
std::sync::atomic::b8::AtomicU8;
std::sync::atomic::b16::AtomicU16;
std::sync::atomic::b32::AtomicU32;
std::sync::atomic::b64::AtomicU64;
std::sync::atomic::b128::AtomicU128;

These modules could all be conditionally defined based on the target_has_atomic #[cfg] proposed in this RFC, and they could correctly take into account various architecture differences like ARMv5 and 128 bit atomics on x86_64.

At that point it boils down to when we define target_has_atomic, and I'd personally be comfortable defining it for any operation that doesn't link to compiler-rt (inline compare-exchange loops seem fine to me).

We in theory may not support platforms that don't have any atomics at all quite that well, but it's sorta the same story for floats? Both the core/std variants are stable, so we unfortunately can't move either easily :(

@Amanieu

Contributor

Amanieu commented Mar 14, 2016

Actually I would prefer if we also gated the Atomic{I,U}size and AtomicPtr types behind #[cfg(target_has_atomic = "ptr")]. This would not be a breaking change since all currently supported platforms would still have these types available. This would only affect custom targets, which currently fail to link due to missing compiler-rt symbols anyways.

To keep things consistent, the new AtomicU* types should also be placed directly in std::sync::atomic, but the types themselves would be gated using #[cfg]. There isn't much point in using a module containing a single type when we can just gate the type itself.

I'll write up a new RFC for this soon.

@Amanieu

Contributor

Amanieu commented Mar 14, 2016

@alexcrichton I've created a new RFC: #1543

@alexcrichton

Member

alexcrichton commented Mar 15, 2016

Thanks @Amanieu! I'll continue discussion of this proposal over there.

@alexcrichton alexcrichton self-assigned this Mar 16, 2016

@jFransham

jFransham commented Mar 18, 2016

With regard to monomorphisation, what's wrong with

unsafe trait HardwareSupportedAtomic {}

#[cfg(target_has_atomic = "8")]
unsafe impl HardwareSupportedAtomic for u8 {}
#[cfg(target_has_atomic = "8")]
unsafe impl HardwareSupportedAtomic for i8 {}

// UnsafeCell holds the value; a field mentioning T is needed anyway (E0392).
struct Atomic<T: HardwareSupportedAtomic>(core::cell::UnsafeCell<T>);

It's not as general as a sizeof-based solution, but it at least allows crates that need it to opt in on a case-by-case basis.

@alexcrichton

Member

alexcrichton commented Mar 31, 2016

🔔 This RFC is now entering its week-long final comment period 🔔


My own personal opinion is that I'd rather take the strategy proposed in #1543 over this one, but the atomic-rs library seems quite promising to fill this gap in the meantime!

@jmesmon

jmesmon commented Mar 31, 2016

Yep, without the type system growing an understanding of the size of a type, I'm not sure we can do better than what atomic-rs provides now. Should probably just keep in mind that this is a use case where a const size_of and type-level integers would be useful.

@aturon

Member

aturon commented Apr 6, 2016

I'm very late to this party, but I want to first of all thank @Amanieu for the excellent work on this RFC as well as the alternative.

This RFC helped me better understand the motivation behind Atomic<T> -- I always thought it strange to want to write generic code over the T, but that's actually not the main point. The motivation seems more to be that you can use your own distinct types that happen to be, e.g., word or double-word sized. In other words, you are working with a single concrete type, like Atomic<MyType>, where perhaps MyType is a newtyped usize that rules out certain values, or whatever.

(Of course, Atomic<usize> is also more elegant than AtomicUsize, but that's a secondary concern at best.)

That said, due to the various questions about how to handle fallback (or whether to generate errors at trans time), amongst other things, I feel like it's best to provide this kind of support outside of std for the time being, and focus instead on bolstering the basic integer atomics we provide for now. We can always add Atomic<T> later, and no matter when we add it, deprecations would likely be involved.

@aturon

Member

aturon commented Apr 6, 2016

@alexcrichton

Member

alexcrichton commented Apr 14, 2016

The libs team discussed this RFC during triage yesterday and the decision was to close this in favor of merging #1543 instead, thanks regardless for the RFC @Amanieu!
