[DO NOT MERGE] Atomic experiments #27229
Conversation
@swift-ci test
The latest commit replaces ordering arguments with ordering views. Instead of

```swift
let counter: UnsafeAtomicUInt = …

// TIRED
counter.increment(by: 23, ordering: .releasing)
```

we now have:

```swift
// WIRED
counter.releasing.increment(by: 23)
```

This gets rid of switch statements, and reads pretty well, but it considerably increases API surface area.
While ordering views look promising, there are some major problems with them.
Things are back in motion. The last commit makes four major changes, with an eye towards getting this ready to be pitched.

First, I returned to representing memory orderings with regular arguments. However, we now have three separate ordering types based on the nature of the operation:

```swift
struct AtomicLoadOrdering {
  static var relaxed: AtomicLoadOrdering { get }
  static var acquiring: AtomicLoadOrdering { get }
}

struct AtomicStoreOrdering {
  static var relaxed: AtomicStoreOrdering { get }
  static var releasing: AtomicStoreOrdering { get }
}

struct AtomicUpdateOrdering {
  static var relaxed: AtomicUpdateOrdering { get }
  static var acquiring: AtomicUpdateOrdering { get }
  static var releasing: AtomicUpdateOrdering { get }
  static var acquiringAndReleasing: AtomicUpdateOrdering { get }
}
```

This lets us enforce that each operation can only be called with orderings that it can support, which was one of the two major advantages of ordering views. (The other advantage was not relying on the optimizer's constant folding to get rid of the switch statements in the implementation. Unfortunately, that's back now.)

Second, the four high-level atomic structures now include not only an unsafe pointer to the memory location that stores their value, but also an anchor reference to an object that keeps it alive. This makes them entirely safe, so they lose the `Unsafe` prefix. Note that using these names now may turn out to be unfortunate when and if we get non-copyable types, which would potentially provide a better implementation.

Third, we introduced an `Anchored` protocol along with an `Anchoring` property wrapper:

```swift
protocol Anchored {
  associatedtype Value
  static var defaultInitialValue: Value { get }
  init(at address: UnsafeMutablePointer<Value>, in anchor: AnyObject)
}
```
```swift
@propertyWrapper
struct Anchoring<Thing: Anchored> {
  var _storage: Thing.Value

  init() {
    self._storage = Thing.defaultInitialValue
  }

  init(_ value: Thing.Value) {
    self._storage = value
  }

  static subscript<Anchor: AnyObject>(
    _enclosingInstance anchor: Anchor,
    wrapped wrappedKeyPath: ReferenceWritableKeyPath<Anchor, Thing>,
    storage storageKeyPath: ReferenceWritableKeyPath<Anchor, Self>
  ) -> Thing {
    _read {
      let keyPath = storageKeyPath.appending(path: \._storage)
      let p = keyPath._directAddress(in: anchor)!
      yield Thing(at: p, in: anchor)
    }
  }
}
```
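The `_enclosingInstance` static subscript used above is an underscored, unofficial compiler feature, but it does work in shipping compilers. Here is a minimal runnable sketch of the same pattern with a hypothetical `Boxed` wrapper that simply stores its value (instead of forming pointers into the enclosing object):

```swift
// Hypothetical property wrapper demonstrating the enclosing-instance
// subscript pattern: accesses through a class instance are routed to the
// static subscript rather than to `wrappedValue`.
@propertyWrapper
struct Boxed<Value> {
    private var value: Value
    init(wrappedValue: Value) { self.value = wrappedValue }

    // Used for non-class-enclosed accesses.
    var wrappedValue: Value {
        get { value }
        set { value = newValue }
    }

    // Used when the wrapped property lives inside a class instance.
    static subscript<Enclosing: AnyObject>(
        _enclosingInstance instance: Enclosing,
        wrapped wrappedKeyPath: ReferenceWritableKeyPath<Enclosing, Value>,
        storage storageKeyPath: ReferenceWritableKeyPath<Enclosing, Self>
    ) -> Value {
        get { instance[keyPath: storageKeyPath].value }
        set { instance[keyPath: storageKeyPath].value = newValue }
    }
}

final class Model {
    @Boxed var count: Int = 0
}
```

The real `Anchoring` wrapper additionally relies on `_read` accessors and an underscored `_directAddress(in:)` key-path primitive, which is what makes it need compiler support.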
```swift
struct AtomicInt: Anchored {...}
struct AtomicUInt: Anchored {...}
struct AtomicUnsafeMutablePointer<Pointee>: Anchored {...}
struct AtomicUnmanaged<Instance: AnyObject>: Anchored {...}

struct UnfairLock: Anchored {
  struct Value { var lock: os_unfair_lock_s }

  let _anchor: AnyObject
  let _ptr: UnsafeMutablePointer<Value>

  static var defaultInitialValue: Value { Value(lock: .init()) }

  init(at address: UnsafeMutablePointer<Value>, in anchor: AnyObject) {
    _anchor = anchor
    _ptr = address
  }

  func lock() { os_unfair_lock_lock(_ptr) }
  func unlock() { os_unfair_lock_unlock(_ptr) }
}
```

The point of all this is that use sites become a lot more pleasant to read and write:

```swift
class Foo {
  @Anchoring(42) var counter: AtomicInt
  @Anchoring var lock: UnfairLock
}

func doSomething(foo: Foo) {
  foo.counter.wrappingIncrement()
  foo.lock.lock()
  defer { foo.lock.unlock() }
  print("I'm holding the lock right now")
}
```

Unfortunately, this currently has terrible performance, and it needs some compiler work to make it practical.
Finally, I added some convenience operations to increment/decrement atomic integers:

```swift
extension Atomic[U]Int {
  func wrappingIncrement(
    by delta: [U]Int = 1,
    ordering: AtomicUpdateOrdering = .acquiringAndReleasing
  ) {...}

  func wrappingDecrement(
    by delta: [U]Int = 1,
    ordering: AtomicUpdateOrdering = .acquiringAndReleasing
  ) {...}
}
```

The intent here is that the most common operations should read well with no arguments at all.
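The "wrapping" in these names refers to overflow behavior: like Swift's `&+`/`&-` operators, the operations wrap around on overflow rather than trapping. A toy, non-atomic sketch (the type and its storage are illustrative only; the real operations are atomic):

```swift
// Toy model of wrappingIncrement(by:)/wrappingDecrement(by:) semantics.
struct ToyAtomicInt {
    var value: Int
    // Overflow wraps (like `&+`) instead of trapping (like `+`).
    mutating func wrappingIncrement(by delta: Int = 1) { value &+= delta }
    mutating func wrappingDecrement(by delta: Int = 1) { value &-= delta }
}

var c = ToyAtomicInt(value: Int.max)
c.wrappingIncrement()   // Int.max wraps around to Int.min
```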
Add FieldAccessor, a construct that enables pointer operations on stored properties inside class instances. The idea is to initialize a field accessor struct once for each atomic stored property. Once initialized, the field accessor provides efficient atomic operations. (It simply caches the offset of the stored property within the class, then uses it to derive a direct pointer to the stored property.) This works okay, but it's quite boilerplatey. Also, leaving the raw non-atomic stored properties visible is a bad idea.

```swift
final class Foo {
  private var _v: Int = 0
  private var _w: Int = 0

  private static let _vField = FieldAccessor<Foo>(for: \Foo._v)
  private static let _wField = FieldAccessor<Foo>(for: \Foo._w)
}

func test(_ foo: Foo) {
  Foo._vField.withUnsafeMutablePointer(in: foo) { ptr in
    ...
  }
}
```
- UnsafeAtomicInt
- UnsafeAtomicUInt
- UnsafeAtomicBool
- UnsafeAtomicUnmanaged
- UnsafeAtomicUnsafeMutablePointer
- UnsafeAtomicInitializableReference
- Use UInt for Word-based atomics
- Clarify atomic add operation by calling it “wrapping add”
- Remove “atomic” prefix from members of UnsafeAtomic* types
- Re-word AtomicMemoryOrdering docs
Convert the AtomicMemoryOrdering enum to a frozen struct with transparent static properties for the old cases. This enables DCE to kick in during debug builds, allowing these switch statements to compile down to the specific case.
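The "frozen struct with transparent static properties" pattern can be sketched as follows; `ToyMemoryOrdering` is a hypothetical stand-in for the stdlib type, with fewer cases and none of the resilience attributes:

```swift
// A struct that behaves like a non-frozen enum: new "cases" can be added
// later, but the raw representation of existing cases never changes, so
// switches over known orderings can be folded even in debug builds.
struct ToyMemoryOrdering: Equatable, CustomStringConvertible {
    private let _rawValue: Int
    private init(_ raw: Int) { _rawValue = raw }

    // Each "case" is a static property over a constant raw value.
    static var relaxed: Self { Self(0) }
    static var acquiring: Self { Self(1) }
    static var releasing: Self { Self(2) }

    var description: String {
        switch _rawValue {
        case 0: return "relaxed"
        case 1: return "acquiring"
        case 2: return "releasing"
        default: return "unknown"
        }
    }
}
```

In the real type the static properties would be `@_transparent` and the struct `@frozen`, which is what makes dead-code elimination work without optimization.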
It is attractive, but it is too expensive for practical use, and may not be supported on all architectures. This leaves us with the following levels:

- relaxed
- acquiring (default for loads)
- releasing (default for stores)
- acquiring and releasing (default for read-modify-write operations)
It’s not layout-compatible with an actual Bool type, so this formulation is obviously broken. (D’oh.) We could switch to single byte atomics, but it doesn’t seem worth the complexity. (Also, it’s unclear to me if that would have unusual alignment expectations or if it would add interference issues.)
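For reference, the layout mismatch is easy to check: `Bool` occupies a single byte, while the word-based atomics here operate on `Int`-sized values.

```swift
// Bool is one byte; Int is word-sized (4 or 8 bytes depending on platform),
// so an Int-backed atomic Bool cannot share Bool's layout.
let boolSize = MemoryLayout<Bool>.size
let intSize = MemoryLayout<Int>.size
```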
…s rather than using an enum

Instead of

```swift
let counter: UnsafeAtomicUInt = …
counter.increment(by: 23, ordering: .releasing)
```

we now have this:

```swift
counter.releasing.increment(by: 23)
```

This gets rid of switch statements, and reads pretty well, but it considerably increases API surface.
… to safe atomics with Anchored
Let’s try keeping things explicit. Requiring an explicit ordering may either be an overall readability improvement, or it may be too much noise. We’ll see with practice.
- WrappingAdd(_:) → WrappingIncrement(by:)
- WrappingSubtract(_:) → WrappingDecrement(by:)
- Always name the ordering on UnsafeMutableRawPointer APIs
@MadCoder This is terrific feedback, thanks. Reverting the removal of …

I'll try replacing

```swift
func compareAndStore(expected: Int, desired: Int, ordering: AtomicUpdateOrdering) -> Bool
```

with

```swift
func compareAndExchange(expected: Int, desired: Int, ordering: AtomicUpdateOrdering) -> (swapped: Bool, original: Int)
```

(The names aren't great; I expect we'll have ample opportunity to come up with a better naming scheme on the forums.)

Interestingly, I originally went with the inout version specifically because it felt less annoying in toy examples, despite the obvious inconsistency with …

```swift
extension UnsafeAtomicInt {
  // BEFORE
  func myIncrement(ordering: AtomicUpdateOrdering) {
    var expected = load(ordering: .relaxed)
    while !compareExchange(
      expected: &expected,
      desired: expected + 1,
      ordering: ordering) { }
  }

  // AFTER
  func myIncrement(ordering: AtomicUpdateOrdering) {
    var done = false
    var value = load(ordering: .relaxed)
    while !done {
      (done, value) = compareAndExchange(
        expected: value,
        desired: value + 1,
        ordering: ordering)
    }
  }
}
```
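The difference between the two compare-exchange shapes can be modeled single-threaded; `ToyAtomic` below is hypothetical and not actually atomic — it only mirrors the two signatures being compared:

```swift
// Toy, single-threaded model of the two compare-exchange shapes.
final class ToyAtomic {
    private var value: Int
    init(_ initialValue: Int) { value = initialValue }
    func load() -> Int { value }

    // inout shape: on failure, `expected` is updated with the observed value,
    // so a retry loop needs no separate reload.
    func compareExchange(expected: inout Int, desired: Int) -> Bool {
        if value == expected { value = desired; return true }
        expected = value
        return false
    }

    // tuple shape: returns whether the swap happened plus the original value,
    // leaving the caller to thread the observed value through the loop.
    func compareAndExchange(expected: Int, desired: Int) -> (swapped: Bool, original: Int) {
        if value == expected { value = desired; return (true, expected) }
        return (false, value)
    }
}
```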
I'd nitpick the label for the returned …
I'm unsure of the appropriateness of having the 'load' operation on the Unmanaged atomic wrapper. This would lead to people reproducing the pre-Swift-3 weak reference bug where one thread lowers the reference count to zero while another thread is trying to increment it. Anything with a refcount of 1 shouldn't be interacted with from more than one thread at a time, and the way to encourage that with an atomic unmanaged is to restrict the operations to exchanges.
@glessard The reason I don't think this is as big a problem as it first appears is that extracting a strong reference from an `UnsafeAtomicUnmanaged` is inherently a two-step operation:

```swift
let _ref: UnsafeAtomicUnmanaged<Foo> = ...
let ref = _ref.load(ordering: .acquiring).takeUnretainedValue()
```

To me it seems obvious that the two steps aren't going to happen in a single atomic transaction. Now of course, the …

My view is that the initial wave of atomics will be mostly about establishing the basics -- memory orderings and a naming scheme. A subsequent second wave will flesh out the feature by introducing actually useful atomic types (stamped pointers, atomic strong references, etc.) built around double-wide atomics. The reason I think we need a delay here is that I expect the double-wide atomic types won't maintain a direct match between the logical value held by the atomic struct (say, a strong reference) and the in-memory representation (say, some encoding of a …)
(Not to mention that it isn't clear to me exactly what set of double-wide atomic types we would need to add. I also don't know yet if a universal atomic strong reference would scale well enough to be more practical than a regular strong reference + an unfair lock.)
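The two-step extraction under discussion can be spelled out with the real `Unmanaged` API (single-threaded sketch; the atomic wrapper itself is omitted, so the "load" here is just reading a variable):

```swift
// The payload type is hypothetical, just to have something to reference.
final class Payload { let id = 42 }

// Step 0: create an unmanaged reference holding one retain.
let unmanaged = Unmanaged.passRetained(Payload())

// Step 1: read the unmanaged reference (the atomic `load` in the real API).
// Step 2: form a strong reference -- a *separate* retain that step 1 does
// not cover, which is exactly the window the race concern is about.
let strong = unmanaged.takeUnretainedValue()

unmanaged.release()  // balance the initial passRetained; `strong` keeps it alive
```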
# Conflicts:
#	stdlib/public/core/AtomicInt.swift.gyb
These aren’t implemented yet.
Introduce a new UnsafeAtomic&lt;Value&gt; generic struct; use it to define atomic operations on Int8, Int16, Int32, Int64, UInt8, UInt16, UInt32, UInt64 in addition to the existing Int/UInt. Replace UnsafeAtomic[U]Int with UnsafeAtomic&lt;[U]Int&gt;.
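A sketch of what such a generic struct might look like; `ToyUnsafeAtomic` is hypothetical, and its loads/stores are plain pointer accesses standing in for the builtin-backed atomic ones:

```swift
// One generic struct covers every fixed-width integer type, instead of a
// separate hand-written struct per type. Not actually atomic -- illustrative.
struct ToyUnsafeAtomic<Value: FixedWidthInteger> {
    private let _ptr: UnsafeMutablePointer<Value>
    init(at pointer: UnsafeMutablePointer<Value>) { _ptr = pointer }

    func load() -> Value { _ptr.pointee }
    func store(_ newValue: Value) { _ptr.pointee = newValue }
    func wrappingIncrement(by delta: Value = 1) { _ptr.pointee &+= delta }
}

// The caller supplies (and owns) the stable storage, as with the real type.
let storage = UnsafeMutablePointer<UInt8>.allocate(capacity: 1)
storage.initialize(to: 0)
let atomic = ToyUnsafeAtomic(at: storage)
atomic.store(255)
atomic.wrappingIncrement()      // UInt8: 255 &+ 1 wraps to 0
let observed = atomic.load()
storage.deinitialize(count: 1)
storage.deallocate()
```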
Closing -- these experiments have culminated in the release of the Atomics package, at least for now.
This PR explores some possible API design variations for exposing a limited set of atomic operations in the stdlib.

We need a stable address to perform atomic operations, so we won't be able to add the obvious atomic types (e.g. a safe AtomicInt) until we introduce support for move-only types in the language. However, we should still allow people to experiment with atomics by providing a limited set of atomic operations through unsafe pointer types. To make this more practical, we should investigate ways to reliably get at the address of a stored property of an object instance.

- Add FieldAccessor, a key-path based library construct to reliably get pointers to stored properties of a reference type.
- Expose some (underscored) builtin atomic operations on UnsafeMutableRawPointer. (These all assume the pointer's alignment is suitable for such operations, and they all work with Int-sized values.) The memory ordering is baked into the method names, which makes these back-deployable to any version of the stdlib. Each method wraps a single Builtin atomic primitive operation.
- Introduce a new MemoryOrdering struct, representing LLVM's memory ordering levels, and add some convenience methods that take them. The struct works like a non-frozen enum, with the additional restriction that the representation of the cases won't ever change. (This allows better debug performance.) Unfortunately, MemoryOrdering is a new type, so it isn't directly back-deployable; however, code reads a lot better with the levels as arguments, so it makes sense to use it in higher-level APIs, which will come with availability declarations anyway.
- Introduce a level of type safety and a tiny bit of abstraction by adding a handful of memory-unsafe atomic types. These have the same memory management concerns as an UnsafeMutablePointer; in fact, they are simply wrappers around a raw pointer type.

Other variations are possible, too.