Expose core::intrinsics::volatile_copy_nonoverlapping_memory as core::ptr::volatile_copy_nonoverlapping #58041
Comments
jonas-schievink added the T-libs and C-feature-request labels on Feb 1, 2019
Small libs changes often don't need RFCs, so this is probably fine. Can you elaborate on why nonoverlapping is useful to you in volatile? Much of its advantage over regular copy is in optimizations, many of which are turned off for volatile...
I have a library that is used to safely operate on memory shared with another, untrusted process. We have to assume that the foreign process may be arbitrarily modifying the memory while we're operating on it, making normal memory accesses unsound. Volatile allows us to ensure that the compiler doesn't make any unsound optimizations with respect to reads from or writes to that memory. More details here.
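A minimal sketch of the kind of access this implies, using core::ptr::read_volatile; the pointer and function name here are hypothetical, not the library's actual API:

```rust
use core::ptr;

/// Read a u32 from memory that another process may be modifying concurrently.
/// `read_volatile` forces the compiler to actually perform this load and to
/// never assume the value is stable across repeated reads.
unsafe fn read_shared_u32(shared: *const u32) -> u32 {
    ptr::read_volatile(shared)
}
```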
I'm not a fan of
Oh that's a very good point. However, doing volatile ops in a loop can be TERRIBLE for performance because the compiler isn't allowed to coalesce them. Do you know if there are variable-length equivalents of
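For reference, the per-element workaround being discussed looks roughly like this (a sketch with an assumed helper name, operating on bytes); every read_volatile/write_volatile pair is a distinct volatile access that the compiler may not merge into a wider copy:

```rust
use core::ptr;

/// Copy `len` bytes from `src` to `dst` one volatile access at a time.
///
/// Safety: both pointers must be valid for `len` bytes and the two
/// regions must not overlap.
unsafe fn volatile_copy_bytes(dst: *mut u8, src: *const u8, len: usize) {
    for i in 0..len {
        let byte = ptr::read_volatile(src.add(i));
        ptr::write_volatile(dst.add(i), byte);
    }
}
```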
@joshlf Sorry, I was unclear: I meant why wrap the nonoverlapping variant in particular. Also, #58041 (comment) reminds me of a URLO conversation about sharing memory.
Centril added the T-lang label on Feb 2, 2019
All of our uses are nonoverlapping. I'm not sure whether it actually provides any optimization opportunities, but I'm sure it can't hurt :)

Our use case is different: we're sharing memory with a remote process which is assumed to be malicious, and so we must assume that the memory is changing out from under us. Thus, accesses have to be either volatile or atomic. If they're not, it's equivalent to a data race, which is UB.
If I understand your situation correctly, atomic would not help you here. You'd need volatile.
I don't believe that's right; so long as you use atomic operations, the compiler guarantees that concurrent modifications by a different thread will not cause UB. I can't imagine how that other thread being in a different process would change anything. |
Atomics can't be reordered, but as I understand it they can be elided according to LLVM's rules.
Atomics being reordered is fine; this is about a security rather than a correctness property (if the remote process is misbehaving, correctness is already out the window). In fact, even reordering would be fine for us. What we really care about is that the compiler doesn't make any invalid assumptions about the stability of memory when performing optimizations. E.g., consider the following simple program:

```rust
if *len < MAX_LEN {
    // do stuff
}

if *len < MAX_LEN {
    // do more stuff
}
```

Assuming that len refers to the shared memory, the compiler would normally be allowed to perform a single load and reuse the value for both checks; that assumption about the stability of *len is exactly what's invalid when another process may modify the memory between the two reads.
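A hedged sketch of the volatile counterpart (assuming len is a raw pointer into the shared mapping and MAX_LEN is some bound; both names come from the example above): each read_volatile is a fresh load the compiler must actually perform, so the two checks can't be folded into one or reuse a stale value.

```rust
use core::ptr;

const MAX_LEN: usize = 4096; // illustrative bound, not from the original example

unsafe fn process(len: *const usize) {
    if ptr::read_volatile(len) < MAX_LEN {
        // do stuff
    }

    // The second volatile read forces another load of `len`; the compiler
    // cannot assume it still holds the value observed above.
    if ptr::read_volatile(len) < MAX_LEN {
        // do more stuff
    }
}
```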
Yes. Exactly. What you are describing is what volatile can handle and what atomic cannot. Please read the above LLVM example again. Repeated atomic operations can be elided by the compiler if it chooses to. That said, unsynchronized volatile modification is also UB (a data race), so basically you're trying to tackle a patch of thorns that's also on fire.
The sorts of elisions that you cite (such as eliding back-to-back sequential stores) affect neither security nor correctness. They don't affect security since all we're trying to avoid is UB, and LLVM says it's not UB to use atomic operations on memory which is being concurrently modified. They don't affect correctness because, in the case of eliding back-to-back sequential stores, a) the current thread can't observe the value of the first store and, b) other threads cannot rely on observing the value of the first store before it's overwritten by the second, and so optimizing to only use the second store does not violate any guarantees.
Ah, I didn't know that. I'll have to reconsider.
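To make the store-elision point above concrete, a sketch (assuming relaxed orderings; not taken from the thread): the compiler may legally collapse the two stores into one, because no thread is guaranteed to observe the intermediate value, and doing so introduces no UB.

```rust
use core::sync::atomic::{AtomicUsize, Ordering};

fn back_to_back_stores(flag: &AtomicUsize) {
    // The first store may be elided: nothing is guaranteed to observe the
    // value 1 before it is overwritten by 2.
    flag.store(1, Ordering::Relaxed);
    flag.store(2, Ordering::Relaxed);
}
```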
joshlf commented Feb 1, 2019
I propose that we expose core::intrinsics::volatile_copy_nonoverlapping_memory as the stable core::ptr::volatile_copy_nonoverlapping. It might make sense to stabilize other intrinsics while we're at it; I only mention this one in particular because I have a use case for it.

Is a discussion here sufficient, or should I submit an RFC?
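A sketch of what the proposed wrapper might look like, assuming it simply forwards to the intrinsic and mirrors the argument order of core::ptr::copy_nonoverlapping (nightly-only as written, since core_intrinsics is unstable; this is not a committed API):

```rust
#![feature(core_intrinsics)]

use core::intrinsics;

/// Hypothetical stable surface: a volatile, non-overlapping memcpy.
///
/// Safety: `src` and `dst` must each be valid for `count` elements of `T`,
/// properly aligned, and the two regions must not overlap.
pub unsafe fn volatile_copy_nonoverlapping<T>(src: *const T, dst: *mut T, count: usize) {
    // The intrinsic takes (dst, src, count); the public argument order here
    // follows core::ptr::copy_nonoverlapping (src, dst, count).
    intrinsics::volatile_copy_nonoverlapping_memory(dst, src, count)
}
```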