
Document the current recommendation when Rust is used to communicate with a different memory ordering model #476

fbq opened this issue Nov 20, 2023 · 13 comments


@fbq

fbq commented Nov 20, 2023

Follow-up on the discussion. Since:

  • At least in the short term, Rust will continue using the C++ memory ordering model (i.e. no new memory model).
  • Using two different memory models for communication via shared memory causes issues for verification, reasoning, etc.

it makes sense to use the existing memory model instead of Rust's own when Rust has to communicate with a different memory ordering model. In other words, when using Rust for Linux kernel development, for example, following the suggestion below makes a lot of sense in the short term.

So, just literally translate whatever C does. C doesn't guarantee anything about address dependencies or whatever. Rust doesn't, either. It's not getting worse. It's not getting better, either, but this is an area where doing better is really hard and where it can only be done with a global change to the Linux kernel to adopt a new memory model that is then used by both the C and Rust code in the kernel.

(original message from Ralf)

Of course, this means that a project using a different model may have UB in its Rust code (due to data races as defined by the C++ memory model), but in practice, with careful use of such races, this UB is unlikely to cause any real issue.

Having this recommendation documented somewhere will help resolve issues like #348, and unblock experimental projects like Rust-for-Linux on the question of which synchronization primitives to use.

@RalfJung
Member

Note that our discussion was specific to the LKMM, which is by far the best-explored alternative memory model.

In general, sharing memory with programs that use a different memory model is simply not possible.

@RalfJung
Member

unblock experimental projects like Rust-for-Linux on the question of which synchronization primitives to use.

I don't see what these experiments are currently blocked on?

@digama0

digama0 commented Nov 20, 2023

I am quite dubious regarding "Just do the same things as you would in C and your UBs won't be miscompiled in Rust either", because Rust is a different language and it pokes at LLVM in a rather different way from C, and we have already found plenty of miscompiles in LLVM even in safe code because Rust code uses attributes and control patterns which are rare or impossible in IR lowered from C.

For any formal UB, you have to just run the compiler and find out, or extract some lower-level promises from the backend about what kinds of optimizations won't be happening, such that you can reason that the UB is actually defined at another level. Even then, it's pretty difficult to make this work long-term with something like the LKMM unless you control the compiler enough to turn off all optimizations which violate this memory model.

Personally, given a choice between using rust-approved mechanisms like standard atomics and braving the UB caused by cross-lang memory model mismatch, or using LKMM-inspired Rust UB in order to stay consistent with the C code, I would go for the atomics, because LTO is not nearly as comprehensive as actual compilation, so your chances of running into violations seem lower. But it's UB either way, so you are kind of on your own - fuzz it thoroughly and pray.

@fbq
Author

fbq commented Nov 20, 2023

unblock experimental projects like Rust-for-Linux on the question of which synchronization primitives to use.

I don't see what these experiments are currently blocked on?

There is Rust code communicating with the C side, e.g. https://lore.kernel.org/rust-for-linux/ZTHPOfy4dhj0x5ch@boqun-archlinux/; right now we have two (or three) options:

  • Use a Rust atomic: but this means two memory models (the C++ memory model and the LKMM) talking to each other.
  • Use read_volatile and treat it as READ_ONCE() in C: but this is UB due to data races (a sketch of this option follows below).
  • Use inline asm: but we would need to do it for all the architectures that Linux supports, which is a lot of (honestly, unnecessary) work.

Without a clear recommendation, I don't want to proceed in a way that's either considered "wrong" or a lot of work.
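
For illustration, here is a minimal sketch of what option 2 could look like (the `read_once`/`write_once` names are hypothetical, not an existing API); note that under the current Rust/C++ rules a racing volatile access is still formally a data race:

```rust
use core::ptr;

/// Hypothetical READ_ONCE-style helper: a volatile read the compiler may not
/// elide, fuse, or invent. If another thread writes concurrently, this is
/// still formally a data race (UB) under the Rust/C++ model; the kernel
/// would be relying on it behaving like C's READ_ONCE() in practice.
///
/// # Safety
/// `ptr` must be valid for reads and properly aligned.
pub unsafe fn read_once<T: Copy>(ptr: *const T) -> T {
    unsafe { ptr::read_volatile(ptr) }
}

/// Hypothetical WRITE_ONCE-style helper, with the same caveats.
///
/// # Safety
/// `ptr` must be valid for writes and properly aligned.
pub unsafe fn write_once<T: Copy>(ptr: *mut T, val: T) {
    unsafe { ptr::write_volatile(ptr, val) }
}
```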

@fbq
Author

fbq commented Nov 20, 2023

Personally, given a choice between using rust-approved mechanisms like standard atomics and braving the UB caused by cross-lang memory model mismatch, or using LKMM-inspired Rust UB in order to stay consistent with the C code, I would go for the atomics...

Case in point: it seems Mario would like option 1 ;-)

I don't have any hard preference; all I want is a consensus so we can move on (if possible, with less work ;-)).

@RalfJung
Member

RalfJung commented Nov 20, 2023

Personally, given a choice between using rust-approved mechanisms like standard atomics and braving the UB caused by cross-lang memory model mismatch, or using LKMM-inspired Rust UB in order to stay consistent with the C code, I would go for the atomics, because LTO is not nearly as comprehensive as actual compilation, so your chances of running into violations seem lower. But it's UB either way, so you are kind of on your own - fuzz it thoroughly and pray.

That's a bad idea IMO. Atomics have an ABI; it is completely possible to have multiple different implementations of atomics that each work fine internally but stop working when combined with each other.

When linking with other code, it is important to use the same ABI as that code.

So in this case I think it is important to use the same core operations as the C code, to make sure both sides are compatible when it comes to atomics.

Having two different memory models linked together is way harder to evaluate properly than ensuring we generate the same (or sufficiently similar) LLVM IR as clang for these operations.

I am quite dubious regarding "Just do the same things as you would in C and your UBs won't be miscompiled in Rust either", because Rust is a different language and it pokes at LLVM in a rather different way from C, and we have already found plenty of miscompiles in LLVM even in safe code because Rust code uses attributes and control patterns which are rare or impossible in IR lowered from C.

I don't see how this argument applies here. We are talking about volatile and atomic operations specifically, not some more general idea of "arbitrary C code and its 'equivalent' Rust code".

@RalfJung
Member

RalfJung commented Nov 20, 2023

In particular, LLVM will optimize its atomics, much more than it optimizes volatile. That makes it in my eyes extremely risky to use LLVM atomics to implement the LKMM -- these optimizations have never been checked for correctness with the LKMM.

Using volatile (plus empty asm blocks as fences), like in C, is a much less risky strategy.
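
To make that concrete, a kernel-style compiler-only barrier (C's `barrier()`, i.e. `asm volatile("" ::: "memory")`) could be sketched in Rust roughly as follows; this is an assumption-laden sketch, not an endorsed implementation:

```rust
use core::arch::asm;

/// Hypothetical analogue of the kernel's `barrier()` macro.
#[inline(always)]
pub fn compiler_barrier() {
    // An empty asm block without the `nomem`/`readonly` options is assumed
    // by the compiler to potentially read and write memory, so memory
    // accesses may not be reordered across it at compile time. It emits no
    // instructions, so it provides no hardware-level ordering.
    unsafe { asm!("", options(nostack, preserves_flags)) };
}
```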

@digama0

digama0 commented Nov 20, 2023

I think this discussion would be improved by having a specific proposal for what patterns are being considered and what their suggested rendering is in Rust and LLVM IR, so we can see what the compiler actually does in the case(s) of interest. It is very difficult to make blanket statements when we're talking about things that we agree are formally UB, meaning that the exact details of the situation (compiler + code) matter a lot.

@chorman0773
Contributor

One thing that might also be considered: Java's preview of java.lang.foreign allows accessing native memory directly from Java (among many other things that weren't previously supported by JNI), and using https://docs.oracle.com/en/java/javase/20/docs/api/java.base/java/lang/foreign/MemoryLayout.html#varHandle(java.lang.foreign.MemoryLayout.PathElement...) along with methods on the VarHandle it returns, it's possible to do atomic operations at various levels, which are subtly different from C/C++ model operations. We should definitely have a way to model foreign memory models in FFI calls.

@fbq
Author

fbq commented Nov 23, 2023

I think this discussion would be improved by having a specific proposal for what patterns are being considered and what their suggested rendering is in Rust and LLVM IR, so we can see what the compiler actually does in the case(s) of interest.

I don't disagree; actually my original plan was that once we have a rough direction, I would start working on "litmus tests" to demonstrate the patterns that the Linux kernel may use and expects to work. We have a lot of these in C (here and here). The syntax of a litmus test and its usage are explained here.

And I'm currently working on a few Rust litmus tests at my lkmm-for-rust repo; we can always start the discussion from these ;-) Of course, help with translating litmus tests from C to Rust is always welcome.
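
For readers unfamiliar with the format, a message-passing (MP) litmus test expressed with Rust's own atomics looks roughly like the sketch below; the LKMM version would use WRITE_ONCE/smp_store_release and READ_ONCE/smp_load_acquire instead. This is only meant to show the shape of the tests being translated:

```rust
use std::sync::atomic::{AtomicI32, Ordering};
use std::thread;

static DATA: AtomicI32 = AtomicI32::new(0);
static FLAG: AtomicI32 = AtomicI32::new(0);

fn main() {
    // P0: write the data, then set the flag.
    let p0 = thread::spawn(|| {
        DATA.store(1, Ordering::Relaxed);
        FLAG.store(1, Ordering::Release);
    });
    // P1: read the flag, then the data.
    let p1 = thread::spawn(|| {
        let f = FLAG.load(Ordering::Acquire);
        let d = DATA.load(Ordering::Relaxed);
        (f, d)
    });
    p0.join().unwrap();
    let (f, d) = p1.join().unwrap();
    // The question each litmus test asks is which outcomes the model allows;
    // here `f == 1 && d == 0` is forbidden by both the C++ model and the LKMM
    // (given release/acquire or the corresponding LKMM primitives).
    assert!(!(f == 1 && d == 0), "forbidden outcome observed");
}
```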

However, before we dive into that, let's take one step back. First, instead of making statements about "what to do", can we at least first make statements about "what not to do"? As I mentioned in the description, what I got from the discussion in the zulip thread, and previous discussions is:

  • At least in the short term, Rust will continue using the C++ memory ordering model (i.e. no new memory model).
  • Using two different memory models for communication via shared memory causes issues for verification, reasoning, etc. In other words, it is not recommended.

If this is the position of the whole Rust team right now, maybe we can make it more explicit? That will at least prevent people from going in the wrong direction, and this is mostly the purpose of this issue. Of course, I could have totally misunderstood people; feel free to improve the statements about "what not to do" when the Rust memory model communicates with another memory model.

Now, as for "a specific proposal for what patterns are being considered", that sounds to me exactly like a Linux kernel memory model in Rust, because previous experience tells us a finite set of patterns doesn't work: in Paul's litmus test repo, there are ~1000 C litmus tests, and each describes a pattern that the Linux kernel might use. Unless you want a proposal containing 1000 patterns ;-) So we need a tool that can tell both developers and compiler writers the expected outcomes of a piece of code, and that's a memory ordering model (maybe not a formal one, though).

It's something we (Rust-for-Linux developers) eventually want, and I'm working on it, but it is still a memory model, so it will take some time. In the meantime, we can always discuss a particular litmus test if that works for everyone.

It is very difficult to make blanket statements when we're talking about things that we agree are formally UB, meaning that the exact details of the situation (compiler + code) matter a lot.

@RalfJung
Member

RalfJung commented Nov 24, 2023

Using two different memory models for communication via shared memory causes issues for verification, reasoning, etc. In other words, it is not recommended.

It causes issues of correctness as well. I certainly cannot promise that the way LLVM compiles its atomics will interact correctly with the way the LKMM implements its atomics. Implementing atomics often involves some choices (which fences to use, whether to put fences before or after the memory access), and those choices need to be made consistently or the program will not behave correctly. We don't make any guarantees about which choices Rust makes there, just that they are internally consistent. If LLVM puts the fence before the operation and the LKMM puts it after, then 💥

That's why it is important not to mix different implementations of memory models on the same variable, let alone different memory models.

Now, as for "a specific proposal for what patterns are being considered", that sounds to me exactly like a Linux kernel memory model in Rust, because previous experience tells us a finite set of patterns doesn't work: in Paul's litmus test repo, there are ~1000 C litmus tests, and each describes a pattern that the Linux kernel might use. Unless you want a proposal containing 1000 patterns ;-) So we need a tool that can tell both developers and compiler writers the expected outcomes of a piece of code, and that's a memory ordering model (maybe not a formal one, though).

Doesn't the kernel have a few well-defined macros, READ_ONCE and things like that? I think what would be interesting to see is how you'd want to implement all those macros, which I hope are fewer than 1000. ;)

@fbq
Author

fbq commented Nov 24, 2023

Using two different memory models for communication via shared memory causes issues for verification, reasoning, etc. In other words, it is not recommended.

It causes issues of correctness as well. I certainly cannot promise that the way LLVM compiles its atomics will interact correctly with the way the LKMM implements its atomics. Implementing atomics often involves some choices (which fences to use, whether to put fences before or after the memory access), and those choices need to be made consistently or the program will not behave correctly. We don't make any guarantees about which choices Rust makes there, just that they are internally consistent. If LLVM puts the fence before the operation and the LKMM puts it after, then 💥

That's why it is important not to mix different implementations of memory models on the same variable, let alone different memory models.

Totally agreed. In Rust for the Linux kernel, very likely, we are going to:

  • implement atomic read/write via read_volatile and write_volatile.
  • implement barriers and atomic read-modify-write via asm! or FFI, so that they have ABI-level consistency with the primitives from C (a rough sketch follows below).
  • core::sync::atomic is still encouraged for Rust <-> Rust synchronization.
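
A rough sketch of the second bullet, where the helper name `rust_helper_atomic_fetch_add` and its existence on the C side are assumptions for illustration rather than an actual kernel API:

```rust
use core::ffi::c_int;

extern "C" {
    // Hypothetical C helper wrapping the kernel's atomic_fetch_add(), so the
    // ordering semantics come from the LKMM implementation on the C side and
    // both languages agree at the ABI level.
    fn rust_helper_atomic_fetch_add(v: *mut c_int, i: c_int) -> c_int;
}

/// Hypothetical LKMM-style fetch-add wrapper.
///
/// # Safety
/// `v` must point to a valid, properly aligned counter shared with the C side.
pub unsafe fn atomic_fetch_add(v: *mut c_int, i: c_int) -> c_int {
    unsafe { rust_helper_atomic_fetch_add(v, i) }
}
```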

We will have a few problems (or, more accurately, questions we need to answer ourselves):

  • dependency ordering: I'm confident that we can work with t-opsem and the rest of the Rust community to make simple cases just work(TM) ;-) I can certainly provide the litmus translations related to this, so that we have something concrete to discuss (see also the sketch after this list): example
  • Rust-side barriers and atomic RMW implementation: it's more an engineering problem of how to maintain synchronized versions of the Rust and C sides.
  • Ordering between core::sync::atomic and Rust LKMM primitives: I don't have a case yet to figure out whether it's going to be a problem or not, but I guess a simple way would be to treat each other's operations as normal memory accesses.
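
To make the dependency-ordering bullet concrete, here is a simplified pointer-publication example; the reader side uses Acquire so the Rust code is well-defined, and the comments mark where LKMM-style code would instead rely on the address dependency that Rust currently does not guarantee. This is an illustration of the question, not a proposed implementation:

```rust
use std::sync::atomic::{AtomicPtr, Ordering};
use std::{ptr, thread};

static SHARED: AtomicPtr<i32> = AtomicPtr::new(ptr::null_mut());

fn main() {
    let writer = thread::spawn(|| {
        let data = Box::into_raw(Box::new(42)); // initialize the payload
        SHARED.store(data, Ordering::Release);  // publish the pointer
    });
    let reader = thread::spawn(|| {
        // LKMM-style code would use a plain READ_ONCE here and rely on the
        // address dependency from the loaded pointer to the dereference; in
        // today's Rust/C++ model the portable choice is an Acquire load.
        let p = SHARED.load(Ordering::Acquire);
        if !p.is_null() {
            // Dependent read: well-defined here only because of the
            // release/acquire pairing above.
            assert_eq!(unsafe { *p }, 42);
        }
    });
    writer.join().unwrap();
    reader.join().unwrap();
    // (The Box is intentionally leaked; this is just a litmus-style sketch.)
}
```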

Now, as for "a specific proposal for what patterns are being considered", that sounds to me exactly like a Linux kernel memory model in Rust, because previous experience tells us a finite set of patterns doesn't work: in Paul's litmus test repo, there are ~1000 C litmus tests, and each describes a pattern that the Linux kernel might use. Unless you want a proposal containing 1000 patterns ;-) So we need a tool that can tell both developers and compiler writers the expected outcomes of a piece of code, and that's a memory ordering model (maybe not a formal one, though).

Doesn't the kernel have a few well-defined macros, READ_ONCE and things like that? I think what would be interesting to see is how you'd want to implement all those macros, which I hope are fewer than 1000. ;)

Yeah, I have an RFC patch implementing READ_ONCE and WRITE_ONCE. Note that I posted it before our Zulip conversation, so I wasn't clear about whether I should use read_volatile or atomics.

And yes, there are fewer than 1000 of them ;-) They fit in a C/C++ standards paper

@fbq
Author

fbq commented Feb 20, 2024

Totally agreed. In Rust for the Linux kernel, very likely, we are going to:

  • implement atomic read/write via read_volatile and write_volatile.
  • implement barriers and atomic read-modify-write via asm! or FFI, so that they have ABI-level consistency with the primitives from C.
  • core::sync::atomic is still encouraged for Rust <-> Rust synchronization.

Based on a recent discussion, the last point is not going to be the plan; in other words, all Rust code in Linux needs to use LKMM atomics right now, until we have more data points to evaluate other options.

(Of course, I should also open an issue in the Rust-for-Linux repo about the memory model policy, where it's more appropriate, but I'm putting the update here for the record.)
