
Conversation

@0xllx0 commented Oct 3, 2025

Adds a loom permutation test for mpmc::Queue::enqueue contention across threads.

Related: #583
Co-authored-by: NODA Kai @nodakai
Co-authored-by: Sosthène Guédon @sosthene-nitrokey

Comment on lines 478 to 545
fn issue_583_enqueue_loom() {
    const N: usize = 4;

    loom::model(|| {
        let q0 = loom::sync::Arc::new(Queue::<u8, N>::new());
        for i in 0..N {
            q0.enqueue(i as u8).expect("new enqueue");
        }
        eprintln!("start!");

        let q1 = q0.clone();
        loom::thread::spawn(move || {
            for k in 0..1_000_000 {
                if let Some(v) = q0.dequeue() {
                    q0.enqueue(v)
                        .unwrap_or_else(|v| panic!("{k}: q0 -> q0: {v}, {:?}", to_vec(&*q0)));
                }
            }
        });

        loom::thread::spawn(move || {
            for k in 0..1_000_000 {
                if let Some(v) = q1.dequeue() {
                    q1.enqueue(v)
                        .unwrap_or_else(|v| panic!("{k}: q1 -> q1: {v}, {:?}", to_vec(&*q1)));
                }
            }
        });
    });
}
@0xllx0 (Author) commented Oct 3, 2025

This loom test passes, but it is also not a direct replication of the failing test harnesses. There is currently no loom::thread::scope equivalent of std::thread::scope.

Edit: there was some work to add loom::thread::scope in loom/!312, but the author stopped working on it in May 2024.

@0xllx0 (Author)

Interestingly, adding a test using loom::thread::scope from the linked PR also passes.

@sgued (Contributor) commented Oct 5, 2025

Thanks for working on fixing #583

Please add loom first and change the implementation in a separate PR; this will make it much easier to make sure that the tests are working as intended, and will make review much easier.

Regarding why the loom tests pass: you're misusing loom. You need to add #[cfg(not(loom))] to all the core::sync::atomic* imports and a #[cfg(loom)] that instead imports the atomics from loom. You then need to pass a --cfg loom flag to the compiler when you run the loom tests (the tests themselves should be gated behind #[cfg(loom)] too).

Here loom doesn't see any of the atomic operations, so it assumes all went fine.
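
A minimal sketch of what that cfg-gating could look like (the exact module layout inside heapless is assumed for illustration; only the import pattern is the point):

#[cfg(not(loom))]
use core::sync::atomic::{AtomicUsize, Ordering};
#[cfg(loom)]
use loom::sync::atomic::{AtomicUsize, Ordering};

// The loom tests themselves are gated on the same cfg and run with the flag
// set, e.g.: RUSTFLAGS="--cfg loom" cargo test
#[cfg(loom)]
mod loom_tests {
    // tests built around loom::model(|| { ... }) go here
}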

Also, please use stable loom. Using Box::leak or Arc is fine here.
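
A rough sketch of the stable-loom pattern being suggested, reusing the Queue type from the test above; the leaked &'static reference stands in for the missing loom::thread::scope:

fn leaked_queue_model() {
    loom::model(|| {
        // Leaked once per model iteration; acceptable for a test.
        let q: &'static Queue<u8, 4> = Box::leak(Box::new(Queue::new()));

        let t0 = loom::thread::spawn(move || {
            let _ = q.enqueue(1);
        });
        let t1 = loom::thread::spawn(move || {
            let _ = q.dequeue();
        });

        t0.join().unwrap();
        t1.join().unwrap();
    });
}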

@0xllx0 (Author) commented Oct 6, 2025

> Thanks for working on fixing #583

You're welcome <3

> Please add loom first and change the implementation in a separate PR; this will make it much easier to make sure that the tests are working as intended, and will make review much easier.

Sounds good; I just wanted to add the alternate implementation here first to make the comparison easier.

I'll create a separate PR for the alternate implementation.

I'll address the loom tests below.

> Regarding why the loom tests pass: you're misusing loom. You need to add #[cfg(not(loom))] to all the core::sync::atomic* imports and a #[cfg(loom)] that instead imports the atomics from loom. You then need to pass a --cfg loom flag to the compiler when you run the loom tests (the tests themselves should be gated behind #[cfg(loom)] too).
>
> Here loom doesn't see any of the atomic operations, so it assumes all went fine.
>
> Also, please use stable loom. Using Box::leak or Arc is fine here.

This definitely sounds like it requires a separate PR. I'm new to the loom library, so it's not surprising that I'm using it wrong.

Since the failing conditions can be reached by normal unit tests, I'll probably wait to implement the loom tests until after we decide on the alternate implementation PR.

Edit: after a small experiment using a feature-gate for use loom::sync::atomic in mpmc, it looks like it will require some ugly feature-gates to make the queue constructors non-const for loom builds. It may still end up being worthwhile in order to support permutation tests.
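
A minimal illustration of that gating on a hypothetical Counter type (not the actual heapless Queue), assuming loom's AtomicUsize::new is not const:

#[cfg(not(loom))]
use core::sync::atomic::AtomicUsize;
#[cfg(loom)]
use loom::sync::atomic::AtomicUsize;

pub struct Counter {
    value: AtomicUsize,
}

impl Counter {
    /// `const` constructor for normal builds.
    #[cfg(not(loom))]
    pub const fn new() -> Self {
        Self { value: AtomicUsize::new(0) }
    }

    /// Non-`const` constructor for `--cfg loom` builds.
    #[cfg(loom)]
    pub fn new() -> Self {
        Self { value: AtomicUsize::new(0) }
    }
}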

The loom changes for the current mpmc::Queue are not so bad, but running the tests gives this error:

thread 'mpmc::tests::issue_583_enqueue_loom' panicked at /home/user/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/loom-0.7.2/src/rt/path.rs:247:13:
Model exceeded maximum number of branches. This is often caused by an algorithm requiring the processor to make progress, e.g. spin locks.

This probably means the loom tests won't tell us anything, since both implementations rely on versions of spin-locks. I'm not going to go through the refactoring effort of making the loom primitives work with the crossbeam implementation if it's throwing these kinds of errors with the current mpmc implementation. The refactors for the crossbeam implementation are much more involved.
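
For reference, a hedged sketch of how the exploration bounds could be adjusted, assuming the max_branches and preemption_bound fields exposed by loom 0.7's Builder:

use loom::model::Builder;

fn bounded_loom_model() {
    let mut builder = Builder::new();
    // Raise the branch limit that triggered the panic above (the default is much lower).
    builder.max_branches = 100_000;
    // Bound the number of preemptions per execution to keep the state space tractable.
    builder.preemption_bound = Some(3);
    builder.check(|| {
        // loom test body goes here, e.g. the enqueue/dequeue contention test
    });
}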

@zeenix (Contributor) commented Oct 6, 2025

> Also, please use stable loom. Using Box::leak or Arc is fine here.

> This definitely sounds like it requires a separate PR.

Why? If it's just changes to unmerged changes, then it can (and should) be the same PR. Otherwise, it only creates unnecessary noise. Just modify your commits (if you have to) and force-push to your branch.

@0xllx0 (Author) commented Oct 6, 2025

> Also, please use stable loom. Using Box::leak or Arc is fine here.

> This definitely sounds like it requires a separate PR.

> Why? If it's just changes to unmerged changes, then it can (and should) be the same PR. Otherwise, it only creates unnecessary noise. Just modify your commits (if you have to) and force-push to your branch.

What I meant was that the loom stuff should be separate from the fix PR, which I've opened as #619. The loom tests fail for the existing implementation, and likely for the port of the crossbeam implementation. They fail with an error about spin-locks:

thread 'mpmc::tests::issue_583_enqueue_loom' panicked at /home/user/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/loom-0.7.2/src/rt/path.rs:247:13:
Model exceeded maximum number of branches. This is often caused by an algorithm requiring the processor to make progress, e.g. spin locks.

Adds a `loom` permutation test for `mpmc::Queue::enqueue/dequeue`
contention across threads.

Related: rust-embedded#583
Co-authored-by: NODA Kai <https://github.com/nodakai>
Co-authored-by: Sosthène Guédon <https://github.com/sosthene-nitrokey>