
core::pipes::try_recv_ missing an acquire barrier #7021

Closed
talchas opened this issue Jun 9, 2013 · 6 comments
Labels
A-concurrency Area: Concurrency related issues.

Comments


talchas commented Jun 9, 2013

```rust
fn try_recv_<T:Owned>(p: &mut Packet<T>) -> Option<T> {
    // optimistic path
    match p.header.state {
      Full => {
        let payload = replace(&mut p.payload, None);
        // ...
```

The read of p.header.state needs to be an acquire load (probably via atomic_load_acq) to ensure it is ordered before the read of p.payload, matching the release performed by swap_state_rel in send.
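
For illustration, here is roughly what the intended release/acquire pairing looks like, written against today's std::sync::atomic API rather than the 2013 intrinsics. The `Packet` below, its state constant, and the function bodies are simplified stand-ins that mirror the original single-owner signatures; this is a sketch of the ordering argument, not the actual pipes code:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Simplified stand-ins for the pipes types; the real code encodes more states.
const FULL: usize = 1;

struct Packet<T> {
    state: AtomicUsize,  // plays the role of p.header.state
    payload: Option<T>,  // plays the role of p.payload
}

// Sender: write the payload, then publish it with a release store
// (the role played by swap_state_rel in send).
fn send<T>(p: &mut Packet<T>, value: T) {
    p.payload = Some(value);
    p.state.store(FULL, Ordering::Release);
}

// Receiver: the state read must be an acquire load so that the payload
// read below cannot be reordered ahead of it.
fn try_recv<T>(p: &mut Packet<T>) -> Option<T> {
    if p.state.load(Ordering::Acquire) == FULL {
        p.payload.take()
    } else {
        None
    }
}
```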


bblum commented Jun 11, 2013

This is relevant on architectures with out-of-order memory semantics (ARM, not x86). @eholk and I designed this fastpath before the ARM port came around.

Nominating for production-ready milestone.


bblum commented Jun 11, 2013

@talchas points out that it might also be broken on x86 if LLVM decides to reorder the instruction. So a proper fix (i.e. one that maintains the fastpath performance) would have to use custom LLVM intrinsics for the enum values (analogous to unstable::atomics::AtomicOption, I guess?).

More clearly stated, the goal is to get LLVM to (a) on x86, emit just a "mov" but refuse to reorder it, and (b) on ARM, emit an acquire barrier after the move.
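
In today's terms, an Ordering::Acquire load gives exactly that combination; a minimal sketch of the intent using the modern atomics API (not the intrinsics being discussed here):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn read_state(state: &AtomicUsize) -> usize {
    // On x86 this compiles to a plain `mov` (acquire ordering is free in the
    // hardware), but the compiler is still forbidden from hoisting later
    // loads above it. On ARM it additionally emits an acquiring load or a
    // barrier, matching (a) and (b) above.
    state.load(Ordering::Acquire)
}
```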


huonw commented Jul 30, 2013

Visiting for triage, appears to still be a problem (possibly WONTFIX because newrt will replace this code?).


bblum commented Jul 30, 2013

Yeah, there should maybe be a metabug for the list of stuff we can close once all the old rt code is gone.


bblum commented Aug 1, 2013

As of #8008 we now have the corresponding acquire barrier in the new runtime pipes. (Old runtime pipes still an issue.)

bors added a commit that referenced this issue Aug 1, 2013
The pipes compiler produced data types that encoded efficient and safe
bounded message passing protocols between two endpoints. It was also
capable of producing unbounded protocols.

It was useful research but was arguably done before its proper time.

I am removing it for the following reasons:

* In practice we used it only for producing the `oneshot` protocol and
  the unbounded `stream` protocol, and all communication in Rust uses those.
* The interface between the proto! macro and the standard library
  has a large surface area and was difficult to maintain through
  language and library changes.
* It is now written in an old dialect of Rust and generates code
  which would likely be considered non-idiomatic.
* Both the compiler and the runtime are difficult to understand,
  and likewise the relationship between the generated code and
  the library is hard to understand. Debugging is difficult.
* The new scheduler implements `stream` and `oneshot` by hand
  in a way that will be significantly easier to maintain.

This shouldn't be taken as an indication that 'channel protocols'
for Rust are not worth pursuing again in the future.

Concerned parties may include: @graydon, @pcwalton, @eholk, @bblum

The most likely candidates for closing are #7666, #3018, #3020, #7021, #7667, #7303, #3658, #3295.

@thestinger commented:

Removed by #8170; the bug tracking a rewrite is #7668.
