Refactor sync::Once #65719

Merged
merged 12 commits into rust-lang:master from pitdicker:refactor_sync_once on Nov 10, 2019

Conversation

@pitdicker (Contributor) commented Oct 23, 2019

std::sync::Once contains some tricky code to park and unpark waiting threads. once_cell has very similar code copied from here. I tried to add more comments and refactor the code to make it more readable (at least in my opinion). My PR to once_cell was rejected, because it is an advantage to remain close to the implementation in std, and because I made a mess of the atomic orderings. So now a PR here, with similar changes to std::sync::Once!

The initial goal was to see if there is some way to detect reentrant initialization instead of deadlocking. No luck there yet, but you first have to understand and document the complexities of the existing code 😄.

Maybe not this entire PR will be acceptable, but I hope at least some of the commits can be useful.

Individual commits:

Rename state to state_and_queue

Just a more accurate description, although a bit wordy. It helped me on a first read through the code, where before state was used to encode a pointer into the nodes of a linked list.
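
To make the packing concrete, here is a minimal standalone sketch (the constant values are the ones used in the std source at the time; Waiter is a placeholder for the real node struct):

// The four states live in the two low bits; the remaining bits hold the
// head of the waiter queue (only meaningful while RUNNING).
const INCOMPLETE: usize = 0x0;
const POISONED: usize = 0x1;
const RUNNING: usize = 0x2;
const COMPLETE: usize = 0x3;
const STATE_MASK: usize = 0x3;

struct Waiter; // placeholder; the real struct holds thread/next/signaled

fn decode(state_and_queue: usize) -> (usize, *const Waiter) {
    let state = state_and_queue & STATE_MASK;                     // one of the four states
    let queue = (state_and_queue & !STATE_MASK) as *const Waiter; // list head
    (state, queue)
}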

Simplify loop conditions in RUNNING and add comments

In the relevant loop there are two things to be careful about:

  • make sure to enqueue the current thread only while still RUNNING, otherwise we will never be woken up (the status may have changed while trying to enqueue this thread).
  • pick up if another thread just replaced the head of the linked list.

Because the first check was part of the condition of the while loop, the rest of the parking code also had to live in that loop. It took me a while to get the subtlety here, and it should now be clearer.

Also call out that we really have to wait until signaled, otherwise we leave a dangling reference.
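
A hand-written sketch of the resulting loop (close to, but not identical to, the PR's wait; the Waiter fields use Cell because shared references to the node cross threads):

use std::cell::Cell;
use std::ptr;
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::thread::{self, Thread};

const RUNNING: usize = 0x2;
const STATE_MASK: usize = 0x3;

struct Waiter {
    thread: Cell<Option<Thread>>,
    signaled: AtomicBool,
    next: Cell<*const Waiter>,
}

fn wait(state_and_queue: &AtomicUsize, mut current: usize) {
    let node = Waiter {
        thread: Cell::new(Some(thread::current())),
        signaled: AtomicBool::new(false),
        next: Cell::new(ptr::null()),
    };
    loop {
        // (1) Enqueue only while still RUNNING; if the state changed,
        //     nobody will ever wake us, so return and let the caller
        //     re-examine the state.
        if current & STATE_MASK != RUNNING {
            return;
        }
        // (2) Point our node at the current head; a failed CAS below means
        //     another thread replaced the head (or the state), so retry.
        node.next.set((current & !STATE_MASK) as *const Waiter);
        let me = &node as *const Waiter as usize | RUNNING;
        let old = state_and_queue.compare_and_swap(current, me, Ordering::Release);
        if old != current {
            current = old;
            continue;
        }
        // Enqueued. We really have to wait until signaled: the waking thread
        // holds a pointer into our stack frame until it Release-stores true.
        while !node.signaled.load(Ordering::Acquire) {
            thread::park();
        }
        return;
    }
}

(compare_and_swap was the API at the time; with Release it gets a Release success ordering and a Relaxed failure load, matching the ordering rationale below. On modern Rust one would write compare_exchange.)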

Don't mutate waiter nodes

Previously while waking up other threads the managing thread would take() out the Thread struct and use that to unpark the other thread. It is just as easy to clone it, as it is only 24 bytes. This way Waiter.thread does not need an Option, Waiter.next does not need to be a mutable pointer, and there is less data that needs to be synchronised by later atomic operations.

Turn Finish into WaiterQueue

In my opinion these changes make it just a bit more clear what is going on with the thread parking stuff.

Move thread parking to a separate function

Maybe controversial, but with this last commit all the thread parking stuff has a reasonably clean separation from the state changes in Once. This is arguably the trickier part of Once, compared to the loop in call_inner. It may make it easier to reuse parts of this code (see rust-lang/rfcs#2788 (comment)). Not sure if that ever becomes a reality though.

Reduce the amount of comments in call_inner

With the changes from the previous commits, the code pretty much speaks for itself, and the amount of comments is hurting readability a bit.

Use more precise atomic orderings

Now the hard one. This is the one change that is more than a pure refactor or a change of comments.

I have a dislike for using SeqCst everywhere, because it hides what the atomics are supposed to do. The existing rationale was:

This cold path uses SeqCst consistently because the performance difference really does not matter there, and SeqCst minimizes the chances of something going wrong.

But in my opinion, having the proper orderings and some explanation helps to understand what is going on. My rationale for the used orderings (also included as comment):

When running Once we deal with multiple atomics: Once.state_and_queue and an unknown number of Waiter.signaled.

  • state_and_queue is used (1) as a state flag, (2) for synchronizing the data that is the result of the Once, and (3) for synchronizing Waiter nodes.
    • At the end of the call_inner function we have to make sure the result of the Once is acquired. So every load that may be the one to observe COMPLETED must have at least Acquire ordering, which means all three of them.
    • WaiterQueue::Drop is the only place that may store COMPLETED, and must do so with Release ordering to make the result available.
    • wait inserts Waiter nodes as a pointer in state_and_queue, and needs to make the nodes available with Release ordering. The load in its compare_and_swap can be Relaxed because it only has to compare the atomic, not to read other data.
    • WaiterQueue::Drop must see the Waiter nodes, so it must load state_and_queue with Acquire ordering.
    • There is just one store where state_and_queue is used only as a state flag, without having to synchronize data: switching the state from INCOMPLETE to RUNNING in call_inner. This store can be Relaxed, but the read has to be Acquire because of the requirements mentioned above.
  • Waiter.signaled is both used as a flag, and to protect a field with interior mutability in Waiter. Waiter.thread is changed in WaiterQueue::Drop which then sets signaled with Release ordering. After wait loads signaled with Acquire and sees it is true, it needs to see the changes to drop the Waiter struct correctly.
  • There is one place where the two atomics Once.state_and_queue and Waiter.signaled come together, and might be reordered by the compiler or processor. Because both use Acquire ordering such a reordering is not allowed, so no need for SeqCst. (A minimal sketch of these orderings follows below.)
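
The sketch (compilable, but a distillation rather than the actual std code; the AcqRel swap combines the Acquire load and the Release store of COMPLETED described above):

use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};

const INCOMPLETE: usize = 0x0;
const RUNNING: usize = 0x2;
const COMPLETE: usize = 0x3;

fn orderings_at_a_glance(state_and_queue: &AtomicUsize, signaled: &AtomicBool) {
    // Every load that may be the one to observe COMPLETE: at least Acquire,
    // so the result of the Once is synchronized.
    let _ = state_and_queue.load(Ordering::Acquire);

    // INCOMPLETE -> RUNNING only flips a flag; the store side could be
    // Relaxed, but the load side must stay Acquire (see above).
    let _ = state_and_queue.compare_and_swap(INCOMPLETE, RUNNING, Ordering::Acquire);

    // Finishing in WaiterQueue::drop: Acquire the waiter nodes AND Release
    // the result, hence an acquire-release swap.
    let _ = state_and_queue.swap(COMPLETE, Ordering::AcqRel);

    // Per waiter: the waker Release-stores `signaled` after taking the
    // Thread handle; the parked thread Acquire-loads it before touching
    // its node again.
    signaled.store(true, Ordering::Release);
    let _ = signaled.load(Ordering::Acquire);
}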

cc @matklad

@rust-highfive (Collaborator)

r? @KodrAus

(rust_highfive has picked a reviewer for you, use r? to override)

@rust-highfive added the S-waiting-on-review (Status: Awaiting review from the assignee but also interested parties.) label Oct 23, 2019
@pitdicker (Contributor, Author)

Thinking a bit more about this, I don't think the commit 'Don't mutate waiter nodes' is the smartest move. Let me work on it a little more before anyone takes a look to review (will probably take me a day).

@pitdicker (Contributor, Author)

Added another commit.

In Waiter use interior mutability and use no raw pointer

The existing code first created a mutable reference to a Waiter struct, turned it into a raw mutable pointer, and passed it to another thread. Then it kept using a reference to the atomic signaled inside that Waiter. If I understand things correctly that is unsound, but this is at the edge of my knowledge.
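
For readers following along, a reconstruction of the pattern being described (illustrative only; hand_to_waker is a hypothetical stand-in for publishing the node to the waking thread):

use std::sync::atomic::{AtomicBool, Ordering};
use std::thread::{self, Thread};

// The pre-PR shape of the node, as described above.
struct Waiter {
    thread: Option<Thread>,
    signaled: AtomicBool,
    next: *mut Waiter,
}

fn hand_to_waker(_node: *mut Waiter) { /* stub: enqueue for the waking thread */ }

fn old_shape() {
    let mut node = Waiter {
        thread: Some(thread::current()),
        signaled: AtomicBool::new(false),
        next: std::ptr::null_mut(),
    };
    // A unique `&mut` is turned into a raw pointer for another thread...
    hand_to_waker(&mut node as *mut Waiter);
    // ...while this thread keeps reading through `node` itself: two aliasing
    // paths, one of them derived from `&mut`.
    while !node.signaled.load(Ordering::Acquire) {
        thread::park();
    }
}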

Two options we can choose from instead:

  • Pass a raw non-mutable pointer to the other thread. In WaiterQueue::drop make a clone from the thread and next fields. This seemed suboptimal to me: cloning thread means cloning and dropping an Arc; and copying next can leave a dangling pointer around in the Waiter struct.
  • Pass a shared reference to the other thread. Use interior mutability, where WaiterQueue::drop swaps out the thread and next fields with None.

This commit goes the route with interior mutability, and minimizing the use of raw pointers.

cc @RalfJung, you may be interested in what I think are two cases of unsoundness in Once.

@oliver-giersch (Contributor)

From what I can see, this is not only a refactoring, but also a fix for UB (aliasing of a mutable reference). Very good catch!
I haven't looked through the code in its entirety, but some things stuck out:

  • I don't believe you need interior mutability for the stack waiter's next pointer/reference, since it is only updated during the CAS loop in the failure case, i.e. before the waiter becomes visible to other threads
  • I don't see what the RUNNING case is necessary for (in the original implementation as well): any bit pattern that is not 0, 1, 2 or 3 (for the various states) is necessarily a pointer to a waiter, so you could do away with this state and the bitwise AND required to extract the pointer
  • memory orderings are notoriously difficult to get correct ofc, but in my estimation the implementation does not require SeqCst at all

I have updated my own implementation of conquer_once to fix the UB and also added some comments explaining my reasoning on the various memory orderings. If you like, you can cross-reference and compare with your own assessments.

@pitdicker (Contributor, Author)

  • I don't believe you need interior mutability for the stack waiter's next pointer/reference, since it is only updated during the CAS loop in the failure case, i.e. before the waiter becomes visible to other threads

Yes, that is possible. I chose to also switch it back to None here when threads get woken up. This way there is no dangling pointer from one woken thread to another, and we can use normal references. Seemed just a bit nicer to me, but doesn't really matter.

  • I don't see what the RUNNING case is necessary for (in the original implementation as well): any bit pattern that is not 0, 1, 2 or 3 (for the various states) is necessarily a pointer to a waiter, so you could do away with this state and the bitwise AND required to extract the pointer

You probably also considered the case where there are no waiting threads and the pointer part is 0. If I understand it right you propose that both RUNNING (0x2) and any pattern not matching 0, 1, 2, or 3 correspond to RUNNING? That could work, but I don't yet see an advantage to warrant the subtlety.

  • memory orderings are notoriously difficult to get correct ofc, but in my estimation the implementation does not require SeqCst at all

Then we mostly agree. What did you think about the argument that state.load may be reordered by the processor with signaled.load? I see it as very implausible, but would want to use SeqCst just to be safe for these two.

@pitdicker (Contributor, Author) commented Oct 24, 2019

Your trick to use #[repr(align(4))] on Waiter in conquer-once is a good idea; it guarantees we have 2 bits for the state, even if the platform might want to align differently.

Also using the names StackWaiter and StackWaiter.ready seems nice.
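
The guarantee in miniature (a standalone sketch, not the PR code):

use std::mem::align_of;

// `#[repr(align(4))]` guarantees at least 4-byte alignment on every target,
// so the two low bits of a `*const Waiter` are always zero and free to
// carry the state.
#[repr(align(4))]
struct Waiter {
    // fields elided
}

fn main() {
    assert!(align_of::<Waiter>() >= 4);
    let node = Waiter {};
    assert_eq!(&node as *const Waiter as usize & 0b11, 0);
}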

@oliver-giersch (Contributor) commented Oct 24, 2019

Yes, that is possible. I chose to also switch it back to None here when threads get woken up. This way there is no dangling pointer from one woken thread to another, and we can use normal references. Seemed just a bit nicer to me, but doesn't really matter.

I would also argue that using a raw *const or perhaps a NonNull for the next pointer would be cleaner and also safer than storing and dereferencing references with an arbitrary lifetime. The lifetime does not have any meaning, given that the references point into the stacks of other blocked threads.
Also, if dangling references are really a problem/UB (which I am not necessarily convinced of, yet), then this might also be problematic, I think.

let mut curr: *const Waiter = ...;
while !curr.is_null() {
    let (next, thread) = unsafe {
        // `thread` is assumed to be a `Cell<Option<Thread>>`, so `take()`
        // works behind `*const`; read `next` and take `thread` before the
        // Release store, because afterwards the waiter may pop its frame.
        let next = (*curr).next;
        let thread = (*curr).thread.take().unwrap();
        (*curr).signaled.store(true, Release);
        (next, thread)
    };
    thread.unpark();
    curr = next;
}

This looks cleaner to me; alternatively you could also create a temporary reference inside the unsafe block, of course.

You probably also considered the case where there are no waiting threads and the pointer part is 0. If I understand it right you propose that both RUNNING (0x2) and any pattern not matching 0, 1, 2, or 3 correspond to RUNNING? That could work, but I don't yet see an advantage to warrant the subtlety.

Sorry, I wrote this from memory and have confused some things.
In my implementation, the initial state is 1 and the first accessing thread sets it to 0 (or null); all further threads interpret it as a queue and insert their Waiter pointers. It's a minor thing, but it means you don't need bitwise logic and the STATE_MASK constant for enqueuing or dequeuing waiters (a reconstruction is sketched below).
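
A reconstruction of that encoding, with hypothetical constant names and the COMPLETE/POISONED values left out for brevity:

const UNINIT: usize = 1;      // the initial state
const IN_PROGRESS: usize = 0; // set by the first thread; doubles as an empty queue

struct Waiter; // placeholder

enum Snapshot {
    Uninit,
    Running(*const Waiter), // null pointer: running, no waiters yet
}

fn interpret(state: usize) -> Snapshot {
    match state {
        UNINIT => Snapshot::Uninit,
        IN_PROGRESS => Snapshot::Running(std::ptr::null()),
        // Anything else is directly the queue head: a waiter's address is
        // never 0 or 1, so no mask or shift is needed.
        head => Snapshot::Running(head as *const Waiter),
    }
}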

Then we mostly agree. What did you think about the argument that state.load may be reordered by the processor with signaled.load? I see it as very implausible, but would want to use SeqCst just to be safe for these two.

I believe the memory model does not permit the reordering of acquire loads among themselves, as it would violate the "no reads or writes in the current thread can be reordered before this load" rule for the load that is sequenced first. But I'd have to find a reference for that.

@pitdicker (Contributor, Author)

I would also argue that using a raw *const or perhaps a NonNull for the next pointer would be cleaner and also safer than storing and dereferencing references with an arbitrary lifetime. The lifetime does not have any meaning, given that the references point into the stacks of other blocked threads.

I am not married to the idea, but wanted to give it a try. Not smart of me, as I also wanted this PR to be mostly uncontroversial. I do believe the lifetimes work out, but will just as easily revert that part if other reviewers feel the same way.

@pitdicker (Contributor, Author) commented Oct 24, 2019

IIRC the memory model does not permit the reordering of non-relaxed atomic operations among themselves, but I'd have to find a reference for that.

I was pointed by @matklad to https://internals.rust-lang.org/t/idea-replace-seqcst-with-seqcst-acq-rel-acqrel/11028/3.

Edit: reading that post again, and the one right after it, I also do believe Acquire should be enough.

@rust-highfive (Collaborator)

The job x86_64-gnu-llvm-6.0 of your PR failed (pretty log, raw log). Through arcane magic we have determined that the following fragments from the build log may contain information about the problem.

2019-10-24T15:47:08.2751687Z ##[command]git remote add origin https://github.com/rust-lang/rust
2019-10-24T15:47:08.9890615Z ##[command]git config gc.auto 0
2019-10-24T15:47:08.9896207Z ##[command]git config --get-all http.https://github.com/rust-lang/rust.extraheader
2019-10-24T15:47:08.9898624Z ##[command]git config --get-all http.proxy
2019-10-24T15:47:08.9903284Z ##[command]git -c http.extraheader="AUTHORIZATION: basic ***" fetch --force --tags --prune --progress --no-recurse-submodules --depth=2 origin +refs/heads/*:refs/remotes/origin/* +refs/pull/65719/merge:refs/remotes/pull/65719/merge
---
2019-10-24T15:54:29.9707467Z    Compiling panic_abort v0.0.0 (/checkout/src/libpanic_abort)
2019-10-24T15:54:30.1286097Z    Compiling backtrace v0.3.37
2019-10-24T15:54:30.6239718Z    Compiling rustc-std-workspace-alloc v1.99.0 (/checkout/src/tools/rustc-std-workspace-alloc)
2019-10-24T15:54:30.7424519Z    Compiling panic_unwind v0.0.0 (/checkout/src/libpanic_unwind)
2019-10-24T15:54:36.1413315Z error[E0594]: cannot assign to `node.next`, as `node` is not declared as mutable
2019-10-24T15:54:36.1415242Z     |
2019-10-24T15:54:36.1415800Z 438 |     let node = Waiter {
2019-10-24T15:54:36.1416772Z     |         ---- help: consider changing this to be mutable: `mut node`
2019-10-24T15:54:36.1417306Z ...
2019-10-24T15:54:36.1417932Z 456 |         node.next = (old_head_and_status & !STATE_MASK) as *const Waiter;
2019-10-24T15:54:36.1418865Z 
2019-10-24T15:54:36.7169759Z error: aborting due to previous error
2019-10-24T15:54:36.7169852Z 
2019-10-24T15:54:36.7690610Z error: could not compile `std`.
---
2019-10-24T15:54:36.7810513Z   local time: Thu Oct 24 15:54:36 UTC 2019
2019-10-24T15:54:36.9341968Z   network time: Thu, 24 Oct 2019 15:54:36 GMT
2019-10-24T15:54:36.9346727Z == end clock drift check ==
2019-10-24T15:54:39.8247361Z 
2019-10-24T15:54:39.8367738Z ##[error]Bash exited with code '1'.
2019-10-24T15:54:39.8403925Z ##[section]Starting: Checkout
2019-10-24T15:54:39.8406148Z ==============================================================================
2019-10-24T15:54:39.8406268Z Task         : Get sources
2019-10-24T15:54:39.8406319Z Description  : Get sources from a repository. Supports Git, TfsVC, and SVN repositories.

I'm a bot! I can only do what humans tell me to, so if this was not helpful or you have suggestions for improvements, please ping or otherwise contact @TimNN. (Feature Requests)

@pitdicker (Contributor, Author)

@oliver-giersch I have reverted the part about using normal references, now things are back to raw pointers as at the start of this PR. And I have changed the stuff about how two loads might need to be SeqCst.
The commits are rebased, and the first post is partly updated to keep things a bit sane.

@oliver-giersch (Contributor)

Looks great to me: a rather modest refactoring that fixes at least one clear instance of UB. I'd think this would be a no-brainer if not for the relaxed orderings.
I've looked through them and they match my own assessments in most cases and are only stronger in some, but some others should probably also review these.
One question though: what was your reasoning for going with acquire-release here instead of release?

@pitdicker (Contributor, Author)

One question though: what was your reasoning for going with acquire-release here instead of release?

Because it has to acquire the nodes from the waiting threads before walking the queue to wake them up.
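
Concretely, a sketch of the finishing side under that reasoning (illustrative; the real WaiterQueue::drop differs in details such as poisoning):

use std::cell::Cell;
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::thread::Thread;

const COMPLETE: usize = 0x3;
const STATE_MASK: usize = 0x3;

struct Waiter {
    thread: Cell<Option<Thread>>,
    signaled: AtomicBool,
    next: Cell<*const Waiter>,
}

fn finish(state_and_queue: &AtomicUsize) {
    // AcqRel: Release publishes the completed result to later Acquire loads
    // of the state; Acquire synchronizes with the Release CAS in `wait`, so
    // the `Waiter` nodes walked below are fully visible.
    let old = state_and_queue.swap(COMPLETE, Ordering::AcqRel);
    let mut queue = (old & !STATE_MASK) as *const Waiter;
    unsafe {
        while !queue.is_null() {
            // Read `next` and take `thread` before the Release store: once
            // `signaled` is true, the waiter may pop its stack frame.
            let next = (*queue).next.get();
            let thread = (*queue).thread.replace(None).unwrap();
            (*queue).signaled.store(true, Ordering::Release);
            queue = next;
            thread.unpark();
        }
    }
}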

@oliver-giersch (Contributor)

Because it has to acquire the nodes from the waiting threads before walking the queue to wake them up.

Yeah, I believe you are correct. It's good to do these exercises from time to time.

@JohnCSimon (Member)

Ping from triage.
@KodrAus can you please review this PR?
cc: @pitdicker @oliver-giersch
Thank you!

@KodrAus (Contributor) commented Nov 3, 2019

r? @Amanieu

@rust-highfive assigned Amanieu and unassigned KodrAus Nov 3, 2019
@RalfJung (Member) commented Nov 4, 2019

(moved to #65796)

@pitdicker (Contributor, Author)

I have moved things around so there is no more mutation of node happening inside wait. Would this be up to scratch?

@RalfJung (Member) commented Nov 4, 2019

I can't at a glance judge the functional correctness of this, but node isn't mutable any more so there are likely no uniqueness assertions. There should be comments explaining why that's important and why we have Cell.

I'd personally still prefer raw pointers over interior mutability, but well, I won't fight the one who does the actual work here. ;)

@pitdicker (Contributor, Author)

There should be comments explaining why that's important and why we have Cell.

There is something at https://github.com/rust-lang/rust/pull/65719/files#diff-80754b8db8699947d7b2a43a9cc17dedR169, but I'll add more.

I'd personally still prefer raw pointers over interior mutability, but well, I won't fight the one who does the actual work here. ;)

Thank you! I don't mind which one is used, but I have some trouble picturing how the code should look using raw pointers :-)

@pitdicker (Contributor, Author) commented Nov 4, 2019

There is one more thing I was playing with, but didn't include until now.

For the once_cell crate I was playing with the atomic Consume ordering. For that I wrote a test that catches insufficient synchronization very well when run on my phone. And to make sure there was not some other synchronization happening that hides shortcomings, I wanted to find a way to remove the acquire from the loop that checks node.signaled.

The acquire is currently necessary because the node is read shortly thereafter, when it gets dropped. But if it was able to successfully add itself to the queue, there isn't anything that needs dropping: another thread takes out thread and takes care of dropping that.

Now what if we wrap node inside ManuallyDrop (and are a little careful)? It seems to me we then no longer need to acquire node again after getting unparked, because we never read from it again.

Is that a correct conclusion, that we would not need to acquire data if we don't read it anymore? Even if it lives on our stack etc.? Is it true that using a cell prevents optimistic reads, if those are even a concern?

@RalfJung (Member) commented Nov 4, 2019

For the once_cell crate I was playing with the atomic Consume ordering

(Stable) Rust doesn't support "consume", and for a good reason -- nobody knows how to properly specify it. Let's just stick with the sane and well-understood release/acquire orderings.

@pitdicker (Contributor, Author)

I don't want to suggest using it here. But I was curious about if we wouldn't need to do an acquire if we don't do a read from the non-atomic fields of node after getting unparked.

@RalfJung (Member) commented Nov 4, 2019

That sentence contains too many negations for me to be able to parse it.^^

@pitdicker (Contributor, Author)

Let's forget about it for now. It was something I was wondering about, but not planning for this PR.

I have amended the last commit to keep the comments in wait helpful, with this at the top of wait:

Note: the following code was carefully written to avoid creating a mutable reference to node that gets aliased.

@Amanieu (Member) commented Nov 7, 2019

The code LGTM. There are just a few formatting issues that a quick rustfmt can fix.

@pitdicker (Contributor, Author)

@Amanieu Thank you for reviewing. I have to admit this is my first time running rustfmt.

@Amanieu (Member) commented Nov 9, 2019

@bors r+

@bors (Contributor) commented Nov 9, 2019

📌 Commit b05e200 has been approved by Amanieu

@bors added the S-waiting-on-bors (Status: Waiting on bors to run and complete tests.) label and removed the S-waiting-on-review label Nov 9, 2019
JohnTitor added a commit to JohnTitor/rust that referenced this pull request Nov 10, 2019
Refactor sync::Once

bors added a commit that referenced this pull request Nov 10, 2019
Rollup of 7 pull requests

Successful merges:

 - #65719 (Refactor sync::Once)
 - #65831 (Don't cast directly from &[T; N] to *const T)
 - #66048 (Correct error in documentation for Ipv4Addr method)
 - #66058 (Correct deprecated `is_global` IPv6 documentation)
 - #66216 ([mir-opt] Handle return place in ConstProp and improve SimplifyLocals pass)
 - #66217 (invalid_value lint: use diagnostic items)
 - #66235 (rustc_metadata: don't let LLVM confuse rmeta blobs for COFF object files.)

Failed merges:

r? @ghost
@bors (Contributor) commented Nov 10, 2019

⌛ Testing commit b05e200 with merge 57a5f92...

@bors merged commit b05e200 into rust-lang:master Nov 10, 2019
@pitdicker deleted the refactor_sync_once branch November 10, 2019 07:29