Don't use an asynchronous mutex #74

Closed
Darksonn opened this issue Jul 21, 2020 · 6 comments


@Darksonn

The mutex around internals is never held locked across an .await as far as I can see, so it should not be using an asynchronous mutex. Changing to a synchronous mutex would allow you to fix the following issues:

Spin lock

https://github.com/khuey/bb8/blob/e2978dfb1ac86842901e344efcf198dc75fab488/bb8/src/lib.rs#L628-L632

Use of block_on in destructor

https://github.com/khuey/bb8/blob/e2978dfb1ac86842901e344efcf198dc75fab488/bb8/src/lib.rs#L798-L809
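For illustration, here is a minimal sketch of the kind of change I mean. The types and method names are placeholders rather than bb8's actual internals; the point is that with a std::sync::Mutex, the destructor can lock directly and async callers simply drop the guard before any .await:

```rust
use std::sync::Mutex;

// Placeholder types, not bb8's real internals; the point is only that a
// std::sync::Mutex guard is always released before any .await.
struct SharedPool<C> {
    internals: Mutex<Vec<C>>,
}

impl<C> SharedPool<C> {
    // Usable directly from the connection wrapper's Drop impl: no block_on
    // and no spin loop, since lock() just blocks for the (tiny) duration of
    // another thread's critical section.
    fn put_back(&self, conn: C) {
        self.internals.lock().unwrap().push(conn);
    }

    // Async callers lock, pop, and drop the guard before awaiting anything
    // (for example, before opening a replacement connection).
    fn try_take(&self) -> Option<C> {
        self.internals.lock().unwrap().pop()
    }
}
```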

@djc
Owner

djc commented Jul 23, 2020

I guess I'm using the async mutex for the other reason, namely that I don't want bb8 to block the application task if the mutex is held by another task. Are you saying that's not a good reason to use an async mutex?

Edit: also, thanks for reviewing the code at this level of detail!

@Darksonn
Author

Darksonn commented Jul 23, 2020

As long as the mutex is only ever held for very short amounts of time, the blocking it introduces is not an issue. The problem with blocking the thread is that the task doesn't yield back to the executor, which prevents other tasks from running; very short durations of blocking don't cause that problem. This is the same reason it is okay to run small amounts of CPU-bound work inside an asynchronous task. Note that asynchronous mutexes have a synchronous mutex inside them.

I have written a bit about this here, and we also talk about it in the mini-redis example. Additionally, our new tutorial has a section on this.

It is also worth mentioning that your destructor can deadlock in some edge cases, as Tokio's mutex hooks into Tokio's coop system.
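To make the "held only for very short amounts of time" rule concrete, here is a small sketch (not bb8 code) of how a synchronous lock is usually scoped inside an async function, so that the guard is dropped before any .await:

```rust
use std::sync::Mutex;

// Sketch only: the guard lives in an inner block, so it is released before
// the task reaches an .await point, and the lock is held only for a couple
// of cheap operations.
async fn bump_then_yield(counter: &Mutex<u64>) {
    {
        let mut n = counter.lock().unwrap();
        *n += 1;
    } // std guard dropped here

    // Any awaiting happens only after the lock has been released.
    tokio::task::yield_now().await;
}
```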

@djc
Owner

djc commented Jul 23, 2020

Thanks, that's helpful. I would argue in this case "Additionally, when you do want shared access to an IO resource, it is often better to spawn a task to manage the IO resource, and to use message passing to communicate with that task." is applicable (and I think this would likely make the implementation more clear). What do you think?

@Darksonn
Author

Darksonn commented Jul 23, 2020

That is certainly also a reasonable approach. The main disadvantage is that it requires spawning, tying you closely to the executor, but you are already doing that. In my opinion, both are reasonable approaches here.

If you want to pursue this direction, you could quite reasonably make this part of the schedule_reaping task. Here is one reasonable way to approach it:

  1. Use a bounded mpsc channel to submit requests for connections. The message should include a oneshot channel for responding with the connection.
  2. For returning connections after using them, you can use another mpsc channel to send them to the task. This must be an unbounded channel, as you cannot otherwise send on the channel from a destructor. I don't think this is an issue, as backpressure here is not necessary.
  3. When no connections are available, you can either open the connection in the task and respond with the new connection, or respond with a message that tells the requester to open a connection themselves. The exact approach depends on whether you want to be opening multiple connections at the same time.

I imagine your task could look something like this:

```rust
let mut open_connections = /* some sort of container */;
loop {
    tokio::select! {
        msg = connection_request.recv() => {
            if let Some(oneshot) = msg {
                // ignore the error: it just means the requester gave up
                let _ = oneshot.send(open_connections.get_connection());
            } else {
                // the pool has been dropped
                return;
            }
        },
        msg = return_connection.recv() => {
            // you will be keeping a sender alive in `open_connections`,
            // so this can't fail
            open_connections.put_back(msg.unwrap());
        },
        _ = reaping_interval.tick(), if !open_connections.is_empty() => {
            open_connections.reap();
        },
    }
}
```

Since open_connections is owned only by the task, no mutex is necessary. You can ignore send errors from the destructor, as those just mean the pool has been dropped, and the task has exited.
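For completeness, the requesting side of this scheme could look roughly like the sketch below, using tokio's mpsc and oneshot channels. Pool and PooledConn are illustrative names rather than bb8's API: get sends a request carrying a oneshot sender for the reply, and the wrapper's destructor sends the connection back on the unbounded channel.

```rust
use tokio::sync::{mpsc, oneshot};

// Illustrative names, not bb8's API. `Conn` stands in for the pooled
// connection type managed by the task above.
struct Pool<Conn> {
    requests: mpsc::Sender<oneshot::Sender<Conn>>,
    returns: mpsc::UnboundedSender<Conn>,
}

struct PooledConn<Conn> {
    conn: Option<Conn>,
    returns: mpsc::UnboundedSender<Conn>,
}

impl<Conn> Pool<Conn> {
    async fn get(&self) -> Option<PooledConn<Conn>> {
        let (tx, rx) = oneshot::channel();
        // Bounded send: applies backpressure to checkout requests.
        self.requests.send(tx).await.ok()?;
        // The task answers on the oneshot with a connection.
        let conn = rx.await.ok()?;
        Some(PooledConn {
            conn: Some(conn),
            returns: self.returns.clone(),
        })
    }
}

impl<Conn> Drop for PooledConn<Conn> {
    fn drop(&mut self) {
        if let Some(conn) = self.conn.take() {
            // Unbounded send never waits, so it is fine in a destructor;
            // an error only means the pool task has already shut down.
            let _ = self.returns.send(conn);
        }
    }
}
```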

@djc
Owner

djc commented Jul 23, 2020

Thanks! Do you have any intuition for which approach will result in lower latencies?

@Darksonn
Author

Not really.
