Asynchronous mutex #167
Thanks for the report. You bring up some good points. I do have a few thoughts.

Given the asynchronous nature of Tokio with futures + lightweight tasks, the ideal strategy for linearizing data access is message passing. So, if there is a piece of data that you wish to guard, you assign ownership of it to a task and "access" the data by sending messages to that task. At a high level, this is the actor pattern.

That being said, message passing comes with a synchronization cost no matter what, and the messages being passed back and forth will probably require some sort of allocation (and allocators internally have synchronization requirements too). So, all of this means that message passing is not always going to be cheaper than just using a mutex in the first place. It all depends on how much work is being done in the critical section.

Another point is that you don't have to use a single-threaded event loop; you could run all the events on a thread pool (for example futures-cpupool), in which case blocking a thread is not that big of a deal.

tl;dr, an asynchronous mutex is not going to be a cheap construct, and what to use is not always clear cut. Anyway, I'm going to close this issue, as any asynchronous mutex implementation would belong in another library, but I hope the explanation made sense :)
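To make the actor-pattern suggestion above concrete, here is a minimal std-only sketch: a spawned task exclusively owns a counter, and callers "access" it by sending messages over a channel rather than taking a lock. The `Command` type and `spawn_counter` helper are illustrative names, not part of any Tokio API (a real Tokio version would use async tasks and `futures` channels instead of threads).

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical command type: callers interact by message, never by lock.
enum Command {
    Increment,
    Get(mpsc::Sender<u64>), // carries a reply channel
}

// Spawn a task (a plain thread here, for a dependency-free sketch)
// that exclusively owns the state.
fn spawn_counter() -> mpsc::Sender<Command> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut count: u64 = 0; // owned by this task alone; no Mutex needed
        for cmd in rx {
            match cmd {
                Command::Increment => count += 1,
                Command::Get(reply) => {
                    let _ = reply.send(count);
                }
            }
        }
    });
    tx
}

fn main() {
    let counter = spawn_counter();
    counter.send(Command::Increment).unwrap();
    counter.send(Command::Increment).unwrap();

    let (reply_tx, reply_rx) = mpsc::channel();
    counter.send(Command::Get(reply_tx)).unwrap();
    let count = reply_rx.recv().unwrap();
    println!("count = {}", count); // prints "count = 2"
}
```

Because the channel serializes commands, access is linearized without any lock, but note the costs the comment above mentions: each `Command::Get` allocates a reply channel, and the channel itself synchronizes internally.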
Thanks for the thoughtful and thorough response. I think it's worth pointing out the dangers of using a mutex in Tokio code somewhere in the documentation, since it already seems to be a popular hammer that people are reaching for. Having previously been a long-time core developer of another asynchronous event library for Python, I have a strong suspicion that people will end up blaming Tokio itself when their apps are slow or unresponsive because they threw mutexes around willy-nilly to deal with shared state.
In case anyone finds this issue while looking for a futures-based asynchronous mutex implementation, there is now one: https://github.com/proman21/futures-mutex |
I skimmed through it; I'm not sure how it is safe under the current Future contract. This seems to have the same issues that …
Well, that's too bad! I haven't actually looked at the code at all; I just noticed someone talking about it on IRC. Can you explain just a little bit more about what's unsafe about it (or link to something about …)?
I filed a bug: proman21/futures-mutex#1
This is actually something I've been wrestling with in a futures- and Tokio-based library of my own. I've had to reimplement my concurrent access strategy many times. At this point I've settled on an actor model to make this access cleaner and more manageable. I think that's the best option for my library, and it may be a strategy that works well for others.
I think it's important for there to be an asynchronous mutex somewhere in the Tokio ecosystem.

Imagine we have a web server that handles two endpoints: `/get` and `/info`. `/info` just returns some static data from memory without acquiring a mutex. `/get` acquires a mutex and returns some information out of it.

If `/get` uses a normal blocking mutex, then one client can make thousands of requests to `/get` all at once, and they will be serialized, which is what we want (presumably). However, if another client requests `/info` during this process, they will be unnecessarily blocked by the mutex, since the mutex not only serializes `/get`, it also blocks the entire event loop -- the very thing one must avoid when doing event-oriented programming.

I may have missed development toward an async mutex somewhere, but I googled around and searched some of the Tokio issue trackers and couldn't find one -- however, I am seeing a lot of Tokio-using code using mutexes.
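The hazard described above can be sketched without any async runtime at all: when a single thread plays the role of the event loop and serves `/get` then `/info` in order, a blocking mutex held elsewhere stalls both handlers. The `handle_get`/`handle_info` names and the timings are illustrative assumptions, not from any real server.

```rust
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::{Duration, Instant};

// Hypothetical handlers: /get takes the lock, /info is lock-free.
fn handle_get(data: &Mutex<u64>) -> u64 {
    *data.lock().unwrap() // blocks the calling thread until the lock is free
}

fn handle_info() -> &'static str {
    "static data" // touches no lock at all
}

fn main() {
    let data = Arc::new(Mutex::new(7u64));

    // Another client holds the lock for 200 ms, as a slow /get would.
    let held = Arc::clone(&data);
    let holder = thread::spawn(move || {
        let _guard = held.lock().unwrap();
        thread::sleep(Duration::from_millis(200));
    });
    thread::sleep(Duration::from_millis(50)); // let the holder acquire first

    // One "event loop" thread serves /get then /info in order.
    let start = Instant::now();
    let value = handle_get(&data); // stalls here on the blocking mutex...
    let info = handle_info();      // ...so /info waits too, needlessly
    let waited = start.elapsed();

    println!("/get = {}, /info = {}", value, info);
    assert!(waited >= Duration::from_millis(100)); // /info was delayed by /get's lock
    holder.join().unwrap();
}
```

An asynchronous mutex would instead return a future from the lock acquisition, so the task serving `/get` could yield back to the event loop while waiting, letting `/info` run immediately.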