Use Notify instead of InternalsGuard #164
Conversation
Codecov Report

@@             Coverage Diff              @@
##              main     #164      +/-   ##
==========================================
+ Coverage   72.12%   72.96%   +0.84%
==========================================
  Files           6        6
  Lines         599      603       +4
==========================================
+ Hits          432      440       +8
+ Misses        167      163       -4

☔ View full report in Codecov by Sentry.
Has there been any progress with the tests? Is this PR marked as 'blocked', waiting for some discussion or support ... ?
Not really, just waiting for djc to take a look.
Sorry for taking so long to look at this. So if I understand correctly, this fix will make the pool less fair? And also, more tasks have to do work if a number of them are waiting for a connection to become available, whereas previously they would just spend that time sleeping? Because, while the previous design with
I don't think so. `Notify` is fair according to the docs and
while waiting on djc/bb8#164 to ship
I took a look at the implementation of `Notify` [1].

[1] https://docs.rs/tokio/latest/src/tokio/sync/notify.rs.html#985
I tried to simplify this a bit, please take a look at #186. |
Potential fix for #154
Problem
`InternalsGuard` is problematic because of the deadlock. Using a `ReentrantLock` is not possible because both `put` and `drop` hold a mutable reference to the internals. Using `RefCell` won't work either, because it doesn't allow two or more simultaneous mutable borrows, which is what would be required to make this work and which would violate Rust's safety guarantees.
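A minimal sketch of the borrow problem, using a hypothetical `Internals` type rather than bb8's actual code:

```rust
use std::cell::RefCell;

// Hypothetical stand-in for the pool internals, not bb8's actual type.
struct Internals {
    idle: Vec<u32>,
}

fn main() {
    let internals = RefCell::new(Internals { idle: vec![1, 2, 3] });

    // First mutable borrow, e.g. held while `put` returns a connection.
    let _put_borrow = internals.borrow_mut();

    // A second mutable borrow, e.g. from `drop` touching the same
    // internals, panics at runtime: RefCell allows only one active
    // `borrow_mut` at a time.
    let _drop_borrow = internals.borrow_mut(); // panics: already mutably borrowed
}
```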
Possible Solution

Switch to Tokio's `Notify`, which provides a fair queue for waiters. The pool no longer internally guarantees that connections are given to the tasks that have waited the longest, but this fairness may be good enough if the Tokio scheduler and the kernel scheduler ensure fairness as well. Starvation is possible if the caller infinitely retries calls to `get`.
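A rough sketch of the `Notify`-based waiter queue, assuming hypothetical `Pool` and `Connection` types (not bb8's actual internals):

```rust
use std::collections::VecDeque;
use std::sync::Mutex;
use tokio::sync::Notify;

struct Connection;

// A minimal sketch, not bb8's actual internals: idle connections sit
// behind a synchronous Mutex, and `Notify` queues the waiting tasks.
struct Pool {
    idle: Mutex<VecDeque<Connection>>,
    available: Notify,
}

impl Pool {
    async fn get(&self) -> Connection {
        loop {
            // Create the `Notified` future *before* checking for an idle
            // connection, so a `notify_one` that races with the check is
            // stored as a permit rather than lost.
            let notified = self.available.notified();

            if let Some(conn) = self.idle.lock().unwrap().pop_front() {
                return conn;
            }

            // Wait for `put` to signal a returned connection; waiters
            // are queued and woken in FIFO order.
            notified.await;
        }
    }

    fn put(&self, conn: Connection) {
        self.idle.lock().unwrap().push_back(conn);
        // Wake the longest-waiting `get`, if any.
        self.available.notify_one();
    }
}
```

Creating the `Notified` future before the check follows the pattern from the `Notify` docs: if `notify_one` fires between the check and the await while no waiter is registered, the notification is stored as a permit instead of being lost.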
Additionally, the entire `make_pooled` function is timed out using the `connection_timeout` setting. This ensures that there is no internal starvation and that `is_valid()` is timed out as well; if it isn't, we can starve all tasks and block the caller indefinitely while waiting on a future.
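A sketch of the timeout wrapping, building on the `Pool` above; `get_with_timeout` and its error handling are illustrative and do not match bb8's real API:

```rust
use std::time::Duration;
use tokio::time::{error::Elapsed, timeout};

// The point: the whole acquisition path, including validation, runs
// under a single `connection_timeout`.
async fn get_with_timeout(
    pool: &Pool, // the sketched `Pool` from above
    connection_timeout: Duration,
) -> Result<Connection, Elapsed> {
    timeout(connection_timeout, async {
        let conn = pool.get().await;
        // In the real pool, `is_valid()` would run here; because it
        // executes inside the same `timeout`, a hung validation cannot
        // block the caller past `connection_timeout`.
        conn
    })
    .await
}
```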
Open Questions

The error forwarding is not clear to me. I removed it, but I'm not entirely sure what it does. I may need some help here to understand whether I broke something.