Reentrant deadlock #154
Comments
Thanks for digging into this! Would you be able to submit a PR to fix it?
There is a trivial way to fix it; however, I see this repo very recently had this change (#152), which makes that trivial fix impossible. Any other fix is going to require a design change in how the connection is managed throughout the code, which puts us at a crossroads here.
What kind of design changes? Would they introduce a lot of extra complexity, or just move the code around?
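For a concrete sense of one shape such a restructuring could take (a generic sketch only, not what bb8 actually does; `Internals` and `ConnGuard` are made-up names): if the guard owns the `MutexGuard` for the pool internals instead of re-locking in its `Drop`, dropping it inside the critical section no longer requires a second lock acquisition.

```rust
use std::sync::{Mutex, MutexGuard};

// Hypothetical sketch, not bb8's actual design: the guard owns the
// MutexGuard, so returning the connection in Drop needs no second lock().
struct Internals {
    idle: Vec<u32>, // stand-in for the pool's idle connections
}

struct ConnGuard<'a> {
    internals: MutexGuard<'a, Internals>,
    conn: Option<u32>,
}

impl Drop for ConnGuard<'_> {
    fn drop(&mut self) {
        if let Some(conn) = self.conn.take() {
            // The lock is already held via `self.internals`; just hand the
            // connection back without touching the Mutex again.
            self.internals.idle.push(conn);
        }
    }
}

fn main() {
    let internals = Mutex::new(Internals { idle: Vec::new() });

    let guard = ConnGuard {
        internals: internals.lock().unwrap(),
        conn: Some(42),
    };

    // Dropping the guard mid-critical-section is now safe: Drop does not
    // try to re-acquire the (non-reentrant) lock.
    drop(guard);
    assert_eq!(internals.lock().unwrap().idle, vec![42]);
}
```

This is not a drop-in answer for an async pool, though: a `std::sync::MutexGuard` is not `Send`, so it cannot be held across `.await` in a future spawned on a multi-threaded runtime, which is presumably part of why a broader design change is being discussed.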
@nmldiegues I think I am running into the same issue, and I am able to reproduce it. I tried your patch and I am getting a panic.
I've submitted #186, which tries to fix this.
* Update bb8 to 0.8.6, to get djc/bb8#186 and djc/bb8#189, which fix potential deadlocks (djc/bb8#154). Also, djc/bb8#225 was needed to prevent a connection leak, which was conveniently spotted in our integration tests.
* Ignore ./.bundle (created by dev console)
Co-authored-by: Jose Fernandez (magec) <joseferper@gmail.com>
We're using bb8 (thanks for building it!) in a service handling lots of traffic, with bb8 used millions of times per minute.
Today we hit a deadlock that seems to be induced by a single task/thread scenario due to reentrancy. This is using tag v0.8.0.
If you ignore the boilerplate in the stack trace, you'll see essentially that:
* the `PoolInternals` lock is acquired, with an `InternalsGuard` alive in the call stack
* while the `InternalsGuard` is in the call stack, everything gets dropped (this is possible in our case because requests are handled in a tokio runtime with a timeout per request, and the timeout happened here)
* `InternalsGuard#drop` is called, which tries to acquire the internals lock <-- and voila, we hit a deadlock since the lock is not reentrant
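A minimal standalone sketch of that failure mode (illustration only, not bb8's code; `Pool` and `Guard` are made-up stand-ins for the pool internals and for a guard whose `Drop` returns a connection by locking those internals again):

```rust
use std::sync::Mutex;

// Illustration only, not bb8's code: `Pool` and `Guard` are hypothetical
// stand-ins for the pool internals and for a guard whose Drop puts a
// connection back by locking those internals again.
struct Pool {
    internals: Mutex<Vec<u32>>, // stand-in for PoolInternals
}

struct Guard<'a> {
    pool: &'a Pool,
    conn: Option<u32>,
}

impl Drop for Guard<'_> {
    fn drop(&mut self) {
        if let Some(conn) = self.conn.take() {
            // Re-acquires the internals lock. std::sync::Mutex is not
            // reentrant: if the current thread already holds it, this call
            // never returns normally (it deadlocks or panics; the exact
            // behavior is unspecified).
            self.pool.internals.lock().unwrap().push(conn);
        }
    }
}

fn main() {
    let pool = Pool { internals: Mutex::new(Vec::new()) };

    // The internals lock is held higher up the call stack...
    let _held = pool.internals.lock().unwrap();

    // ...and a Guard is dropped while that lock is still held. In the real
    // report this happens because a per-request tokio timeout drops the
    // whole future, which drops the guard mid-critical-section.
    let guard = Guard { pool: &pool, conn: Some(1) };
    drop(guard); // hangs here: Drop tries to lock `internals` again
}
```

Re-locking a `std::sync::Mutex` from the thread that already holds it is unspecified behavior: in practice the call either blocks forever or panics, so the request never completes, which matches the deadlock observed under a per-request tokio timeout.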