Connection timeout Pool vs Non pooled #672
We're seeing behavior that we don't understand.

We have a slow stored procedure due to a huge amount of data. On a server with pooling enabled, we make 3 requests; after the first one times out, the other requests fail as well.

If we run the same experiment without pooling enabled, the connections don't time out and complete successfully, although they take a very long time (3 minutes on average).

Why does this happen? Could this be a bug, or is there an explanation?
Comments
I can't tell why this is happening from your description. Logging might be necessary to provide more information: https://mysqlconnector.net/overview/logging/. You should probably enable Debug-level logging, but be aware that this may disclose some sensitive information (such as query bodies), so don't post it publicly. (You can email it to me directly.)
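For reference, a minimal sketch of enabling the driver-side console logging described on that page. This assumes the `MySqlConnector.Logging` API that existed at the time of this issue; newer versions of the library integrate with Microsoft.Extensions.Logging instead:

```csharp
using MySqlConnector.Logging;

// Install a console logger before the first connection is opened.
// Debug level includes statement text, so treat the output as sensitive.
MySqlConnectorLogManager.Provider = new ConsoleLoggerProvider(MySqlConnectorLogLevel.Debug);
```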
Just sent the logs. One important difference I notice between the two scenarios is the timeout. We're using AWS Aurora, if that matters.
I don't think I got any logs. Try sending them to logs@mysqlconnector.net.
I observed this, too. I assume you're not changing the `CommandTimeout` in either scenario, i.e., the default of 30 seconds applies. This also suggests that a workaround might be to set `Pooling=false`.
There's a bug in MySqlConnector that causes the command's timeout (from `MySqlCommand.CommandTimeout`) to be ignored when the stored procedure's definition has to be retrieved before the command can be executed. For a pooled connection, this definition is retrieved once and cached. For a non-pooled connection, it has to be retrieved every time (because there's no pool that can hold the cached definitions). So this is the root of the difference: for a pooled connection, the stored procedure definition is usually cached, so there is no nested query and the timeout applies as expected.

I'll fix this to always respect the command timeout.
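To make the two scenarios concrete, here is a small sketch of the difference. The server address, credentials, and `slow_procedure` name are placeholders; the `MySqlConnector` namespace is the current one (older 0.x releases used `MySql.Data.MySqlClient`):

```csharp
using System;
using System.Data;
using MySqlConnector;

class TimeoutDemo
{
    static void Run(bool pooling)
    {
        // Pooling=false forces a fresh connection, and therefore a fresh
        // stored-procedure definition lookup, on every Open().
        var csb = new MySqlConnectionStringBuilder
        {
            Server = "my-aurora-cluster", // placeholder
            UserID = "myuser",            // placeholder
            Password = "mypassword",      // placeholder
            Database = "mydb",            // placeholder
            Pooling = pooling,
        };

        using var connection = new MySqlConnection(csb.ConnectionString);
        connection.Open();

        using var command = new MySqlCommand("slow_procedure", connection)
        {
            CommandType = CommandType.StoredProcedure,
            CommandTimeout = 30, // seconds; before the fix, ignored when pooling is off
        };

        try
        {
            command.ExecuteNonQuery();
            Console.WriteLine($"Pooling={pooling}: completed");
        }
        catch (MySqlException ex)
        {
            Console.WriteLine($"Pooling={pooling}: failed ({ex.Message})");
        }
    }
}
```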
Are you saying that the connection is unusable after the command times out? This is currently the known behaviour of this library: #453. As detailed there, you could work around that by setting the `CommandTimeout` to 0 and using a `CancellationToken` to enforce the timeout yourself.
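A minimal sketch of that workaround, assuming a hypothetical `slow_procedure` and a 30-second budget. Cancelling the token makes the driver kill the running statement server-side rather than abandoning the connection:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using MySqlConnector;

static class CancellationWorkaround
{
    public static async Task RunAsync(MySqlConnection connection)
    {
        using var command = new MySqlCommand("CALL slow_procedure();", connection);
        command.CommandTimeout = 0; // disable the driver-side timeout entirely

        // Enforce the timeout ourselves. Cancelling the token makes the driver
        // send KILL QUERY on a separate connection, so this session survives.
        using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));
        try
        {
            await command.ExecuteNonQueryAsync(cts.Token);
        }
        catch (OperationCanceledException)
        {
            Console.WriteLine("Query cancelled after 30 seconds; connection is still usable.");
        }
    }
}
```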
Actually, I don't think this is an issue in your library. The first request that times out keeps running on the database despite the timeout and consumes database resources (mostly CPU). When the next requests come in and are executed, they eventually time out as well, because the database already has a long-running query destroying its performance; it's not the driver's fault.
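If the runaway statement is the problem, one option (a sketch, not part of the fix; the threshold is an assumption, and seeing other sessions' queries requires the PROCESS privilege) is to find and kill queries that have been running too long. On Aurora/RDS, `CALL mysql.rds_kill_query(id)` may be required instead of plain `KILL QUERY`:

```csharp
using System;
using System.Collections.Generic;
using MySqlConnector;

static class RunawayQueryCleanup
{
    // Kills every query that has been running for more than maxSeconds.
    public static void KillLongRunningQueries(MySqlConnection connection, int maxSeconds)
    {
        using var select = new MySqlCommand(
            "SELECT ID FROM information_schema.PROCESSLIST " +
            "WHERE COMMAND = 'Query' AND TIME > @maxSeconds;", connection);
        select.Parameters.AddWithValue("@maxSeconds", maxSeconds);

        var ids = new List<ulong>();
        using (var reader = select.ExecuteReader())
        {
            while (reader.Read())
                ids.Add(Convert.ToUInt64(reader.GetValue(0)));
        }

        foreach (var id in ids)
        {
            // KILL QUERY stops the statement but leaves the session connected.
            using var kill = new MySqlCommand($"KILL QUERY {id};", connection);
            kill.ExecuteNonQuery();
        }
    }
}
```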
Sounds very helpful!
Fix shipped in 0.57.0-rc1.