
Connection timeout Pool vs Non pooled #672


Closed
carlos-ferreira opened this issue Jul 26, 2019 · 7 comments

Comments

@carlos-ferreira

carlos-ferreira commented Jul 26, 2019

We're having a behavior that we don't understand.

We have a slow stored procedure due to a huge amount of data. On a server with pooling enabled, we make 3 requests; after the timeout, the other requests fail as well.

If we run the same experiment without pooling enabled, the connections don't time out and complete successfully, although they take a very long time (3 minutes on average).

Why does this happen?

Could this be a bug, or is there an explanation?

@bgrainger
Member

I can't tell why this is happening from your description.

Logging might be necessary to provide more information: https://mysqlconnector.net/overview/logging/

You should probably enable Debug-level logging, but be aware that this may disclose some sensitive information (such as query bodies) so don't post it publicly. (You can email it to me directly.)
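For reference, a minimal sketch of turning that on, assuming the ConsoleLoggerProvider described in the logging docs linked above (there are matching providers for log4net, NLog, etc.; the call just needs to run once at startup):

```csharp
using MySqlConnector.Logging;

// Sketch: route MySqlConnector's internal logging to the console at Debug level.
// Run this once at startup, before the first MySqlConnection is opened.
MySqlConnectorLogManager.Provider = new ConsoleLoggerProvider(MySqlConnectorLogLevel.Debug);
```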

@carlos-ferreira
Author

Just sent the logs.

One important difference I notice between the two scenarios is the timeout.
When using connection pooling, the request times out at 30s; without pooling, the query runs until we receive a reply from the MySQL server.

We're using AWS Aurora, if that matters.

@bgrainger
Member

I don't think I got any logs. Try sending them to logs@mysqlconnector.net.

@bgrainger
Member

One important difference I notice between the two scenarios is the timeout.
When using connection pooling, the request times out at 30s; without pooling, the query runs until we receive a reply from the MySQL server.

I observed this, too.

I assume you're not changing the DefaultCommandTimeout in your non-pooling connection string? If so, that seems like it might be a bug that the default timeout is not respected when Pooling=false. I'll create a new issue to look into that.

This also suggests that a workaround might be to set DefaultCommandTimeout=60 in your pooling connection string (since the command times out at around 42 seconds).
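Roughly, that workaround would look like this (a sketch only; the endpoint, credentials, and database name are placeholders, and the `MySql.Data.MySqlClient` namespace is the one MySqlConnector 0.x exposes):

```csharp
using MySql.Data.MySqlClient; // namespace exposed by MySqlConnector 0.x

// Sketch: keep pooling on but raise the default command timeout above the
// ~42 seconds the stored procedure needs. All values here are placeholders.
var csb = new MySqlConnectionStringBuilder
{
    Server = "my-aurora-endpoint",
    UserID = "user",
    Password = "password",
    Database = "mydb",
    Pooling = true,
    DefaultCommandTimeout = 60, // seconds
};

using var connection = new MySqlConnection(csb.ConnectionString);
```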

@bgrainger
Member

There's a bug in MySqlConnector that causes the command's timeout (from MySqlCommand.CommandTimeout or from DefaultCommandTimeout in the connection string) to be set to "infinity" when a nested MySqlDataReader is executed. This happens when we need to get the definition of a stored procedure.

For a pooled connection, this is retrieved once and cached. For a non-pooled connection, it has to be retrieved every time (because there's no pool that can hold the cached definitions). So this is the root of the difference. For a pooled connection, the stored procedure definition is usually cached, so there is no nested MySqlDataReader, the timeout is applied, and the command times out. For a non-pooled connection, we always have to retrieve the definition, the timeout is reset to infinity and the stored procedure runs to completion without timing out.

I'll fix this to always respect the command timeout.

on a server with pooling enabled, we make 3 requests; after the timeout, the other requests fail as well

Are you saying that the connection is unusable after the command times out? This is currently the known behaviour of this library: #453.

As detailed there, you could work around that by setting the CommandTimeout to 0 and using a CancellationToken tied to a timeout. This will gracefully terminate the query by sending a KILL QUERY command (from a different thread), returning the connection to a known good state.
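A sketch of what that looks like in practice (the stored procedure name and the 60-second budget are placeholders for illustration):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using MySql.Data.MySqlClient; // namespace exposed by MySqlConnector 0.x

public static class SlowQueryRunner
{
    // Sketch of the #453 workaround: disable the driver-side timeout and cancel
    // through a token instead, which kills the query server-side and keeps the
    // connection usable.
    public static async Task RunAsync(MySqlConnection connection)
    {
        using var command = new MySqlCommand("CALL slow_proc();", connection); // placeholder procedure
        command.CommandTimeout = 0; // 0 = no driver-side timeout

        using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(60)); // our own budget
        try
        {
            using var reader = await command.ExecuteReaderAsync(cts.Token);
            while (await reader.ReadAsync(cts.Token))
            {
                // process the row
            }
        }
        catch (OperationCanceledException)
        {
            // The query ran past 60 seconds and was killed on the server; the
            // connection should still be usable. (Depending on the version, the
            // cancellation may surface as a MySqlException instead.)
        }
    }
}
```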

@carlos-ferreira
Author

Are you saying that the connection is unusable after the command times out? This is currently the known behaviour of this library: #453.

Actually, I don't think this is an issue with your library.
We're using microservices, and several microservices are making calls to the microservice that uses the database that has the timeouts.

So the first request that results in a timeout keeps running on the database despite the timeout and consumes database resources (mostly CPU). When the next requests come in and are executed, they eventually time out as well, because the database already has a long-running query destroying its performance, not because of the driver.

As detailed there, you could work around that by setting the CommandTimeout to 0 and using a CancellationToken tied to a timeout. This will gracefully terminate the query by sending a KILL QUERY command (from a different thread), returning the connection to a known good state.

Sounds very helpful!
I'll try it out when I have the time.

@bgrainger
Member

Fix shipped in 0.57.0-rc1.
