Reset session when returning to pool #178
It seems like this should work; I've created #179 to investigate it. In my current PR, the connection is reset synchronously when it is closed. The final solution may need to queue the work of resetting the connection (or disposing it) to a threadpool thread so that the client is freed up immediately when closing the connection. Presumably, in that implementation, any exceptions would simply be swallowed and just cause the connection to be dropped from the pool. The downside would be that any errors would be silently ignored and would only manifest themselves as a performance problem (because the pool would never contain live connections). This would also be a behavioural change from the official connector, although I'm not sure it's one that would affect anyone. However, I did have to turn ConnectionReset off for a test because it was causing a session variable to be reset immediately, rather than having the desired side effect. @caleblloyd Any thoughts on the suggestion or proposed implementation?
@MrSmoke what is the round-trip time from your client to your server (ping your MySQL host from wherever your client code resides)?
Around 1-2ms.
How big is the speedup when you turn off connection reset? 2-4ms, or is it higher? I'm wondering if what you're seeing is environment-specific.
Another benefit of resetting the connection (in the background) as soon as it's returned to the pool is that it would immediately free up any session-specific server state (e.g., temp tables). Right now, I'm assuming that state persists in memory on the server until the connection is pulled from the pool and reset.
Or until the connection is reaped. I can see the benefits; my main concern is forcing the reset commands to run over sync I/O for everyone. The concurrency tests have shown that even 1ms of synchronous execution can cause a huge difference in throughput. What about adapting
I think one of the major benefits we can provide by doing it asynchronously is that
I hadn't considered doing it this way; I like your approach!
Alternatively, we could follow npgsql's approach and prepend the "connection reset" packet to the first query that's sent on a connection retrieved from the pool; see npgsql/npgsql#552.
#179 was closed in favor of #264, but the latter was also closed :( Are there any other approaches we can try to prevent the reset overhead when retrieving connections from the pool? In my case I can't afford to set ConnectionReset=false. I've implemented a PoC of a pool of open connections. I wouldn't push this pool overlay into production, but it shows that there may be room for improvement on the current implementation.
Those were both implementations that weren't viable. This issue remains open to track the problem. I assume it's important to you that the connection reset not happen at all on a "user" thread? I.e., it wouldn't be OK to reset the connection during
I don't think this would be a problem for me. I have a wrapper around the connection. I don't know if this would make sense for the library as a whole, though.
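A rough sketch of the kind of application-level pool overlay described in the comments above. Every name here is hypothetical (this is not how MySqlConnector's own pool works), and it skips the hard parts: liveness checks, a maximum size, timeouts, and shutdown.

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;
using MySqlConnector;

// Hypothetical overlay that keeps connections open so that renting one costs
// no reset round trip; a sketch, not the library's pool.
sealed class OpenConnectionOverlay
{
    private readonly ConcurrentQueue<MySqlConnection> _idle = new();
    private readonly string _connectionString;

    public OpenConnectionOverlay(string connectionString) => _connectionString = connectionString;

    public async Task<MySqlConnection> RentAsync()
    {
        // Reuse an already-open connection when one is available; no reset, no round trip.
        if (_idle.TryDequeue(out var connection))
            return connection;

        var fresh = new MySqlConnection(_connectionString);
        await fresh.OpenAsync();
        return fresh;
    }

    public void Return(MySqlConnection connection)
    {
        // This overlay bypasses the library's reset, so any per-use cleanup the application
        // relies on (rolling back open transactions, dropping temp tables, unsetting user
        // variables) would have to be issued here before the next caller gets the connection.
        _idle.Enqueue(connection);
    }
}
```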
An approach that I think would be equivalent to mine:
What is the downside of doing it like this? Sorry if I am missing something obvious; I'm not very knowledgeable about
One benefit of the current approach is that resetting the connection (on retrieval from the pool) checks the liveness of the connection. (An idle connection can be closed by the server, e.g., due to wait_timeout.) Pinging the connection would be a way to detect the problem, but would reintroduce the latency of a round trip to the server. #821 proposes a different solution, but one that has its own drawbacks. Or we could not test the connection at all, and put the burden of handling dead connections on the user.
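To make the trade-off concrete, here is roughly what a liveness probe costs. This is not how MySqlConnector validates pooled sessions internally; it just makes the extra round trip explicit.

```csharp
using System.Threading.Tasks;
using MySqlConnector;

// Illustrative only: any liveness check spends one full round trip to the server,
// which is the same latency the reset-on-retrieval behaviour already pays.
static async Task<bool> IsStillAliveAsync(MySqlConnection connection)
{
    try
    {
        using var command = new MySqlCommand("SELECT 1;", connection);
        await command.ExecuteScalarAsync();   // one round trip
        return true;
    }
    catch (MySqlException)
    {
        // e.g., the server closed the idle connection (wait_timeout) while it sat in the pool
        return false;
    }
}
```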
Your considerations are spot on. What do you think about letting the user configure how they want to handle it? Would it introduce too much complexity? In my use case I have a proxy that is supposed to be pretty fast, and one of its bottlenecks is opening connections to the database. An average DB operation takes about Any suggestions to avoid this overhead? Anyway, thanks for the attention and the remarks :)
If the current behaviour were changed, it would probably be exposed as an opt-in setting, as it would likely require changes to user code (e.g., retrying an operation when a pooled connection turns out to be dead).
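As an illustration of the kind of user-side change meant here, this is a sketch (not an official recommendation) of retrying an operation when the pooled connection turns out to be dead; the retry policy and exception filter are made up for the example.

```csharp
using System.Threading.Tasks;
using MySqlConnector;

// If liveness were no longer checked when a connection leaves the pool, callers would
// need to tolerate occasionally getting a dead connection, e.g. by retrying once.
static async Task<object?> QueryWithRetryAsync(string connectionString, string sql, int maxAttempts = 2)
{
    for (var attempt = 1; ; attempt++)
    {
        try
        {
            using var connection = new MySqlConnection(connectionString);
            await connection.OpenAsync();
            using var command = new MySqlCommand(sql, connection);
            // With no liveness check at Open, a dead pooled session would surface here,
            // on the first command, rather than during OpenAsync.
            return await command.ExecuteScalarAsync();
        }
        catch (MySqlException) when (attempt < maxAttempts)
        {
            // Retry with a freshly opened (or different pooled) connection.
        }
    }
}
```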
This is what the current connection pooling is designed to solve (unless your proxy also makes resetting a connection slow).
It's not that there's no way forward with this; I just haven't worked out the ideal solution (and it may be very complex to implement). Just had a thought: if
This would be perfect for me.
Opened #831 to track that suggestion.
Another attempt at solving this problem: https://github.com/mysql-net/MySqlConnector/tree/reset-connection-in-background
The new solution is conceptually very simple: closing/disposing a connection starts resetting it (asynchronously) in the background, then returns immediately. A background thread awaits the reset, then returns the connection to the pool. This may cause a few extra connections to be used while the reset happens.
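A stripped-down sketch of that flow. The ISession interface and pool type here are hypothetical stand-ins for the library's internals; the real branch handles cancellation, pool limits, and other details this omits.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Channels;
using System.Threading.Tasks;

// Hypothetical stand-in for the library's internal server session.
interface ISession : IDisposable
{
    Task ResetAsync();
}

sealed class BackgroundResetPool
{
    private readonly ConcurrentQueue<ISession> _idle = new();
    private readonly Channel<(ISession Session, Task Reset)> _pending =
        Channel.CreateUnbounded<(ISession Session, Task Reset)>();

    public BackgroundResetPool() => _ = Task.Run(ProcessPendingAsync);

    // Reached from Close()/Dispose(): start the reset and return to the caller immediately.
    public void Return(ISession session) => _pending.Writer.TryWrite((session, session.ResetAsync()));

    // Only sessions whose reset has completed are handed out again.
    public bool TryRent(out ISession? session) => _idle.TryDequeue(out session);

    private async Task ProcessPendingAsync()
    {
        // The background loop awaits each reset off the caller's thread; while resets are in
        // flight the pool looks emptier, which is why a burst may briefly open extra connections.
        await foreach (var (session, reset) in _pending.Reader.ReadAllAsync())
        {
            try { await reset; _idle.Enqueue(session); }
            catch { session.Dispose(); }   // a failed reset just drops the physical connection
        }
    }
}
```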
Benchmarks from the new code:
Local Docker Container (before / after benchmark tables)
Remote MySQL, ~16ms ping (before / after benchmark tables)
Summary (in the new code): BenchmarkDotNet=v0.12.1, OS=Windows 10.0.19042, Platform=X64, Runtime=.NET Core 5.0
This is available for testing in 1.3.0-beta.1.
Added in 1.3.0.
Hi @bgrainger, with the new updates, since the connection reset is being done in the background, would there be any PING at the Open?
Yes, by default there is still a PING during Open. Discussion about handling/changing that is at #461 (comment) (and the following comments); it probably needs to be split out to a separate new issue.
@bgrainger with the new implementation of OpenAsync in 1.3.0+, how come OpenNoResetAsync is much faster than OpenAsync? If the connection reset happens in the background, then there would be no waiting time for OpenAsync? Also, could you share a specific example query or any statement that is dangerous when using ConnectionReset=false in the connection string? We are also looking at setting ConnectionReset to false since we want to achieve the best performance possible.
I'm not completely sure. (And it's not, for a local Docker container.)
Anything that uses temporary tables or sets variables on the server.
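A concrete (made-up) example of that danger: with ConnectionReset=false, session state created by one logical connection is still present when the same physical connection is handed to the next one. The connection string and table name are placeholders.

```csharp
using System;
using MySqlConnector;

var connectionString =
    "Server=localhost;User ID=test;Password=test;Database=test;ConnectionReset=false";

using (var first = new MySqlConnection(connectionString))
{
    first.Open();
    using var create = new MySqlCommand("CREATE TEMPORARY TABLE pending_rows (id INT);", first);
    create.ExecuteNonQuery();
}   // Close() returns the physical connection to the pool without resetting it

using (var second = new MySqlConnection(connectionString))
{
    second.Open();   // likely reuses the same physical connection
    using var query = new MySqlCommand("SELECT COUNT(*) FROM pending_rows;", second);
    // With the reset enabled this would fail ("table doesn't exist"); with it off,
    // the previous session's temporary table is still there and the query succeeds.
    Console.WriteLine(query.ExecuteScalar());
}
```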
@bgrainger thank you. It looks like we are not using CREATE TEMPORARY TABLE and we don't set variables locally or globally. We will try to use ConnectionReset=false;ConnectionIdlePingTime=10.
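For reference, a minimal sketch of those settings in use (server name and credentials are placeholders); as I understand the option, ConnectionIdlePingTime controls how recently a pooled connection must have been used for the PING at Open to be skipped.

```csharp
using MySqlConnector;

var connectionString =
    "Server=db.example.com;User ID=app;Password=secret;Database=app;" +
    "ConnectionReset=false;ConnectionIdlePingTime=10";

await using var connection = new MySqlConnection(connectionString);
// With ConnectionIdlePingTime=10, Open skips the liveness PING when the pooled
// connection was last used less than 10 seconds ago, avoiding that round trip too.
await connection.OpenAsync();
```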
See #967.
This was turned off by default in 1.3.10 and removed in 2.0: #1013; reopening this issue.
I currently have a use case where I get far better performance setting ConnectionReset to false, but I still want the connection reset for safety purposes. I was thinking that when ReturnToPool is called, the connection could be reset there (if enabled) instead of when it's pulled out of the pool, leaving the reset to be done in the background (before being added back into the pool). It would, however, mean that the pool is easier to deplete.