leaky connection pool #72
Comments
I am using something similar in a personal project. My pools have min and max sizes. They can grow up to the max sizes, and connections in excess of the min size that have not been used for a while automatically get released.
Not sure if I misunderstand, but at that point, what's the point of having a max size at all? Making it fully leaky would mean there is effectively only a min size. Also, this would not fix the deadlock issue; it'd just take you longer to run into it (given you do still have some kind of max). If you have no max at all, at some point you would just exhaust your database's connection limit.
Personally, I am not a fan of letting the app create an unlimited number of connections. Postgres does not support that many parallel connections being open at a given time, so one service opening too many connections could impact all services depending on that DB. So you are going to have a maximum number of connections either way, and I prefer to define that manually based on expected load rather than leaving it implicitly defined by the DB's scalability, with less visibility. IMHO a min/max pool offers the best of both worlds.
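The min/max scheme described above could be sketched roughly like this (a minimal illustration, assuming a generic connection factory; `MinMaxPool` and all other names are hypothetical, not taken from any real implementation):

```swift
import Foundation

/// Sketch of a min/max pool: grows on demand up to `maxSize`, and a periodic
/// `prune()` drops idle connections above `minSize` once they time out.
final class MinMaxPool<Connection> {
    private let minSize: Int
    private let maxSize: Int
    private let idleTimeout: TimeInterval
    private let makeConnection: () -> Connection
    private var idle: [(conn: Connection, since: Date)] = []
    private var activeCount = 0
    private let lock = NSLock()

    init(min: Int, max: Int, idleTimeout: TimeInterval,
         makeConnection: @escaping () -> Connection) {
        self.minSize = min
        self.maxSize = max
        self.idleTimeout = idleTimeout
        self.makeConnection = makeConnection
    }

    var idleCount: Int {
        lock.lock(); defer { lock.unlock() }
        return idle.count
    }

    /// Hand out an idle connection if available, else create one while below
    /// the max; returns nil at capacity (caller would have to wait).
    func checkout() -> Connection? {
        lock.lock(); defer { lock.unlock() }
        if let entry = idle.popLast() {
            activeCount += 1
            return entry.conn
        }
        guard activeCount < maxSize else { return nil }
        activeCount += 1
        return makeConnection()
    }

    func checkin(_ conn: Connection) {
        lock.lock(); defer { lock.unlock() }
        activeCount -= 1
        idle.append((conn, Date()))
    }

    /// Run periodically: release idle connections in excess of the min size
    /// that have not been used within `idleTimeout`.
    func prune(now: Date = Date()) {
        lock.lock(); defer { lock.unlock() }
        while idle.count + activeCount > minSize,
              let oldest = idle.first,
              now.timeIntervalSince(oldest.since) > idleTimeout {
            idle.removeFirst() // the real thing would close the connection here
        }
    }
}
```

The important property is the last method: the pool shrinks back toward `minSize` on its own, so idle capacity is returned to the database without any request ever being able to exceed `maxSize`.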
@MrMage I ran into the same situation. To work around it, I changed `max_connections` in Postgres to a big number, like 500, which is a dirty hack. You mentioned you have a solution; can you please share? Thx,
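For reference, the workaround mentioned above is a one-line change in `postgresql.conf` (500 is the number from the comment; the Postgres default is 100), followed by a server restart:

```
# postgresql.conf -- raise the server-wide connection cap (default: 100)
max_connections = 500
```

As the commenter says, this only moves the ceiling; it does not address the pool behaviour itself.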
Here's what I am using with Vapor 3: https://gist.github.com/MrMage/6fe071f405ac2c1b8a4cde25f05061db

Feel free to use this however you please; it might need some or a lot of tweaking to work with a vanilla Vapor 3 setup, however. Also, no guarantees of correctness or suitability for a particular purpose, of course.

This works well for me because I don't have that many Pods, and many of them will never receive that many requests anyway. If your requests are scattered across a ton of Pods, you might still run into trouble with this. It could help to reduce the min/max connections for Pods that don't need many connections, and in general to increase the prune frequency. But if even that's not enough, you might need a Postgres connection pooler such as PgBouncer.
@MrMage thank you. The code looks like exactly the direction needed.
@crarau whatever works for you :-) My implementation already closes connections that are idle for longer than
Closing this out as not planned; it will be superseded by other work already in progress elsewhere (this package is probably going away at some point).
Consider supporting "leaky" connection pooling where requests for connections beyond the pool's maximum limit will create temporary connections.
See https://gitlab.com/Mordil/RediStack/-/merge_requests/116/diffs#cba55883b2f54457e6530e8a988688bc63337457_0_80
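The proposed "leaky" behaviour could look roughly like this (a hedged sketch, assuming a generic connection factory; `LeakyPool` and its members are hypothetical names, not the API of this package or RediStack):

```swift
import Foundation

/// Sketch of a leaky pool: checkouts never block. Beyond `maxSize`, a
/// temporary connection is created and closed on checkin instead of pooled.
final class LeakyPool<Connection> {
    private let maxSize: Int
    private let makeConnection: () -> Connection
    private let closeConnection: (Connection) -> Void
    private var idle: [Connection] = []
    private var pooledCount = 0 // connections that belong to the pool proper
    private let lock = NSLock()

    init(max: Int,
         makeConnection: @escaping () -> Connection,
         closeConnection: @escaping (Connection) -> Void) {
        self.maxSize = max
        self.makeConnection = makeConnection
        self.closeConnection = closeConnection
    }

    /// Reuse an idle connection, grow the pool while below the max, and
    /// otherwise "leak" a temporary connection rather than making the
    /// caller wait (which is what avoids the deadlock discussed above).
    func checkout() -> (conn: Connection, temporary: Bool) {
        lock.lock(); defer { lock.unlock() }
        if let conn = idle.popLast() {
            return (conn, false)
        }
        if pooledCount < maxSize {
            pooledCount += 1
            return (makeConnection(), false)
        }
        return (makeConnection(), true)
    }

    func checkin(_ conn: Connection, temporary: Bool) {
        lock.lock(); defer { lock.unlock() }
        if temporary {
            closeConnection(conn) // temporaries are never added to the pool
        } else {
            idle.append(conn)
        }
    }
}
```

The trade-off matches the discussion above: checkouts cannot deadlock waiting for a free slot, but under sustained load the number of open connections is only bounded by the database itself.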