
leaky connection pool #72

Closed · tanner0101 opened this issue Jul 14, 2020 · 8 comments
Labels: enhancement (New feature or request)

@tanner0101 (Member)

Consider supporting "leaky" connection pooling where requests for connections beyond the pool's maximum limit will create temporary connections.

See https://gitlab.com/Mordil/RediStack/-/merge_requests/116/diffs#cba55883b2f54457e6530e8a988688bc63337457_0_80
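A minimal sketch of the idea (the type and method names here are made up for illustration, not AsyncKit's actual API): once the pool is saturated, hand out a throwaway "temporary" connection instead of blocking the caller, and close it on release rather than returning it to the pool.

```swift
// Toy, synchronous model of "leaky" pooling. Names are illustrative,
// not AsyncKit's real API. Requests beyond maxConnections get
// short-lived "temporary" connections that are discarded on release.

final class Connection {
    let isTemporary: Bool
    init(isTemporary: Bool) { self.isTemporary = isTemporary }
}

final class LeakyPool {
    let maxConnections: Int
    private var idle: [Connection] = []
    private var activeCount = 0

    init(maxConnections: Int) { self.maxConnections = maxConnections }

    func requestConnection() -> Connection {
        if let conn = idle.popLast() {
            activeCount += 1
            return conn
        }
        if activeCount < maxConnections {
            activeCount += 1
            return Connection(isTemporary: false)
        }
        // Pool is saturated: "leak" a temporary connection instead of
        // blocking the caller (blocking is what can deadlock a fixed pool).
        return Connection(isTemporary: true)
    }

    func releaseConnection(_ conn: Connection) {
        if conn.isTemporary { return } // temporary connections are just closed
        activeCount -= 1
        idle.append(conn)
    }
}
```

With `maxConnections: 2`, a third concurrent request gets a temporary connection rather than waiting, which trades extra DB load for avoiding the deadlock discussed below.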

@tanner0101 added the enhancement label on Jul 14, 2020
@tanner0101 added this to To Do in Vapor 4 via automation on Jul 14, 2020
@MrMage (Contributor) commented Jul 15, 2020

I am using something similar in a personal project. My pools have min and max sizes; they can grow up to the max size, and connections in excess of the min size that have not been used for a while automatically get released.

@MrLotU (Sponsor, Member) commented Jul 15, 2020

Not sure if I'm misunderstanding, but at that point, what's the point of having a max size at all? Making it fully leaky would mean there is only a min, not a max. Correct?

Also, this would not fix the deadlock issue; it would just take longer to run into it (given you still have some kind of max). If you have no max at all, at some point you would simply exhaust your database.

@MrMage (Contributor) commented Jul 15, 2020

Personally, I am not a fan of letting the app create an unlimited number of connections. Postgres does not support that many parallel connections being open at a given time, so one service opening too many connections could impact all services depending on that DB. You are going to have a maximum number of connections either way, and I prefer to define it manually based on expected load rather than have it implicitly, and with less visibility, defined by the DB's scalability limits.

IMHO a min/max pool offers the best of both worlds:

  • low "idle" resource usage (due to pruning inactive connections) when load is low
  • good scalability (up to a given point) while maintaining good latency (thanks to pooling) under load
  • a built-in "circuit breaker" to prevent overloading the DB
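As a rough sketch of that min/max-with-pruning behavior (illustrative names, not any real Vapor API): idle connections beyond the min that exceed an idle timeout get closed, and acquisition reuses the *freshest* idle connection so stale ones can age out. Timestamps are passed in explicitly here to keep the logic easy to follow; a real pool would use a scheduled prune task.

```swift
import Foundation

// Toy model of a min/max pool with idle pruning. Names are illustrative.
final class MinMaxPool {
    let minConnections: Int
    let idleTimeout: TimeInterval
    // (connection id, last-used timestamp) pairs, most recently used last.
    private(set) var idle: [(id: Int, lastUsed: Date)] = []

    init(minConnections: Int, idleTimeout: TimeInterval) {
        self.minConnections = minConnections
        self.idleTimeout = idleTimeout
    }

    // Releasing a connection records when it was last used.
    func release(id: Int, at now: Date) {
        idle.append((id, now))
    }

    // Reuse the freshest idle connection so the stale ones can exceed
    // the idle timeout and get pruned.
    func acquire() -> Int? {
        idle.popLast()?.id
    }

    // Drop idle connections beyond minConnections that exceeded idleTimeout.
    func prune(now: Date) {
        while idle.count > minConnections,
              let stalest = idle.first,
              now.timeIntervalSince(stalest.lastUsed) > idleTimeout {
            idle.removeFirst() // a real pool would close the connection here
        }
    }
}
```

The "freshest first" reuse order is the detail that makes pruning effective: if the pool always handed out the most stale connection, every connection would keep getting touched and none would ever cross the idle threshold.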

@crarau commented Sep 25, 2022

@MrMage I got into the same situation:
I have a Swift application deployed on K8s with many pods running, and I'm hitting the maximum number of connections Postgres can have open.
Here is a use case:
for each pod I define the max number of connections. Then I get a spike in load, AsyncKit properly scales up, and the number of connections to the DB increases.
Then a new request comes in, BUT it is routed to another pod that doesn't have enough connections, and when that pod tries to open one it finds that the DB's allowed limit has been reached.

To work around this I have changed max_connections in Postgres to a big number, like 500, which is a dirty hack.
Ideally, as you said, connections should be closed after not being used for a while, keeping a minimum number of connections open.

You mentioned you have a solution; can you please share? Thanks!

@MrMage (Contributor) commented Sep 26, 2022

Here's what I am using with Vapor 3: https://gist.github.com/MrMage/6fe071f405ac2c1b8a4cde25f05061db

Feel free to use this however you please; it might need some (or a lot of) tweaking to work with a vanilla Vapor 3 setup, however. Also, no guarantees of correctness or suitability for a particular purpose, of course.

This works well for me because I don't have that many Pods, and many of them will never receive that many requests anyway. If your requests are scattered across a ton of Pods, you might still run into trouble with this. What could help would be to reduce the min/max connections for Pods that don't need many connections, and in general to increase the prune frequency.

But if even that's not enough, you might need to use Postgres connection-pooling software such as PgBouncer.

@crarau commented Sep 26, 2022

@MrMage thank you. The code looks like exactly the direction needed.
I was thinking of a simpler specification instead of min/max: just use the connection's idle time. So if a connection is not used for, let's say, 5 minutes, close it, even if that leaves the pool without any open connections.
Then it's up to whoever uses the pool to tweak this number.
The idea is that the library's client has to balance the memory used by keeping connections open, the DB's max connections, and the time it takes to create a new connection.

@MrMage (Contributor) commented Oct 10, 2022

@crarau whatever works for you :-) My implementation already closes connections that are idle for longer than config.idleTimeForPruning, and it automatically tries to re-use the "freshest" connections; if it always used the "most stale" connections, none of the connections would ever exceed the idle time threshold. So if you simply set minResources to 0, this might do what you need. You could also set this to 1, for slightly improved "cold start" latency, while still keeping the "baseline" resource usage low. You could probably also reduce pruneInterval to e.g. 5 seconds; it shouldn't be too resource-intensive.
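For reference, the tuning knobs being discussed might look something like the following. The field names mirror the ones mentioned above (`minResources`, `idleTimeForPruning`, `pruneInterval`), but the actual types and defaults in the gist may differ; this is only a hypothetical sketch.

```swift
import Foundation

// Hypothetical pool configuration; the real gist's API may differ.
struct PoolConfig {
    var minResources: Int = 0                   // 0 = prune all the way down when idle
    var maxResources: Int = 10                  // hard ceiling, the "circuit breaker"
    var idleTimeForPruning: TimeInterval = 300  // close connections idle > 5 minutes
    var pruneInterval: TimeInterval = 5         // how often the prune task runs
}
```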

@gwynne (Member) commented May 6, 2023

Closing this out as not planned; it will be superseded by other work already in progress elsewhere (this package is probably going away at some point).

@gwynne closed this as not planned on May 6, 2023
Vapor 4 automation moved this from To Do to Done May 6, 2023