MaxConns for Pool #4

Closed
elithrar opened this issue Jul 29, 2015 · 5 comments
@elithrar

Pool currently provides a size parameter that sets an upper limit on the number of idle connections, but it doesn't appear to set an upper bound on the number of connections that might be dialled by pool.Get.

Redis will bounce connections above its connection limit, although from what I can discern p.df(p.Network, p.Addr) won't time out (by default) and therefore will block until it connects.

It might be worthwhile to instead block locally until the number of open connections is below the maximum, rather than attempting to open new ones over the limit.
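A minimal sketch of the behavior described above, assuming a radix.v2-style API (the import path, the pool.New(network, addr, size) signature, and the Cmd call are assumptions, not taken from this issue): with size set to 10, a burst of 100 concurrent Gets can still dial up to 100 connections, because size only bounds how many idle connections are kept.

```go
package main

import (
	"log"
	"sync"

	"github.com/mediocregopher/radix.v2/pool"
)

func main() {
	// size bounds the number of *idle* connections kept in the pool,
	// not the number of connections that may be dialled overall.
	p, err := pool.New("tcp", "localhost:6379", 10)
	if err != nil {
		log.Fatal(err)
	}

	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			conn, err := p.Get() // may dial a fresh connection above size
			if err != nil {
				log.Println("Get:", err)
				return
			}
			defer p.Put(conn) // at most 10 are kept idle; the rest are closed
			conn.Cmd("PING")
		}()
	}
	wg.Wait()
}
```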

@mediocregopher
Owner

I'm sorry it's taken so long to respond; for some reason GitHub decided I shouldn't receive notifications for issues on the repo >_<

As to your issue: to my knowledge there's no way to block on a channel based on its current size. I'm not even certain the length of a channel could be checked in a thread-safe way without use of a lock or another channel.

One possible alternative would be to make it possible for Get to not open new connections, but instead to check the channel and loop (with a small sleep) if the channel is empty. The problem here is that this doesn't handle the case of connections in other routines being closed for one reason or another and not being returned to the pool. If Get were to have this behavior there would need to be some mechanism by which dead connections are replaced, something which happens implicitly in the current setup. I don't believe Put should do the replacing, since we don't want it to block on making a new connection; it would have to be a new goroutine spawned by Put. Now we're in the territory of a package spawning goroutines the user didn't necessarily expect it to, and all the tradeoffs which go with that.
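A rough sketch of that loop-and-sleep alternative and of the replacement goroutine spawned by Put, not the package's actual code; the Pool struct here is a stand-in that only assumes the df/Network/Addr fields mentioned in the issue plus a buffered channel of idle connections:

```go
package pool

import (
	"net"
	"time"
)

// Stand-in for the real Pool; field names mirror those mentioned in the
// issue (df, Network, Addr) plus a buffered channel of idle connections.
type Pool struct {
	Network, Addr string
	df            func(network, addr string) (net.Conn, error)
	pool          chan net.Conn
}

// getBlocking never dials above the limit: if no idle connection is
// available it sleeps briefly and tries again. Note it has no way to
// replace connections that died elsewhere and were never Put back.
func (p *Pool) getBlocking() net.Conn {
	for {
		select {
		case conn := <-p.pool:
			return conn
		default:
			time.Sleep(10 * time.Millisecond)
		}
	}
}

// Put would then need to replace dead connections without blocking,
// which means spawning a goroutine the caller never asked for.
func (p *Pool) Put(conn net.Conn, dead bool) {
	if dead {
		go func() {
			if c, err := p.df(p.Network, p.Addr); err == nil {
				p.pool <- c
			}
		}()
		return
	}
	select {
	case p.pool <- conn: // keep it if there's room
	default:
		conn.Close() // pool already full of idle connections
	}
}
```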

I'm not necessarily opposed to the idea, but I don't believe it will be a straightforward fix by any means. Could you perhaps expand on why this behavior is valuable to you? I personally have never desired it, nor has anyone I've worked with or talked to, so I'm curious where you're coming from.

@elithrar
Author

Yeah, I definitely understand the limitations of channels (as they stand) for this.

The major use-case here is when using hosted Redis instances (Heroku, Compose, Redis Labs) that set a limit on the number of concurrent connections you can make.

The current behavior, trying to create new connections above the bound and then failing, isn't terrible, but I'm curious to see if there's appetite for an alternative.

@mediocregopher
Owner

I think for that use-case even having a limit on the pool size is really just moving the goalposts. Either way there is some kind of limit on the number of connections to Redis that you are hitting, and it is interrupting normal program flow in one way or another. I think for this case it would be better to investigate why you're hitting these limits at all and try to prevent it (either on the application side, or with Redis Cluster).

Sorry, it's kind of a glib and unhelpful answer; I'm just giving my perspective. As a stop-gap solution it would be pretty straightforward to write a wrapper around pool.Get which detects the error returned when the Heroku instance hits its connection limit and simply sleeps and tries again later. That would give you the blocking behavior you want, and probably more control over it than you would get if it were baked in.
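A sketch of such a wrapper, with the caveats that the import paths are assumed (radix.v2-style) and that it matches on Redis's standard "max number of clients reached" error text; adjust both to whatever your provider actually returns:

```go
package redisutil

import (
	"strings"
	"time"

	"github.com/mediocregopher/radix.v2/pool"
	"github.com/mediocregopher/radix.v2/redis"
)

// getWithRetry wraps p.Get: if the error looks like the server rejecting us
// for being over its connection limit, it sleeps and tries again, up to
// `retries` extra attempts, before giving up for real.
func getWithRetry(p *pool.Pool, retries int, wait time.Duration) (*redis.Client, error) {
	var err error
	for i := 0; i <= retries; i++ {
		var conn *redis.Client
		conn, err = p.Get()
		if err == nil {
			return conn, nil
		}
		if !strings.Contains(err.Error(), "max number of clients reached") {
			return nil, err // some other failure; don't bother retrying
		}
		time.Sleep(wait)
	}
	return nil, err
}
```

The retry count and sleep duration stay in the caller's hands, which is the extra control mentioned above compared to having the behavior baked into the pool.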

@elithrar
Author

Thanks. Note that I'm not hitting the limit, and obviously the response would be to upgrade your plan (if it were a regular occurrence), but I'm also thinking defensively in this case.

The best approach looks like catching the error and retrying 1-2 times before bailing out in full.
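Building on the getWithRetry sketch from the previous comment, the "retry 1-2 times then bail" plan would look roughly like:

```go
// Retry twice with a short pause, then give up entirely.
conn, err := getWithRetry(p, 2, 250*time.Millisecond)
if err != nil {
	log.Fatalf("redis: could not get a connection: %v", err)
}
defer p.Put(conn)
```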


@mediocregopher
Owner

I'll think about whether there might be a cleaner way to do this, but for now I think the wrapper is the way to go. I'm gonna go ahead and close this; feel free to reopen if you have any other ideas or further problems :)
