The future of balance::Pool #456
Comments
First of all, the pool implementation is broken: tower-rs/tower#456 Second, I'm tired of having benchmarks give highly volatile (and sometimes just straight-up bad) results just because of connections being dynamically managed.
@jonhoo I think that's all right. My main question is if/how we can evolve …
I'm not sure — I think the big question is how we can add a notion of service priority to …
Do we want to maybe move it to its own crate?
That could make sense — I think in its current form it's not super useful, but I do think there exists a reasonable abstraction here. Whether it'd be better to write that from scratch or start from this, I'm not entirely sure.
IMO, it would be best for someone with an actual use case for this to pursue it. For instance, it may make sense to extract hyper's connection pool into a reusable component (if hyper were actually going to use this). Otherwise, unused code isn't worth the maintenance cost.
Per #456, there are a number of issues with the `balance::Pool` API that limit its usability, and it isn't widely used. In the discussion on that issue, we agreed that it should probably just be removed in 0.5 --- it can be replaced with something more useful later. This branch removes `balance::Pool`. Closes #456.
I think `balance::Pool`, in its current form, should be removed from tower. This is for a couple of reasons:

`Pool` will always try to balance just under capacity. Imagine you have a system where `N` connections are needed to keep up with load. `Pool` will pretty quickly get to `N`. But when it does, no calls to `poll_ready` will return `Pending` any more, since the system is keeping up. So, `Pool` will start to lower its estimate of the current load. Eventually, it will drop below the bar the user set, no matter what that bar is. At that point, `Pool` will drop a connection, so we're down to `N-1`. The system will no longer be keeping up, so eventually `Pool` will create a new connection to satisfy the failing `poll_ready` calls. Then the cycle begins anew. It is possible to mitigate this by having `Pool` take into account how many ready services there are, and only count a `poll_ready -> Ready` event as overprovisioning if the number of ready services is `>1`, but that further complicates the design.
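The flip-flop cycle described above is easy to reproduce in a toy simulation. This is a self-contained sketch; the thresholds and the smoothing rule are invented for illustration and are not tower's actual load estimator:

```rust
// Toy model of the feedback loop: while we have enough connections,
// poll_ready never returns Pending, so the load estimate only decays;
// once it falls below the low-water mark, a connection is dropped, the
// system falls behind, and the estimate climbs back up.
fn simulate_pool_flip_flops(steps: usize) -> usize {
    let needed = 4; // N connections needed to keep up with load
    let (low, high) = (0.3, 0.7); // made-up user-configured thresholds
    let mut connections = needed;
    let mut estimate: f64 = 0.5; // smoothed "pending rate" estimate
    let mut changes = 0;
    for _ in 0..steps {
        // With >= N connections the system keeps up: no Pending observed.
        let pending = if connections >= needed { 0.0 } else { 1.0 };
        estimate = 0.9 * estimate + 0.1 * pending;
        if estimate < low && connections > 1 {
            connections -= 1; // looks underutilized: drop a connection
            changes += 1;
        } else if estimate > high {
            connections += 1; // failing poll_ready: add a connection
            changes += 1;
        }
    }
    changes
}
```

However the thresholds are chosen, the connection count ends up bouncing between `N-1` and `N` rather than settling.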
Most connection pools I have seen work in a different way: when a caller needs a connection and none are available (`poll_ready -> Pending`), they create a new connection (up to the limit). The pool then keeps track of the last time each connection was used. If a connection sits idle for some user-configured time `T`, it is dropped. This is much easier to configure than the parameters to `Pool`, and avoids the flip-flopping that the current `Pool` can experience when load fluctuates a lot.

All in all, this suggests to me that the `Pool` we have should be removed, and probably replaced with something else later on (that doesn't have to happen at the same time). Exactly what that something else is warrants some discussion:
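As a concrete baseline for that discussion, here is a minimal sketch of the idle-timeout style of pool described above. Everything here (`IdlePool`, tick-based time, `u32` connection ids) is invented for illustration and is not a tower API:

```rust
use std::collections::VecDeque;

/// Sketch of an idle-timeout pool. Time is an abstract tick counter
/// so the example stays self-contained and deterministic.
struct IdlePool {
    max: usize,
    idle_timeout: u64,
    // Idle connections, most recently used at the back: (conn_id, last_used).
    idle: VecDeque<(u32, u64)>,
    next_id: u32,
    live: usize,
}

impl IdlePool {
    fn new(max: usize, idle_timeout: u64) -> Self {
        IdlePool { max, idle_timeout, idle: VecDeque::new(), next_id: 0, live: 0 }
    }

    /// Drop connections that have sat idle for longer than `idle_timeout`.
    fn reap(&mut self, now: u64) {
        while let Some(&(_, last)) = self.idle.front() {
            if now.saturating_sub(last) > self.idle_timeout {
                self.idle.pop_front();
                self.live -= 1;
            } else {
                break;
            }
        }
    }

    /// Check out a connection: reuse the most recently used idle one,
    /// or open a new one (up to `max`) if none is available.
    fn checkout(&mut self, now: u64) -> Option<u32> {
        self.reap(now);
        if let Some((id, _)) = self.idle.pop_back() {
            Some(id)
        } else if self.live < self.max {
            self.live += 1;
            self.next_id += 1;
            Some(self.next_id - 1)
        } else {
            None // at capacity and nothing idle
        }
    }

    /// Return a connection to the pool, stamping when it was last used.
    fn checkin(&mut self, id: u32, now: u64) {
        self.idle.push_back((id, now));
    }
}
```

Handing out the most recently used idle connection (the back of the deque) lets the least recently used ones age out and get reaped, which is exactly the preference the next point discusses.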
With a connection pool that tracks idle time, the pool needs to preferentially use connections with the lowest idle time. We should hopefully be able to combine this with `ReadyCache`, though it would need to be augmented with something like a heap so that it can efficiently pick out the ready service that was most recently used. There's also a question of how this might be combined with `p2c` — maybe we could have it sample just the two most recently used connections and pick the one with the lower load? If it picked randomly (like it currently does), the connections would all be used over time, and it's unlikely that any of them would expire (I think). @olix0r may have useful thoughts here.
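One shape the "p2c over the two most recently used ready services" idea could take, as a rough sketch: ready services are represented here as `(last_used_tick, load)` pairs, and a full sort stands in for the heap mentioned above (a real implementation would want something cheaper). The representation and name are entirely made up:

```rust
/// Pick a service by sampling the two most recently used ready services
/// and choosing the one with the lower load. Returns an index into
/// `services`, or None if there are no ready services.
fn pick_p2c_mru(services: &[(u64, f64)]) -> Option<usize> {
    if services.is_empty() {
        return None;
    }
    // Order indices by recency of use (highest tick first). A heap over
    // last_used would avoid re-sorting on every pick.
    let mut by_recency: Vec<usize> = (0..services.len()).collect();
    by_recency.sort_by(|&a, &b| services[b].0.cmp(&services[a].0));
    let first = by_recency[0];
    let second = *by_recency.get(1).unwrap_or(&by_recency[0]);
    // Of the two most recently used, take the one under less load.
    if services[first].1 <= services[second].1 {
        Some(first)
    } else {
        Some(second)
    }
}
```

Because only the most recently used connections are ever sampled, the cold ones keep idling and can eventually expire, unlike with uniform random sampling.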
Separately from the above, there's also a question of whether `Pool` should even function the way it currently does. At the moment, you create a `Pool` from a `MakeService`, and the `Pool` implements the same `Service` as `MakeService::Service`. This requires mutable access to the `Pool`, or the use of a `Buffer<Pool>`. A different way to implement the pool is as a `Service<Response = MakeService::Service>`: you poll it for a service, which you can then use (potentially for multiple requests), and then return it to the pool when you're done (probably by dropping it). This has the attractive property that the caller can avoid going through the shared pool for every request in a sequence, and can instead just continue to use the connection they were given. Of course, the downside is that you now need to remember to return the service, and there's probably some synchronization needed for that (though maybe just an `mpsc`?). Load also won't be spread as evenly (though is that a problem?).

Ultimately, I wonder if we may want both kinds of pool: one for spreading load across multiple connections, and one for sharing connections across many consumers. They aren't really the same use case.
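The checkout-style design with the speculated `mpsc` return path might look roughly like this; all of the type and method names are invented for illustration, and real connections would stand in for the generic `T`:

```rust
use std::sync::mpsc::{channel, Receiver, Sender};

/// Sketch of a "checkout" pool: callers take a whole connection out,
/// use it for as many requests as they like, and dropping the guard
/// sends it back to the pool over an mpsc channel.
struct CheckoutPool<T> {
    idle: Vec<T>,
    returns: Receiver<T>,
    return_tx: Sender<T>,
}

/// Guard returned by `checkout`; returns the connection on drop.
struct Checked<T> {
    conn: Option<T>,
    return_tx: Sender<T>,
}

impl<T> CheckoutPool<T> {
    fn new(conns: Vec<T>) -> Self {
        let (tx, rx) = channel();
        CheckoutPool { idle: conns, returns: rx, return_tx: tx }
    }

    /// Take a connection out of the pool, if one is available.
    fn checkout(&mut self) -> Option<Checked<T>> {
        // First collect any connections returned since the last call.
        while let Ok(conn) = self.returns.try_recv() {
            self.idle.push(conn);
        }
        let conn = self.idle.pop()?;
        Some(Checked { conn: Some(conn), return_tx: self.return_tx.clone() })
    }
}

impl<T> std::ops::Deref for Checked<T> {
    type Target = T;
    fn deref(&self) -> &T {
        self.conn.as_ref().expect("present until drop")
    }
}

impl<T> Drop for Checked<T> {
    fn drop(&mut self) {
        // "Returning the service" is just an mpsc send, as speculated above.
        if let Some(conn) = self.conn.take() {
            let _ = self.return_tx.send(conn);
        }
    }
}
```

The guard means the caller never touches shared pool state between requests, at the cost of the send-on-drop synchronization and less even load spreading.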