Buffer's queue does not remove canceled requests #71
I believe that this describes two separate issues.
This morning after thinking about it some more, I started to lean more in this direction:
Trying to combine the two needs a bit of a dance, so that the inner service in the proxy doesn't keep creating new … If …
Ah, sorry, there was another bit that I forgot to say in my previous comment. The …
Yes, that would help if there is a general timeout applied to all requests, since it's unlikely that a request at the front of the queue has not timed out while one further back has. However, it doesn't account for requests that are canceled for other reasons: in the proxy, for example, the server connection could be closed (since we coalesce requests to the same target from different connections), or we could have gotten a …
@seanmonstar has this since been fixed? If not, could you reiterate the issues after #72 landed?
Due to `Buffer` using a `futures::sync::mpsc` channel, any `ResponseFuture`s that have been dropped will continue to consume space in the queue until the underlying service has progressed through the requests in front of them. A `Buffer` could be wrapped in `Timeout`, which could cancel the requests if waiting took too long. The `oneshot::Sender` still being somewhere in the queue means the buffer's capacity could become full of canceled requests.

There's the additional issue that a wrapped `Reconnect` may wish to only retry a failed connect if there are still response futures waiting, but being in the queue makes it impossible to determine that.

This is kind of the "other" half of a pool. hyper does have a queue of waiters internally, and can check when they are canceled, since they are actually in a `VecDeque`. To allow new requests to enter this queue, it's wrapped in an `Arc<Mutex>`. While perhaps not the best thing in the world, it does work, and people still get excellent performance from hyper's client, so we could consider that as a first pass.

Related Conduit issue: linkerd/linkerd2#899
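For illustration, here is a minimal sketch of the `Timeout`-around-`Buffer` composition described above, written against a recent `tower` 0.4 / `tokio` release (with the `buffer`, `timeout`, and `util` features) rather than the versions this issue was filed against; the echo service, the bound of 32, and the 5-second deadline are assumptions made purely for the example.

```rust
use std::{convert::Infallible, time::Duration};

use tower::{buffer::Buffer, timeout::Timeout, Service, ServiceExt};

#[tokio::main]
async fn main() {
    // A stand-in inner service that just echoes its request (hypothetical).
    let inner = tower::service_fn(|req: String| async move { Ok::<_, Infallible>(req) });

    // `Buffer` queues up to 32 requests in an mpsc channel in front of `inner`.
    let buffered = Buffer::new(inner, 32);

    // `Timeout` fails the caller's response future after 5 seconds, and the
    // caller then drops it, but the request already pushed into the buffer's
    // channel keeps its slot until the worker task dequeues it -- which is
    // the capacity problem described above.
    let mut svc = Timeout::new(buffered, Duration::from_secs(5));

    let resp = svc.ready().await.unwrap().call("hello".to_string()).await;
    println!("{:?}", resp);
}
```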
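A rough sketch of the hyper-style alternative mentioned at the end of the issue text: waiters held in a `VecDeque` behind an `Arc<Mutex>`, where the dispatcher can simply skip waiters whose receivers were already dropped. The `Waiters` type and the use of `tokio::sync::oneshot` are hypothetical, purely to show the shape of the approach; this is not hyper's actual implementation.

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

use tokio::sync::oneshot;

/// A hypothetical waiter queue: because the senders sit in a plain
/// `VecDeque`, canceled waiters can be detected and discarded, unlike
/// senders buried inside an mpsc channel.
struct Waiters<T> {
    queue: Arc<Mutex<VecDeque<oneshot::Sender<T>>>>,
}

impl<T> Waiters<T> {
    fn new() -> Self {
        Waiters { queue: Arc::new(Mutex::new(VecDeque::new())) }
    }

    /// Register a new waiter and hand the caller the receiving half,
    /// which it can drop at any time to cancel.
    fn enqueue(&self) -> oneshot::Receiver<T> {
        let (tx, rx) = oneshot::channel();
        self.queue.lock().unwrap().push_back(tx);
        rx
    }

    /// Deliver a value to the next waiter that is still interested.
    /// `send` fails if the receiver was dropped (timed out, connection
    /// closed, ...), in which case the value is offered to the next waiter.
    fn dispatch(&self, mut value: T) -> Result<(), T> {
        let mut queue = self.queue.lock().unwrap();
        while let Some(tx) = queue.pop_front() {
            match tx.send(value) {
                Ok(()) => return Ok(()),
                Err(v) => value = v,
            }
        }
        // No live waiters were left in the queue.
        Err(value)
    }
}
```

A cloneable handle and integration with `poll_ready`/backpressure are left out; the point is only that a queue the dispatcher owns directly makes cancellation observable, which the `mpsc`-based `Buffer` cannot do today.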