Current implementation of poll_ready is not very useful #5

Closed

tailhook opened this issue Oct 21, 2016 · 4 comments

@tailhook

This elaborates on rust-lang/futures-rs#213 (comment)

Would you be able to elaborate on "I don't think it can be useful in the current form"?

Sure. I think we discussed that in chat, but let me summarize my opinion.

If you're implementing a client:

  1. There is no way for the client implementation to notify the caller that the service is ready again.

  2. There is no guarantee that poll_ready() is called before call(), so you end up with two options, both of which render poll_ready() useless (i.e. you can achieve the same with SinkError and without poll_ready()):

    a) arrange your call method to accept requests anyway (i.e. queue them internally; see the sketch after this list)
    b) always return an error when your service is not ready, so the caller retries the request on its own
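
A minimal sketch of option (a), under a simplified synchronous Service trait invented here purely for illustration (the real tokio-service trait is asynchronous and returns futures): call() accepts every request and queues the ones the inner service cannot take yet, which is exactly why a separate poll_ready() adds nothing.

use std::collections::VecDeque;

// Simplified stand-in for the real Service trait, illustration only.
trait Service {
    type Request;
    type Response;
    fn ready(&self) -> bool;
    fn call(&mut self, req: Self::Request) -> Self::Response;
}

// Wrapper that accepts requests unconditionally and queues them
// whenever the inner service is busy.
struct QueueingService<S: Service> {
    inner: S,
    pending: VecDeque<S::Request>,
}

impl<S: Service> QueueingService<S> {
    fn call(&mut self, req: S::Request) {
        self.pending.push_back(req);
        // Drain the queue as long as the inner service is ready;
        // responses are discarded here to keep the sketch short.
        while self.inner.ready() {
            match self.pending.pop_front() {
                Some(r) => { self.inner.call(r); }
                None => break,
            }
        }
    }
}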

My own use case at the time of the last discussion was: I have a pool of connections to the database and I want to send my next request to the least loaded connection. It might be implemented using different traits like:

trait InFlight {
    // Number of requests currently in flight on this connection.
    fn get_requests(&self) -> u64;
}

trait BufferSizes {
    // Number of bytes currently buffered for this connection.
    fn get_bytes_buffered(&self) -> u64;
}

And I might implement a connection pool with different load-balancing strategies:

struct DistributeByNumberOfInflightRequests<S: Service + InFlight>(S);
struct DistributeByBytesBuffered<S: Service + BufferSizes>(S);
struct DistributeRoundRobin<S: Service>(S); // for any service
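
For example, here is a minimal sketch of the first strategy, again using a simplified synchronous Service trait invented for illustration: the pool forwards each request to the backend that reports the fewest in-flight requests.

// Simplified stand-in for the real Service trait, illustration only.
trait Service {
    type Request;
    type Response;
    fn call(&self, req: Self::Request) -> Self::Response;
}

trait InFlight {
    fn get_requests(&self) -> u64;
}

// Pool that forwards each request to the least loaded backend.
struct DistributeByNumberOfInflightRequests<S: Service + InFlight> {
    backends: Vec<S>,
}

impl<S: Service + InFlight> DistributeByNumberOfInflightRequests<S> {
    fn call(&self, req: S::Request) -> S::Response {
        let least_loaded = self
            .backends
            .iter()
            .min_by_key(|s| s.get_requests())
            .expect("pool must not be empty");
        least_loaded.call(req)
    }
}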

If you're implementing a server, it's unclear when a transport might call poll_ready(). I can see two options:

  1. Before accepting a connection
  2. Before reading next portion of bytes from the socket

Still, once a request has already been read, there is no guarantee that poll_ready() returns true, so the transport layer has to be prepared to queue at least that one request.

At a glance, it still sounds appealing to stop accepting connections under heavy load, but the fact that there is no notification of when the service is ready again makes it very error-prone. The most obvious way to handle NotReady is to wait for a timeout and try again, but consider a system that processes 100k requests per second at a steady rate: its queues may fill up and drain 100 times a second, i.e. once every 10 ms. A retry timeout of anything larger than 10 ms may therefore keep landing while the queue is momentarily full, starving service usage for an arbitrarily long period of time. Such a system is very unpredictable; it may stop accepting connections while using only 10% CPU or so.

Just adding a notification mechanism to poll_ready() is not useful either, because it's merely equivalent to queueing the requests somewhere.

On the other hand, it's possible to build a pushback mechanism based on statistics, e.g. a middleware that counts the fraction of requests which got SinkErrors and applies a rate limit to the number of accepted connections per second if more than 10% of requests errored in the last second.
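
A minimal sketch of that idea, with hypothetical names and thresholds (nothing here is an actual tokio API): keep per-second counters and tighten the accept rate whenever the error fraction exceeds 10%.

// Statistics-based pushback, illustration only.
struct ErrorRatePushback {
    total: u64,
    errors: u64,
    max_accepts_per_sec: u64,
}

impl ErrorRatePushback {
    fn record(&mut self, was_sink_error: bool) {
        self.total += 1;
        if was_sink_error {
            self.errors += 1;
        }
    }

    // Called once per second with the counters from the last window.
    fn adjust(&mut self) {
        if self.total > 0 && self.errors * 10 > self.total {
            // More than 10% of requests hit SinkError: halve the
            // number of connections we accept per second.
            self.max_accepts_per_sec = (self.max_accepts_per_sec / 2).max(1);
        } else {
            // Recover slowly once the error rate drops.
            self.max_accepts_per_sec += 1;
        }
        self.total = 0;
        self.errors = 0;
    }
}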

And per-connection pushback is usually handled by tracking the number of in-flight requests.
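
For completeness, the per-connection variant might look like this sketch, with a hypothetical in-flight cap:

// Stop reading from a connection once its in-flight request
// count reaches a cap (the cap itself is hypothetical).
struct ConnectionState {
    in_flight: u64,
    max_in_flight: u64,
}

impl ConnectionState {
    fn can_read_more(&self) -> bool {
        self.in_flight < self.max_in_flight
    }

    fn request_started(&mut self) {
        self.in_flight += 1;
    }

    fn request_finished(&mut self) {
        self.in_flight -= 1;
    }
}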


Sorry for the long write-up, but at the end of the day:

  1. There are alternatives to explore before making this a standard.
  2. There are no known good strategies, either for implementing or for using poll_ready().
  3. poll_ready() can be added later with a default implementation that is always ready, as it currently is in every example out there.
@alexcrichton (Contributor)

cc @aturon

@aturon (Contributor) commented Oct 25, 2016

FWIW I share some of these concerns, and think we should remove poll_ready for the 0.1 release. As you say, it can be added later, and I think we need to take more comprehensive steps for backpressure in future releases.

@aturon (Contributor) commented Oct 25, 2016

cc #6

@aturon (Contributor) commented Jan 30, 2017

Closing as resolved for 0.1. We'll be revisiting backpressure more generally soon.

aturon closed this as completed Jan 30, 2017