Client Side Flow Control model for the LES protocol
Any node that takes on a server role in the LES protocol needs a way to limit the amount of work it does for each client peer during a given time period. A server can always just serve requests slowly when it is overloaded, but it is clearly beneficial to give some sort of flow control feedback to the clients. This way, clients can (and have an incentive to) behave nicely and not send requests too quickly in the first place, only to time out and resend them while the server is still working on them. They can also distribute requests better between the multiple servers they are connected to. And if clients can do this, servers can expect them to do it and drop them instantly if they break the flow control rules.
Let us assume that serving each request has a cost (depending on its type and parameters) for the server. This cost is determined by the server, but it has an upper limit for any valid request. The server assigns a "buffer" to each client, from which the cost of each request is deducted. The buffer has an upper limit and a recharge rate (cost units per second). The server can decide to recharge it more quickly at any time if it has free resources, but there is a guaranteed minimum recharge rate. If a request is received that would drain the client's buffer below zero, the client has broken the flow control rules and is instantly dropped.
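The server-side accounting described above can be sketched as follows. This is an illustrative model, not an implementation from any particular client; the names `bl`, `mrr`, and `bv` follow the parameter names used in this document.

```python
import time


class ClientBuffer:
    """Server-side flow control buffer for one client peer (sketch)."""

    def __init__(self, bl, mrr):
        self.bl = bl              # buffer limit
        self.mrr = mrr            # guaranteed minimum recharge rate (cost units/sec)
        self.bv = bl              # current buffer value, starts full
        self.last = time.monotonic()

    def _recharge(self, now):
        # Recharge at MRR, capped at BL. A real server may recharge
        # faster than MRR when it has spare resources.
        self.bv = min(self.bl, self.bv + self.mrr * (now - self.last))
        self.last = now

    def accept(self, cost, now=None):
        """Deduct the request cost; return False if the peer must be dropped."""
        self._recharge(now if now is not None else time.monotonic())
        self.bv -= cost
        return self.bv >= 0
```

A request whose cost would push the buffer below zero makes `accept` return `False`, which corresponds to instantly dropping the peer.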
The server announces three parameters during handshake:
- Buffer Limit (BL)
- Maximum Request Cost table (MaxCost)
- Minimum Rate of Recharge (MRR)
It sets the Buffer Value (BV) of the client to BL. If a request is received from a client, the server calculates the cost according to its own estimates (but not higher than MaxCost, which equals baseCost + reqCost * N, where N is the number of individual elements asked for in the request), then deducts it from BV. If BV goes negative, it drops the peer; otherwise it starts serving the request. The reply message contains a BV value that is the previously calculated BV plus the amount recharged during the time spent serving. Note that since the server can always determine any cost up to MaxCost for a request (and a client should not assume otherwise), it can drop a client without even processing the message if it receives one while BV < MaxCost, because that is already a protocol breach.
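The MaxCost bound can be computed directly from the cost table entries. The message types and cost values below are made-up illustrations; only the formula baseCost + reqCost * N comes from the text above.

```python
# Illustrative cost table: message type -> (baseCost, reqCost per element).
# The entries are hypothetical, not taken from any real server.
COST_TABLE = {
    "GetBlockHeaders": (150, 30),
    "GetReceipts": (200, 80),
}


def max_cost(msg_type, n):
    """Upper bound on the cost a server may charge for a request
    asking for n individual elements."""
    base_cost, req_cost = COST_TABLE[msg_type]
    return base_cost + req_cost * n
```

For example, a request for 10 block headers would be bounded by 150 + 30 * 10 = 450 cost units under this hypothetical table.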
The client always keeps a lowest estimate of its current buffer value, called BLE. The client
- doesn't send any request to the server when BLE < MaxCost
- deducts MaxCost from BLE when sending a request
- recharges BLE at the rate of MRR when it is less than BL
- when a reply message with a new BV value is received, sets BLE to that BV minus the sum of the MaxCost values of requests sent after the one belonging to this reply
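The client-side rules above can be sketched as a small estimator. This is a minimal illustration under the assumption that replies arrive in request order; the class and method names are invented for the example.

```python
import time


class ServerEstimate:
    """Client-side lowest estimate (BLE) of its buffer at one server (sketch)."""

    def __init__(self, bl, mrr):
        self.bl = bl              # server's announced Buffer Limit
        self.mrr = mrr            # server's announced Minimum Rate of Recharge
        self.ble = bl             # lowest estimate of the buffer, starts full
        self.last = time.monotonic()
        self.in_flight = []       # MaxCost of unanswered requests, oldest first

    def _recharge(self, now):
        # Recharge BLE at MRR, never above BL.
        self.ble = min(self.bl, self.ble + self.mrr * (now - self.last))
        self.last = now

    def can_send(self, max_cost, now=None):
        """Rule 1: don't send a request while BLE < MaxCost."""
        self._recharge(now if now is not None else time.monotonic())
        return self.ble >= max_cost

    def sent(self, max_cost):
        """Rule 2: pessimistically deduct MaxCost when sending."""
        self.ble -= max_cost
        self.in_flight.append(max_cost)

    def got_reply(self, bv, now=None):
        """Rule 4: adopt the server's reported BV, minus the MaxCost of
        requests sent after the one this reply belongs to (the server
        had not seen those yet when it computed BV)."""
        self._recharge(now if now is not None else time.monotonic())
        self.in_flight.pop(0)
        self.ble = bv - sum(self.in_flight)
```

Because the client deducts the worst-case MaxCost while the server deducts its (usually lower) actual cost, BLE never exceeds the server's real BV, so a client following these rules is never dropped.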