
Add handlers to provide some generic backpressure implementations. #6662

Closed
normanmaurer wants to merge 1 commit

Conversation

normanmaurer
Member

Motivation:

A common way to implement back-pressure is to stop reading from the socket until enough data has been flushed out. We should provide some generic, re-usable handlers for this.

Modifications:

Add two handlers that apply simple back-pressure when added to the pipeline.

Result:

Include simple handlers to support back-pressure.
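For illustration, a minimal sketch of the kind of handler the description refers to (not the actual code in this PR) could toggle autoRead based on channel writability:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Hypothetical sketch, not the code added by this PR: suspend reading while the
// channel is unwritable and resume once the outbound buffer drains again.
public class SimpleBackPressureHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelWritabilityChanged(ChannelHandlerContext ctx) {
        // Stop reading from the socket while we cannot flush fast enough;
        // autoRead is re-enabled once the channel becomes writable again.
        ctx.channel().config().setAutoRead(ctx.channel().isWritable());
        ctx.fireChannelWritabilityChanged();
    }
}
```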

@normanmaurer
Member Author

@Scottmitch @trustin PTAL... I still need to add some unit tests, though. I wrote these generic implementations while using tcpkali to benchmark Netty with the EchoServer.

@normanmaurer
Member Author

Still thinking about how to test this easily without firing up a real server / client.
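One possible approach, sketched under the assumption that an EmbeddedChannel propagates writability changes from its outbound buffer, is to drive the handler without a real transport (SimpleBackPressureHandler here is the illustrative handler sketched above, not the PR's code):

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import io.netty.channel.embedded.EmbeddedChannel;
import org.junit.Test;

// Hypothetical test sketch: drive writability changes through an EmbeddedChannel
// instead of a real server/client.
public class SimpleBackPressureHandlerTest {

    @Test
    public void autoReadFollowsWritability() {
        EmbeddedChannel channel = new EmbeddedChannel(new SimpleBackPressureHandler());
        // Make the channel become unwritable almost immediately.
        channel.config().setWriteBufferLowWaterMark(1);
        channel.config().setWriteBufferHighWaterMark(2);

        // Writing (without flushing) more than the high water mark should flip
        // writability and, via the handler, disable autoRead.
        channel.write(channel.alloc().buffer().writeZero(1024));
        assertFalse(channel.config().isAutoRead());

        // Flushing drains the outbound buffer, so autoRead should come back.
        channel.flush();
        assertTrue(channel.config().isAutoRead());
    }
}
```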

@normanmaurer
Member Author

@Scottmitch PTAL again... added tests.

@normanmaurer normanmaurer self-assigned this May 3, 2017
@normanmaurer normanmaurer added this to the 4.0.47.Final milestone May 3, 2017
@trustin
Member

trustin commented May 18, 2017

I think disabling a channel's reads when it becomes unwritable can cause a deadlock, especially when both the client and the server try to send something large and both use this handler. Note that the term 'large' can be highly situational, depending on the size of the socket buffer and the contention.

  1. The server sends a large response. Its channel becomes unwritable and thus read is disabled.
  2. The client sends a large request. Its channel becomes unwritable and thus read is disabled.
  3. Both the server and the client stop reading and the traffic is stalled.

Given that we do not always have full control over the behavior of a remote peer, a lot of caution is needed to make this usable.

@normanmaurer
Member Author

@trustin that's actually a valid concern... any good ideas here?

@trustin
Member

trustin commented May 18, 2017

Often what can be done to implement back-pressure is to stop reading from the socket until we were able to flush out enough data.

My comment above notwithstanding, I'm not sure this is a good way to implement back-pressure. To implement it properly:

  • A handler should stop reading when numBytesDecodedSoFar - numBytesActuallyConsumedSoFar > someThreshold. This limits the number of pending unprocessed bytes for all channels. In practice, we have to work with the application layer to know the numBytesDecodedSoFar - numBytesActuallyConsumedSoFar part. For example, this is what I'm using for Armeria (see the sketch after this list).
  • When writing to a channel, the writer has to account for the number of pending outbound bytes and should not produce any more outbound traffic until it decreases to a certain level. This is what write watermarks in Netty are for, and they require cooperation between Netty and the application, as you know.
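A minimal sketch of the first bullet, with hypothetical names (the byte counters are kept as fields, and the onBytesConsumed callback would have to be called by the application; none of this is an existing Netty or Armeria API):

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Hypothetical sketch of the inbound-side idea: stop reading once the number of
// decoded-but-not-yet-consumed bytes exceeds a threshold.
public class InboundFlowControlHandler extends ChannelInboundHandlerAdapter {

    private final long threshold;
    private long decodedBytes;
    private long consumedBytes;

    public InboundFlowControlHandler(long threshold) {
        this.threshold = threshold;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (msg instanceof ByteBuf) {
            decodedBytes += ((ByteBuf) msg).readableBytes();
        }
        maybeToggleAutoRead(ctx);
        ctx.fireChannelRead(msg);
    }

    // Called by the application (on the channel's event loop) once it has
    // actually processed the given number of bytes.
    public void onBytesConsumed(ChannelHandlerContext ctx, long bytes) {
        consumedBytes += bytes;
        maybeToggleAutoRead(ctx);
    }

    private void maybeToggleAutoRead(ChannelHandlerContext ctx) {
        ctx.channel().config().setAutoRead(decodedBytes - consumedBytes <= threshold);
    }
}
```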

@normanmaurer
Member Author

@trustin another thing we could do is to "gracefully" slow down reading by suppressing read and scheduling it again a few ms later.
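A sketch of that idea, assuming a hypothetical handler that intercepts the outbound read() operation and re-schedules it while the channel is unwritable:

```java
import java.util.concurrent.TimeUnit;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOutboundHandlerAdapter;

// Hypothetical sketch of the "slow down instead of stop" idea: when the channel
// is unwritable, delay read() by a few milliseconds rather than suppressing it
// entirely, so both peers keep making (slow) progress.
public class DelayedReadHandler extends ChannelOutboundHandlerAdapter {

    private final long delayMillis;

    public DelayedReadHandler(long delayMillis) {
        this.delayMillis = delayMillis;
    }

    @Override
    public void read(ChannelHandlerContext ctx) {
        if (ctx.channel().isWritable()) {
            ctx.read();
        } else {
            // Try again shortly instead of not reading at all.
            ctx.executor().schedule(ctx::read, delayMillis, TimeUnit.MILLISECONDS);
        }
    }
}
```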

@trustin
Member

trustin commented May 18, 2017

@normanmaurer That will help as well, although the throughput will become noticeably slower when the race condition hits.

@normanmaurer
Member Author

@trustin yes of course, but it should stabilise again at some point.

@trustin
Member

trustin commented May 18, 2017

So... I think we can introduce a construct similar to what I wrote for Armeria for controlling inbound-side traffic. (We already have the outbound-side construct.)
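For reference, the existing outbound-side construct is Netty's write-watermark support; a hedged illustration using the Netty 4.1-style WriteBufferWaterMark option (the values are made up for the example):

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelOption;
import io.netty.channel.WriteBufferWaterMark;

// Illustration only: configure write watermarks so writability flips to false
// once pending outbound bytes exceed the high water mark and back to true once
// they drop below the low water mark.
public final class OutboundWatermarkExample {

    public static void configure(ServerBootstrap bootstrap) {
        bootstrap.childOption(ChannelOption.WRITE_BUFFER_WATER_MARK,
                new WriteBufferWaterMark(8 * 1024, 32 * 1024));
        // Producers should then only write while channel.isWritable() is true
        // and resume when channelWritabilityChanged(...) fires.
    }

    private OutboundWatermarkExample() { }
}
```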

@normanmaurer
Member Author

@trustin you mean as a ChannelHandler?

@trustin
Member

trustin commented May 18, 2017

@normanmaurer Not sure. I think it's doable in many ways. It may yield a better experience if we integrate this feature into the core like we did for writes. I didn't think much about this, though.

@Scottmitch
Member

The level above Netty may apply message-based flow control instead of (or in addition to) byte-based flow control, and maybe this can be handled more gracefully at the protocol level (e.g. h2 has streams, reactive sockets have message-based flow control). Unless we want to tackle these concerns now, perhaps it is good enough for Netty to provide the tools and just let higher-level applications (which presumably have more context) wire everything together ... wdyt?

@normanmaurer normanmaurer removed this from the 4.0.49.Final milestone Jul 6, 2017
@normanmaurer
Member Author

Let's close this one then.
