
Race for Netty between cancel and stream creation #17

Closed
ejona86 opened this issue Jan 20, 2015 · 1 comment · Fixed by #59
ejona86 commented Jan 20, 2015

AbstractClientStream.cancel won't cancel the stream on the wire if it appears the stream has not yet been allocated, as is described by the comment:

```java
// Only send a cancellation to remote side if we have actually been allocated
// a stream id and we are not already closed. i.e. the server side is aware of the stream.
```

However, when that is the case, the transport is never notified of the stream's destruction, so the transport will still eventually create the stream on the wire and it will never be cancelled. This does not appear to be a problem for the OkHttp transport, since it allocates the stream id before returning any newly created stream. Netty, however, delays id allocation until just before the stream headers are sent, which 1) always happens asynchronously and 2) may be substantially delayed by MAX_CONCURRENT_STREAMS.

It appears that the optimization in AbstractClientStream should be removed outright and sendCancel's documentation updated to specify that implementations must handle such cases (as opposed to directly causing RST_STREAM). Both OkHttp and Netty seem to handle such cases already. More importantly, the optimization seems highly prone to races, given that id allocation occurs on the transport thread whereas AbstractClientStream.cancel happens on some application thread; using the normal synchronization between application and transport threads seems more than efficient enough, and simpler.
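To illustrate the suggested fix, here is a minimal sketch (hypothetical classes, not the actual grpc-java code): cancel() always records the cancellation instead of skipping it when no id has been allocated, and the transport checks that flag before creating the stream on the wire, so a pending creation is simply dropped.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical stand-in for a client stream whose id is allocated later,
// on the transport thread.
class PendingStream {
    static final int NO_ID = -1;
    private volatile int id = NO_ID;
    private final AtomicBoolean cancelled = new AtomicBoolean();

    /** Application thread: always record the cancel; never optimize it away. */
    void cancel() {
        cancelled.set(true);
        // Here the real code would call sendCancel(); the transport is
        // expected to handle the "no id yet" case itself rather than the
        // stream deciding whether a RST_STREAM is needed.
    }

    /** Transport thread: allocate an id only if the stream is still live. */
    boolean tryCreateOnWire(int newId) {
        if (cancelled.get()) {
            return false; // discard the pending creation; nothing hits the wire
        }
        id = newId;
        return true;
    }

    int id() { return id; }
}

public class CancelRaceSketch {
    public static void main(String[] args) {
        PendingStream stream = new PendingStream();
        stream.cancel();                         // cancel races ahead of creation
        boolean created = stream.tryCreateOnWire(3);
        System.out.println("created=" + created + " id=" + stream.id());
    }
}
```

With the optimization removed, the "cancel before allocation" ordering is handled by the transport's own check instead of silently leaking a stream.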

@ejona86 ejona86 added the bug label Jan 21, 2015
ejona86 commented Jan 30, 2015

It turns out Netty does not have a race because it blocks until the Stream has been allocated. In NettyClientTransport (notice get()):

```java
// Write the request and await creation of the stream.
channel.writeAndFlush(new CreateStreamCommand(http2Headers, stream)).get();
```

In NettyClientHandler we can see that the stream id is set before the promise is completed:

```java
// Notify the stream that it has been created.
stream.id(streamId);
promise.setSuccess();
```

But this does still mean that the cancel() optimization is unnecessary and a bit misleading.
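The ordering that makes this safe can be sketched as follows (hypothetical names, not the actual Netty classes): the handler assigns the stream id before completing the promise, and the caller blocks on get(), so by the time the application thread can call cancel(), the id is already visible.

```java
import java.util.concurrent.CompletableFuture;

public class BlockingCreateSketch {
    // Hypothetical stand-in for a stream whose id is set on the transport thread.
    static class Stream {
        private volatile int id = -1;
        void id(int id) { this.id = id; }
        int id() { return id; }
    }

    // Simulates the write of a CreateStreamCommand: on a separate
    // ("transport") thread, the id is assigned first, then the promise is
    // completed, mirroring the ordering in NettyClientHandler.
    static CompletableFuture<Void> writeCreateStream(Stream stream, int allocatedId) {
        CompletableFuture<Void> promise = new CompletableFuture<>();
        new Thread(() -> {
            stream.id(allocatedId);   // id set BEFORE the promise completes
            promise.complete(null);
        }).start();
        return promise;
    }

    public static void main(String[] args) throws Exception {
        Stream stream = new Stream();
        writeCreateStream(stream, 3).get(); // block, like writeAndFlush(...).get()
        System.out.println("id=" + stream.id()); // guaranteed allocated here
    }
}
```

Completing the future after the id assignment establishes a happens-before edge, which is why no stream can be cancelled while its id is still unallocated.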

@ejona86 ejona86 self-assigned this Jan 30, 2015
@ejona86 ejona86 added code health and removed bug labels Jan 30, 2015
@lock lock bot locked as resolved and limited conversation to collaborators Sep 23, 2018