
Issue #8025. Ignoring HEADER and DATA frames for streams that may have existed in the past. #8028

Closed

Conversation

shorea
Copy link
Contributor

@shorea shorea commented Jun 16, 2018


Motivation:

Correctly ignore frames that come in after a RST_STREAM is sent.

Modification:

Ignore frames for streams that may have existed in the past instead of throwing an exception.

Result:

Fixes #8025.

when(connection.streamMayHaveExisted(STREAM_ID)).thenReturn(false);
when(connection.stream(STREAM_ID)).thenReturn(null);
final ByteBuf data = dummyData();
decode().onDataRead(ctx, STREAM_ID, data, 0, true);
Member

@shorea do we need to call data.release() in a finally block here ?
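For reference, the pattern being suggested is a minimal try/finally around the buffer's lifetime. The sketch below uses a hypothetical RefCounted stand-in rather than Netty's real ByteBuf, but the shape is the same: release() must run even when the decode call throws, so the test leaks nothing.

```java
// Sketch of the try/finally release pattern the reviewer is asking about.
// RefCounted and DummyBuf are hypothetical stand-ins for Netty's
// ReferenceCounted/ByteBuf; the point is that release() always runs.
final class ReleaseSketch {
    interface RefCounted {
        boolean release();
        int refCnt();
    }

    static final class DummyBuf implements RefCounted {
        private int refCnt = 1;
        public boolean release() { return --refCnt == 0; }
        public int refCnt() { return refCnt; }
    }

    /** Runs the decode body, guaranteeing the buffer is released even on failure. */
    static void decodeAndRelease(RefCounted data, Runnable decode) {
        try {
            decode.run();
        } finally {
            data.release(); // executes on both the success and exception paths
        }
    }
}
```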

Member

@shorea ping me once you addressed it.

Contributor Author

@normanmaurer just pushed a fix for this.

@normanmaurer
Member

@ejona86 does this look ok to you (except the missing release call in the test) ?

@@ -587,8 +600,21 @@ public void pushPromiseReadAfterGoAwayShouldAllowFramesForStreamCreatedByLocalEn
verify(listener).onPushPromiseRead(eq(ctx), anyInt(), anyInt(), any(Http2Headers.class), anyInt());
}

@Test
public void pushPromiseReadForUnknownStreamThatMayHaveExistedShouldBeIgnored() throws Exception {
Contributor

I don't think we should be ignoring these, we should be resetting. The spec states (bottom of the closed state description):

An endpoint might receive a PUSH_PROMISE frame after it sends RST_STREAM. PUSH_PROMISE causes a stream to become "reserved" even if the associated stream has been reset. Therefore, a RST_STREAM is needed to close an unwanted promised stream.

Contributor Author

@bryce-anderson I agree but I also don't know how to fix this. In https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java#L766 we require the parent stream to be non-null. Do you have an idea in mind in how to make this compliant with the spec?

Contributor

Do we need to use shouldIgnoreHeadersOrDataFrame for a PUSH_PROMISE? From a brief read it doesn't seem appropriate, since it uses the parent stream's ID in the streamCreatedAfterGoAwaySent(streamId) method, but we should be considering the promised stream id: the GOAWAY frame we would have sent carries the last peer-initiated stream id to be processed.

To fix the behavior can we do the following: ignore the frame if streamCreatedAfterGoAwaySent(promisedStreamId) == true, otherwise send a RST_STREAM if the parent stream is null, otherwise continue on with the existing code path?
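That three-way decision could be sketched roughly as follows (all names here are illustrative stand-ins, not Netty's actual API):

```java
// Hypothetical sketch of the decision flow proposed above for an incoming
// PUSH_PROMISE frame. The enum and method names are illustrative only.
final class PushPromiseDecision {
    enum Action { IGNORE, RESET_PROMISED_STREAM, PROCESS }

    static Action decide(boolean promisedStreamCreatedAfterGoAwaySent,
                         boolean parentStreamExists) {
        if (promisedStreamCreatedAfterGoAwaySent) {
            // The peer raced with our GOAWAY; the frame is expected noise.
            return Action.IGNORE;
        }
        if (!parentStreamExists) {
            // Parent was reset; RFC 7540 says an unwanted promised stream
            // still needs a RST_STREAM to close it.
            return Action.RESET_PROMISED_STREAM;
        }
        return Action.PROCESS; // continue with the existing code path
    }
}
```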

Contributor Author

+1, running the tests now.

Contributor

@bryce-anderson bryce-anderson Jun 19, 2018

Also, I suppose we should check if the parent stream ever could have existed, and if the promised stream does get sent a reset, we need to make sure we 'observe' the promised stream id via incrementExpectedStreamId(streamId), which unfortunately is in the DefaultHttp2Connection reservePushStream method. A lot of protocol checks in there apply to the promise frame even if the parent has already been reset. Maybe the right thing to do is throw a stream error in there instead of a protocol error if the parent stream doesn't exist (which, if it's valid, would be because it was reset)?

Contributor Author

@bryce-anderson let me know if the latest commit is in line with what you were thinking. I wasn't 100% sure what effect throwing a stream error has, but it seems to eventually send a RST_STREAM. Is that accurate?

https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java#L704

Contributor

@bryce-anderson bryce-anderson left a comment

lgtm. Just the one comment, which I'm not 100% sure about and I think is a preexisting bug anyway. It would be good to get someone else to double check the PUSH_PROMISE changes. @normanmaurer, @ejona86, @mosesn.

case HALF_CLOSED_LOCAL:
// Allowed to receive push promise in these states.
break;
default:
Contributor

If the stream was reset, we could be in the CLOSED state, and this will interpret it as a connection error. I don't know if we'll still have an Http2Stream at that point though, so it may not be possible.

Contributor Author

I'm not sure it's possible either; per my understanding of the code, we remove the stream from the connection when we send the RST_STREAM.

Member

@mosesn @ejona86 can you please have a final look ....

Member

In general, streams are removed from the streamMap when the state transitions to closed. However, to avoid re-entry/sequencing issues we may delay the removal from the streamMap until the stack unrolls during iteration. Unless someone is queuing frames up during iteration we are unlikely to run into this issue, but it can't hurt to accommodate this scenario, just like we do in onHeadersRead.

A somewhat related note is that we may retain non-active stream objects for a longer duration to retain priority information (see WeightedFairQueueByteDistributor). However, this is independent of the streamMap on the Http2Connection.

@shorea
Contributor Author

shorea commented Jun 20, 2018

@normanmaurer can this be merged and released?

Member

@carl-mastrangelo carl-mastrangelo left a comment

LGTM

verifyNoMoreInteractions(localFlow);
decode().onDataRead(ctx, STREAM_ID, data, padding, true);
verify(localFlow)
.receiveFlowControlledFrame(eq((Http2Stream) null), eq(data), eq(padding), eq(true));
Member

optional: if every arg is eq, you can remove the wrapping of each of them.

Contributor Author

Good call, will simplify

verify(listener, never()).onDataRead(eq(ctx), anyInt(), any(ByteBuf.class), anyInt(), anyBoolean());
}
// Verify that the event was absorbed and not propagated to the observer.
verify(listener, never()).onDataRead(eq(ctx), anyInt(), any(ByteBuf.class), anyInt(), anyBoolean());
Member

should this be for any ctx?

Contributor Author

+1


// sent). We don't have enough information to know for sure, so we choose the lesser of the two errors.
throw streamError(streamId, STREAM_CLOSED, "Received %s frame for an unknown stream %d",
frameName, streamId);
// If the stream could have existed in the past we assume this is a frame sent by the remote
Member

Just to be sure we stay in sync with the remote peer, we can always throw a streamError to inform the remote peer that we don't understand the current frame. This will help provide visibility in the event the remote/local state gets out of sync for some reason. For example, the spec recommends it for the PUSH_PROMISE case and in general allows multiple RST_STREAM frames in extraneous scenarios.

An endpoint might receive a PUSH_PROMISE frame after it sends
RST_STREAM. PUSH_PROMISE causes a stream to become "reserved"
even if the associated stream has been reset. Therefore, a
RST_STREAM is needed to close an unwanted promised stream.

However, an endpoint MAY send additional RST_STREAM
frames if it receives frames on a closed stream after more than a
round-trip time. This behavior is permitted to deal with misbehaving
implementations.

Some considerations:

  • There should be some limit to how many RST_STREAMs we are willing to send in this scenario before either going into graceful close or eventually giving up and hard closing; for example, we don't want to send RST_STREAMs indefinitely. However, this heuristic can be implemented in a followup PR and doesn't change the existing behavior.
  • The spec says the following, but in this case we have discarded the local state and don't know if we have already sent a RST_STREAM at this time; the spec does, however, allow sending multiple RST_STREAMs (as indicated above).

An endpoint MUST ignore frames that it
receives on closed streams after it has sent a RST_STREAM frame.
An endpoint MAY choose to limit the period over which it ignores
frames and treat frames that arrive after this time as being in
error.

if (connection.streamMayHaveExisted(streamId)) {
  throw streamError(streamId, STREAM_CLOSED, "Received %s frame for an unknown stream %d", frameName, streamId);
} else {
  throw connectionError(PROTOCOL_ERROR, "Stream %d does not exist", streamId);
}

Contributor Author

@Scottmitch per your second consideration (if I'm understanding correctly), that would negate the original purpose of the PR. We want to ignore any data frames that come after the reset and not propagate failures to the stream.

Member

I looked at the original issue and here is what the RFC says about the scenario when we have sent a RST_STREAM frame and receive a DATA frame:

https://tools.ietf.org/html/rfc7540#section-5.1

An endpoint MUST ignore frames that it
receives on closed streams after it has sent a RST_STREAM frame.
An endpoint MAY choose to limit the period over which it ignores
frames and treat frames that arrive after this time as being in
error.

https://tools.ietf.org/html/rfc7540#section-6.1

If a DATA frame is received
whose stream is not in "open" or "half-closed (local)" state, the
recipient MUST respond with a stream error (Section 5.4.2) of type
STREAM_CLOSED.

https://tools.ietf.org/html/rfc7540#section-5.4.2

A RST_STREAM is the last frame that an endpoint can send on a stream.
The peer that sends the RST_STREAM frame MUST be prepared to receive
any frames that were sent or enqueued for sending by the remote peer.
These frames can be ignored, except where they modify connection
state (such as the state maintained for header compression
(Section 4.3) or flow control).

Normally, an endpoint SHOULD NOT send more than one RST_STREAM frame
for any stream. However, an endpoint MAY send additional RST_STREAM
frames if it receives frames on a closed stream after more than a
round-trip time. This behavior is permitted to deal with misbehaving
implementations.

To avoid looping, an endpoint MUST NOT send a RST_STREAM in response
to a RST_STREAM frame.

Here is my interpretation of the above somewhat contradictory statements:

  1. If state transitions in a coordinated fashion between peers into the CLOSED state and a DATA frame is received in the CLOSED state, then you MUST send a RST_STREAM.
  2. Sending a RST_STREAM is an uncoordinated means of forcing the stream into the closed state, and therefore you may receive frames that are already in flight. These frames can be ignored if you are reasonably confident that the remote peer has not yet received the RST_STREAM, but you are also allowed to send multiple RST_STREAMs.
  3. Receiving a RST_STREAM frame while the stream is in the CLOSED state should have no protocol impacts.

Given the above criteria my preferred way to handle the scenario is the following:

  • If the stream may have existed ... tracking/estimating RTT time is error prone and can be expensive for what is in large part a boundary condition. The RFC permits sending multiple RST_STREAM frames, and doing so will have no negative implications from a protocol perspective*. A simpler approach would be to continue to send the RST_STREAM frames, which provides positive acknowledgement for good peers that this frame was not accepted, and have some heuristic to "fail hard" (e.g. connection error) to catch the case of "misbehaving peers".
  • if the stream was never in existence ... protocol error
  • reduce the log level in io.grpc.netty.NettyServerHandler for this case (or consider a more selective log strategy)

(*) Of course this will consume network resources, but it is viewed as a tradeoff to keep things simple from a state management and heuristic perspective. If this is proven to consume non-negligible network resources for some use cases, I would first like to explore handling the heuristic of "not sending the RST_STREAM because I think this falls within the RTT of the previous frame" up in the application layer (e.g. override Http2ConnectionHandler#onStreamError) before considering baking this into Netty.

In reference to @ejona86's comment #8025 (comment) IIUC his suggestion is to ignore the receipt of RST_STREAM on a closed stream, but we already do that [1][2].

[1] https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java#L341-L342
[2] https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java#L349
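The "fail hard" heuristic from the first bullet above (bounding how many extra RST_STREAM frames we answer with before escalating to a connection error) could be sketched like this; the class and threshold are purely hypothetical and not part of Netty:

```java
// Illustrative counter for the "fail hard on misbehaving peers" heuristic:
// allow a bounded number of RST_STREAM responses to frames on closed
// streams, then signal that the connection should be failed instead.
final class ExtraRstStreamLimiter {
    private final int maxExtraRstStreams;
    private int sent;

    ExtraRstStreamLimiter(int maxExtraRstStreams) {
        this.maxExtraRstStreams = maxExtraRstStreams;
    }

    /**
     * Returns true if we may answer with another RST_STREAM, or false if
     * the caller should escalate to a connection error.
     */
    boolean trySendRstStream() {
        if (sent >= maxExtraRstStreams) {
            return false; // budget exhausted: treat the peer as misbehaving
        }
        sent++;
        return true;
    }
}
```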

Contributor Author

If the stream may have existed ... tracking/estimating RTT time is error prone and can be expensive for what is in large part a boundary condition. The RFC permits sending multiple RST_STREAM frames, doing so will have no negative implications from a protocol perspective*. A simpler approach would be to continue to send the RST_STREAM frames which provides positive acknowledgement for good peers that this frame was not accepted, and have some heuristic to "fail hard" (e.g. connection error) to catch the case of "misbehaving peers".

How would we trigger a RST_STREAM without throwing a stream error?

Contributor Author

@Scottmitch can you take a look at my latest comment?

Member

sorry was afk for a few days

How would we trigger a RST_STREAM without throwing a stream error?

throwing a StreamException in the interface based API (e.g. Http2ConnectionHandler, Http2ConnectionDecoder, Http2ConnectionEncoder, etc..) will generally result in a RST_STREAM being sent. Independent of this you can also always call writeRstStream on the Http2ConnectionDecoder.

The code I provided in the previous comment [1] #8028 (comment) should be sufficient to cover the "If the stream may have existed" and "if the stream was never in existence" cases. However, this doesn't cover the heuristic to close the connection if receiving a frame for a stream in a closed state should be a protocol error instead of a stream error, and I think we can handle that as a followup PR.

[1]

if (connection.streamMayHaveExisted(streamId)) {
  throw streamError(streamId, STREAM_CLOSED, "Received %s frame for an unknown stream %d", frameName, streamId);
} else {
  throw connectionError(PROTOCOL_ERROR, "Stream %d does not exist", streamId);
}

@@ -272,26 +273,34 @@ public void dataReadForUnknownStreamShouldApplyFlowControlAndFail() throws Excep
}
}

@Test(expected = Http2Exception.class)
Member

It would be good to verify which type of exception is being thrown in this case. For example, a StreamException doesn't seem appropriate here, but would still pass the test.
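One way to express the suggestion is to drop @Test(expected = ...) and assert the concrete exception class, so that a subclass such as a StreamException would fail the test. The exception types below are hypothetical stand-ins for Netty's Http2Exception hierarchy:

```java
// Sketch: assert the exact exception type rather than relying on
// @Test(expected = Http2Exception.class), which any subclass would satisfy.
// Http2Exception/StreamException here are local stand-ins, not Netty's classes.
final class ExceptionTypeCheck {
    static class Http2Exception extends Exception {}
    static class StreamException extends Http2Exception {}

    interface ThrowingRunnable { void run() throws Exception; }

    /** Returns true only if the body throws exactly Http2Exception (no subclass). */
    static boolean throwsExactConnectionError(ThrowingRunnable body) {
        try {
            body.run();
        } catch (Exception e) {
            return e.getClass() == Http2Exception.class;
        }
        return false; // nothing was thrown at all
    }
}
```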

Contributor Author

+1

try {
decode().onDataRead(ctx, STREAM_ID, data, padding, true);
fail();
} catch (Http2Exception e) {
Member

As previously discussed it seems like sending a RST_STREAM in this scenario is still useful to inform the remote end that the data was not accepted, and to update its local state.


@bryce-anderson
Contributor

I'm generally in agreement with @Scottmitch that sending a RST_STREAM is the most desirable behavior from a protocol perspective, though I do think the logging is aggressive. Can we change the log level of that exception? In general, a RST_STREAM with code STREAM_CLOSED (and some others, like CANCELED) is not that interesting and probably better defaulted to debug level. That said, logs could get complex to interpret if we start special-casing by error code.

I would like to keep the fixes to the PUSH_PROMISE behavior, namely that I think we're currently using the wrong stream id in shouldIgnoreHeadersOrDataFrame (we should be using the promised stream id) and we don't increment the last observed inbound id if we reset the promised stream. Of course that could be changed in a different PR.

@Scottmitch
Member

Scottmitch commented Jun 28, 2018

though I do think the logging is aggressive. Can we change the log level of that exception?

@bryce-anderson - If the log level is too high/noisy in Netty, I'm +1 for reducing it. However, based upon the issue description I thought the logging was being done at a higher level? (/cc @shorea)

@shorea
Contributor Author

shorea commented Jul 5, 2018

@Scottmitch @normanmaurer would it be possible to do a netty release this week? A colleague is waiting on another change to be released. I'll have to revisit this PR per Scott's comments, but we have a workaround for this particular issue.

@normanmaurer
Member

normanmaurer commented Jul 5, 2018 via email

@shorea
Contributor Author

shorea commented Jul 6, 2018

Thanks @normanmaurer !

@shorea
Contributor Author

shorea commented Jul 10, 2018

@normanmaurer I forgot to follow up on this. Would it still be possible to get a release out this week?

@normanmaurer
Member

@shorea yep... working on it as we speak.

@normanmaurer
Member

@shorea done https://netty.io/news/2018/07/10/4-1-26-Final.html

@normanmaurer
Member

@shorea any update ?

@shorea
Contributor Author

shorea commented Aug 7, 2018

@normanmaurer apologies for the long delay. I'll have some time this week to revisit Scott's comments.

@normanmaurer
Member

@shorea ok cool.... ping me once done

@goral09

goral09 commented Aug 27, 2018

Guys, what's the update on this? It's been a while and I keep seeing this in my project. Can we push this forward? Do you need any help?

@netty-bot

Can one of the admins verify this patch?

@Rigner

Rigner commented Sep 28, 2018

@shorea Any news about this issue? It has been open for months, and we would like to see it resolved as it can spam logs for production services.

@sebtomba

sebtomba commented Oct 8, 2018

This PR seems to be approved and checked. When can you merge it?

@normanmaurer
Member

@shorea any update ?

@Rigner

Rigner commented Nov 25, 2018

It has been 2 months without news from @shorea, which is not very professional.
Can't anyone else get their hands on this issue to get it finally reviewed and done? Some of us have been waiting for too long.

@gagahjt

gagahjt commented Nov 26, 2018

When will a version that solves this problem be released?

@normanmaurer
Member

Let me see if I can bring this over the finish line...

@Rigner

Rigner commented Dec 21, 2018

@normanmaurer Any news? Sorry to ask so often, but this should have been done ages ago and we never see any work on it; everyone says it is going to get resolved soon, but it hasn't been for years.
What is blocking it?

@jamel
Contributor

jamel commented Jan 17, 2019

Is there any chance to get this patch merged in the near future?

@EncodePanda

Really looking forward to having this merged :)

@rubensayshi

Is the last pending desired change the one suggested by @Scottmitch?

It seems nobody wants to apply those changes?
I can give it a shot ... I guess rebasing will be most of the effort at this point ...

Given the above criteria my preferred way to handle the scenario is the following:

If the stream may have existed ... tracking/estimating RTT time is error prone and can be expensive for what is in large part a boundary condition. The RFC permits sending multiple RST_STREAM frames, doing so will have no negative implications from a protocol perspective*. A simpler approach would be to continue to send the RST_STREAM frames which provides positive acknowledgement for good peers that this frame was not accepted, and have some heuristic to "fail hard" (e.g. connection error) to catch the case of "misbehaving peers".
if the stream was never in existence ... protocol error
reduce the log level in io.grpc.netty.NettyServerHandler for this case (or consider a more selective log strategy)

@normanmaurer
Member

This was re-opened as #9402 to make progress.

Development

Successfully merging this pull request may close these issues.

HTTP/2 - Race Condition with RST_STREAM