
NettyServerHandler closes connection when first go_away is ackd regardless of starting stream #5806

Closed
high-stakes opened this issue May 30, 2019 · 4 comments


high-stakes commented May 30, 2019

Scenario:

  1. The server is configured with a connection max age of 60 seconds and a grace period of 120 seconds.
  2. A client call/stream starts just as the max age (60 s) is reached on the connection.
  3. The client receives the first GO_AWAY, as it should.
  4. The client acks the GO_AWAY ping.
  5. As soon as the server sees the GO_AWAY ack ping, it sends out the second GO_AWAY and closes the connection, regardless of the grace period or pending streams.
  6. The client closes its active stream and receives an UNAVAILABLE error.
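For reference, the server-side settings from step 1 would be configured roughly like this (a sketch assuming grpc-netty on the classpath; the port and `UserServiceImpl` are illustrative placeholders):

```java
import java.util.concurrent.TimeUnit;

import io.grpc.Server;
import io.grpc.netty.NettyServerBuilder;

// Illustrative server setup matching the scenario: connections get a GOAWAY
// after 60 seconds, and open streams get up to 120 seconds to finish.
Server server = NettyServerBuilder.forPort(50051)
    .maxConnectionAge(60, TimeUnit.SECONDS)        // triggers the first GOAWAY
    .maxConnectionAgeGrace(120, TimeUnit.SECONDS)  // grace for in-flight RPCs
    .addService(new UserServiceImpl())             // hypothetical service impl
    .build()
    .start();
```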

What version of gRPC are you using?

1.20.0

What did you expect to see?

I expect the client to be able to finish its last stream it started before it received the first go_away request from the server. It should have grace_period time to do so.
There is a window where concurrency occurs however and a client call is able to start when the connection is about to close and the call is dropped without grace period being taken into account.

Attachments:

Client log:

{"time":"2019-05-30T09:37:25.048+00:00","level":"DEBUG","logger_name":"op.cl","thread_name":"ThreadPoolTaskExecutor-1","message":"====> User/HasAccess","c_svc":"User","c_op":"HasAccess"}
{"time":"2019-05-30T09:37:25.048+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyClientHandler","thread_name":"grpc-default-worker-ELG-1-6","message":"[id: 0xff6296dd, L:/192.168.192.23:43582 - R:user/192.168.192.18:50051] OUTBOUND HEADERS: streamId=191 headers=GrpcHttp2OutboundHeaders[:authority: user:50051, :path: /User/HasAccess, :method: POST, :scheme: http, content-type: application/grpc, te: trailers, user-agent: grpc-java-netty/1.20.0, grpc-accept-encoding: gzip] streamDependency=0 weight=16 exclusive=false padding=0 endStream=false"}
{"time":"2019-05-30T09:37:25.048+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyClientHandler","thread_name":"grpc-default-worker-ELG-1-6","message":"[id: 0xff6296dd, L:/192.168.192.23:43582 - R:user/192.168.192.18:50051] OUTBOUND DATA: streamId=191 padding=0 endStream=true length=63 bytes=..."}
{"time":"2019-05-30T09:37:25.062+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyClientHandler","thread_name":"grpc-default-worker-ELG-1-6","message":"[id: 0xff6296dd, L:/192.168.192.23:43582 - R:user/192.168.192.18:50051] INBOUND HEADERS: streamId=191 headers=GrpcHttp2ResponseHeaders[:status: 200, content-type: application/grpc, grpc-encoding: identity, grpc-accept-encoding: gzip] streamDependency=0 weight=16 exclusive=false padding=0 endStream=false"}
{"time":"2019-05-30T09:37:25.062+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyClientHandler","thread_name":"grpc-default-worker-ELG-1-6","message":"[id: 0xff6296dd, L:/192.168.192.23:43582 - R:user/192.168.192.18:50051] INBOUND DATA: streamId=191 padding=0 endStream=false length=139 bytes=..."}
{"time":"2019-05-30T09:37:25.062+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyClientHandler","thread_name":"grpc-default-worker-ELG-1-6","message":"[id: 0xff6296dd, L:/192.168.192.23:43582 - R:user/192.168.192.18:50051] INBOUND HEADERS: streamId=191 headers=GrpcHttp2ResponseHeaders[grpc-status: 0] streamDependency=0 weight=16 exclusive=false padding=0 endStream=true"}
{"time":"2019-05-30T09:37:25.063+00:00","level":"INFO","logger_name":"op.cl","thread_name":"ThreadPoolTaskExecutor-1","message":"<==== User/HasAccess","sC":"OK","dT":0.015,"c_svc":"User","c_op":"HasAccess"}
{"time":"2019-05-30T09:37:25.131+00:00","level":"DEBUG","logger_name":"op.cl","thread_name":"ThreadPoolTaskExecutor-1","message":"====> User/HasAccess","c_svc":"User","c_op":"HasAccess"}
{"time":"2019-05-30T09:37:25.135+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyClientHandler","thread_name":"grpc-default-worker-ELG-1-6","message":"[id: 0xff6296dd, L:/192.168.192.23:43582 - R:user/192.168.192.18:50051] INBOUND GO_AWAY: lastStreamId=2147483647 errorCode=0 length=7 bytes=6d61785f616765"}
{"time":"2019-05-30T09:37:25.135+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyClientHandler","thread_name":"grpc-default-worker-ELG-1-6","message":"[id: 0xff6296dd, L:/192.168.192.23:43582 - R:user/192.168.192.18:50051] INBOUND PING: ack=false bytes=40715087873"}
{"time":"2019-05-30T09:37:25.135+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyClientHandler","thread_name":"grpc-default-worker-ELG-1-6","message":"[id: 0xff6296dd, L:/192.168.192.23:43582 - R:user/192.168.192.18:50051] OUTBOUND PING: ack=true bytes=40715087873"}
{"time":"2019-05-30T09:37:25.136+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyClientHandler","thread_name":"grpc-default-worker-ELG-1-7","message":"[id: 0x9c1d83c6, L:/192.168.192.23:43906 - R:user/192.168.192.18:50051] OUTBOUND SETTINGS: ack=false settings={ENABLE_PUSH=0, MAX_CONCURRENT_STREAMS=0, INITIAL_WINDOW_SIZE=1048576, MAX_HEADER_LIST_SIZE=8192}"}
{"time":"2019-05-30T09:37:25.136+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyClientHandler","thread_name":"grpc-default-worker-ELG-1-7","message":"[id: 0x9c1d83c6, L:/192.168.192.23:43906 - R:user/192.168.192.18:50051] OUTBOUND WINDOW_UPDATE: streamId=0 windowSizeIncrement=983041"}
{"time":"2019-05-30T09:37:25.139+00:00","level":"INFO","logger_name":"op.cl","thread_name":"ThreadPoolTaskExecutor-1","message":"<==== User/HasAccess","sC":"UNAVAILABLE","dT":0.007,"c_svc":"User","c_op":"HasAccess"}
{"time":"2019-05-30T09:37:25.140+00:00","level":"ERROR","logger_name":"com.grpc.user.UserGrpc$UserBlockingStub","thread_name":"ThreadPoolTaskExecutor-1","message":"Received gRPC exception from gRPC call.","stack_trace":"io.grpc.StatusRuntimeException: UNAVAILABLE: HTTP/2 error code: NO_ERROR\nReceived Goaway\nmax_age\n\...","status":"UNAVAILABLE","error":"io.grpc.StatusRuntimeException: UNAVAILABLE: HTTP/2 error code: NO_ERROR\nReceived Goaway\nmax_age"}
{"time":"2019-05-30T09:37:25.143+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyClientHandler","thread_name":"grpc-default-worker-ELG-1-7","message":"[id: 0x9c1d83c6, L:/192.168.192.23:43906 - R:user/192.168.192.18:50051] INBOUND SETTINGS: ack=false settings={MAX_CONCURRENT_STREAMS=2147483647, INITIAL_WINDOW_SIZE=1048576, MAX_HEADER_LIST_SIZE=8192}"}
{"time":"2019-05-30T09:37:25.144+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyClientHandler","thread_name":"grpc-default-worker-ELG-1-7","message":"[id: 0x9c1d83c6, L:/192.168.192.23:43906 - R:user/192.168.192.18:50051] OUTBOUND SETTINGS: ack=true"}
{"time":"2019-05-30T09:37:25.144+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyClientHandler","thread_name":"grpc-default-worker-ELG-1-7","message":"[id: 0x9c1d83c6, L:/192.168.192.23:43906 - R:user/192.168.192.18:50051] INBOUND WINDOW_UPDATE: streamId=0 windowSizeIncrement=983041"}
{"time":"2019-05-30T09:37:25.144+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyClientHandler","thread_name":"grpc-default-worker-ELG-1-7","message":"[id: 0x9c1d83c6, L:/192.168.192.23:43906 - R:user/192.168.192.18:50051] INBOUND SETTINGS: ack=true"}
{"time":"2019-05-30T09:37:25.148+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyClientHandler","thread_name":"grpc-default-worker-ELG-1-6","message":"[id: 0xff6296dd, L:/192.168.192.23:43582 - R:user/192.168.192.18:50051] INBOUND GO_AWAY: lastStreamId=191 errorCode=0 length=7 bytes=6d61785f616765"}
{"time":"2019-05-30T09:37:25.148+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyClientHandler","thread_name":"grpc-default-worker-ELG-1-6","message":"Network channel is closed"}

Server logs (with trace ids):

{"time":"2019-05-30T09:37:25.050+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyServerHandler","thread_name":"grpc-default-worker-ELG-1-5","message":"[id: 0xa677b58f, L:/192.168.192.18:50051 - R:/192.168.192.23:43582] INBOUND HEADERS: streamId=191 headers=GrpcHttp2RequestHeaders[:path: /User/HasAccess, :authority: user:50051, :method: POST, :scheme: http, te: trailers, content-type: application/grpc, user-agent: grpc-java-netty/1.20.0, trace_id: wpcm3n6iqri1io3vdal8, span_id: raGFJuga3LuF, parent_id: thnlddhbm, grpc-accept-encoding: gzip] streamDependency=0 weight=16 exclusive=false padding=0 endStream=false"}
{"time":"2019-05-30T09:37:25.050+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyServerHandler","thread_name":"grpc-default-worker-ELG-1-5","message":"[id: 0xa677b58f, L:/192.168.192.18:50051 - R:/192.168.192.23:43582] INBOUND DATA: streamId=191 padding=0 endStream=true length=63 bytes=..."}
{"time":"2019-05-30T09:37:25.061+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyServerHandler","thread_name":"grpc-default-worker-ELG-1-5","message":"[id: 0xa677b58f, L:/192.168.192.18:50051 - R:/192.168.192.23:43582] OUTBOUND HEADERS: streamId=191 headers=GrpcHttp2OutboundHeaders[:status: 200, content-type: application/grpc, grpc-encoding: identity, grpc-accept-encoding: gzip] streamDependency=0 weight=16 exclusive=false padding=0 endStream=false"}
{"time":"2019-05-30T09:37:25.061+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyServerHandler","thread_name":"grpc-default-worker-ELG-1-5","message":"[id: 0xa677b58f, L:/192.168.192.18:50051 - R:/192.168.192.23:43582] OUTBOUND DATA: streamId=191 padding=0 endStream=false length=139 bytes=..."}
{"time":"2019-05-30T09:37:25.061+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyServerHandler","thread_name":"grpc-default-worker-ELG-1-5","message":"[id: 0xa677b58f, L:/192.168.192.18:50051 - R:/192.168.192.23:43582] OUTBOUND HEADERS: streamId=191 headers=GrpcHttp2OutboundHeaders[grpc-status: 0] streamDependency=0 weight=16 exclusive=false padding=0 endStream=true"}
{"time":"2019-05-30T09:37:25.073+00:00","level":"DEBUG","logger_name":"op.sl","thread_name":"grpc-default-executor-123","message":">>>>> User.HasAccess()","reqid":"qfx8zafmz1nuvplql3r8o","spanid":"1FbpO2WIB4iC","parid":"6p1ltl90xf","svc":"User","op":"HasAccess"}
{"time":"2019-05-30T09:37:25.080+00:00","level":"INFO","logger_name":"op.sl","thread_name":"grpc-default-executor-123","message":"<<<<< User.HasAccess() = <only included on TRACE>","dT":0.007,"status":"OK","reqid":"qfx8zafmz1nuvplql3r8o","spanid":"1FbpO2WIB4iC","parid":"6p1ltl90xf","svc":"User","op":"HasAccess"}
{"time":"2019-05-30T09:37:25.081+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyServerHandler","thread_name":"grpc-default-worker-ELG-1-1","message":"[id: 0xf94f3506, L:/192.168.192.18:50051 - R:/192.168.192.20:52854] OUTBOUND HEADERS: streamId=15 headers=GrpcHttp2OutboundHeaders[:status: 200, content-type: application/grpc, grpc-encoding: identity, grpc-accept-encoding: gzip] streamDependency=0 weight=16 exclusive=false padding=0 endStream=false"}
{"time":"2019-05-30T09:37:25.081+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyServerHandler","thread_name":"grpc-default-worker-ELG-1-1","message":"[id: 0xf94f3506, L:/192.168.192.18:50051 - R:/192.168.192.20:52854] OUTBOUND DATA: streamId=15 padding=0 endStream=false length=158 bytes=..."}
{"time":"2019-05-30T09:37:25.081+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyServerHandler","thread_name":"grpc-default-worker-ELG-1-1","message":"[id: 0xf94f3506, L:/192.168.192.18:50051 - R:/192.168.192.20:52854] OUTBOUND HEADERS: streamId=15 headers=GrpcHttp2OutboundHeaders[grpc-status: 0] streamDependency=0 weight=16 exclusive=false padding=0 endStream=true"}
{"time":"2019-05-30T09:37:25.102+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyServerHandler","thread_name":"grpc-default-worker-ELG-1-1","message":"[id: 0xf94f3506, L:/192.168.192.18:50051 - R:/192.168.192.20:52854] INBOUND DATA: streamId=17 padding=0 endStream=true length=43 bytes=..."}
{"time":"2019-05-30T09:37:25.114+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyServerHandler","thread_name":"grpc-default-worker-ELG-1-1","message":"[id: 0xf94f3506, L:/192.168.192.18:50051 - R:/192.168.192.20:52854] OUTBOUND HEADERS: streamId=17 headers=GrpcHttp2OutboundHeaders[:status: 200, content-type: application/grpc, grpc-encoding: identity, grpc-accept-encoding: gzip] streamDependency=0 weight=16 exclusive=false padding=0 endStream=false"}
{"time":"2019-05-30T09:37:25.114+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyServerHandler","thread_name":"grpc-default-worker-ELG-1-1","message":"[id: 0xf94f3506, L:/192.168.192.18:50051 - R:/192.168.192.20:52854] OUTBOUND DATA: streamId=17 padding=0 endStream=false length=158 bytes=..."}
{"time":"2019-05-30T09:37:25.114+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyServerHandler","thread_name":"grpc-default-worker-ELG-1-1","message":"[id: 0xf94f3506, L:/192.168.192.18:50051 - R:/192.168.192.20:52854] OUTBOUND HEADERS: streamId=17 headers=GrpcHttp2OutboundHeaders[grpc-status: 0] streamDependency=0 weight=16 exclusive=false padding=0 endStream=true"}
{"time":"2019-05-30T09:37:25.129+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyServerHandler","thread_name":"grpc-default-worker-ELG-1-3","message":"[id: 0xfa58265a, L:/192.168.192.18:50051 - R:/192.168.192.19:36820] INBOUND HEADERS: streamId=17 headers=GrpcHttp2RequestHeaders[:path: /User/HasAccess, :authority: user:50051, :method: POST, :scheme: http, te: trailers, content-type: application/grpc, user-agent: grpc-java-netty/1.20.0, trace_id: qfx8zafmz1nuvplql3r8o, span_id: pRgP3XwrZuF4, parent_id: 6p1ltl90xf, grpc-accept-encoding: gzip] streamDependency=0 weight=16 exclusive=false padding=0 endStream=false"}
{"time":"2019-05-30T09:37:25.130+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyServerHandler","thread_name":"grpc-default-worker-ELG-1-3","message":"[id: 0xfa58265a, L:/192.168.192.18:50051 - R:/192.168.192.19:36820] INBOUND DATA: streamId=17 padding=0 endStream=true length=73 bytes=..."}
{"time":"2019-05-30T09:37:25.130+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyServerHandler","thread_name":"grpc-default-worker-ELG-1-5","message":"[id: 0xa677b58f, L:/192.168.192.18:50051 - R:/192.168.192.23:43582] OUTBOUND GO_AWAY: lastStreamId=2147483647 errorCode=0 length=7 bytes=6d61785f616765"}
{"time":"2019-05-30T09:37:25.130+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyServerHandler","thread_name":"grpc-default-worker-ELG-1-5","message":"[id: 0xa677b58f, L:/192.168.192.18:50051 - R:/192.168.192.23:43582] OUTBOUND PING: ack=false bytes=40715087873"}
{"time":"2019-05-30T09:37:25.134+00:00","level":"DEBUG","logger_name":"op.sl","thread_name":"grpc-default-executor-123","message":">>>>> User.HasAccess()","reqid":"qfx8zafmz1nuvplql3r8o","spanid":"pRgP3XwrZuF4","parid":"6p1ltl90xf","svc":"User","op":"HasAccess"}
{"time":"2019-05-30T09:37:25.143+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyServerHandler","thread_name":"grpc-default-worker-ELG-1-5","message":"[id: 0xa677b58f, L:/192.168.192.18:50051 - R:/192.168.192.23:43582] INBOUND PING: ack=true bytes=40715087873"}
{"time":"2019-05-30T09:37:25.143+00:00","level":"DEBUG","logger_name":"io.grpc.netty.NettyServerHandler","thread_name":"grpc-default-worker-ELG-1-5","message":"[id: 0xa677b58f, L:/192.168.192.18:50051 - R:/192.168.192.23:43582] OUTBOUND GO_AWAY: lastStreamId=191 errorCode=0 length=7 bytes=6d61785f616765"}
{"time":"2019-05-30T09:37:25.143+00:00","level":"INFO","logger_name":"op.sl","thread_name":"grpc-default-executor-123","message":"<<<<< User.HasAccess() = <only included on TRACE>","dT":0.008,"status":"OK","reqid":"qfx8zafmz1nuvplql3r8o","spanid":"pRgP3XwrZuF4","parid":"6p1ltl90xf","svc":"User","op":"HasAccess"}

NettyServerHandler related part:

    public void onPingAckRead(ChannelHandlerContext ctx, long data) throws Http2Exception {
      if (keepAliveManager != null) {
        keepAliveManager.onDataReceived();
      }
      if (data == flowControlPing().payload()) {
        flowControlPing().updateWindow();
        if (logger.isLoggable(Level.FINE)) {
          logger.log(Level.FINE, String.format("Window: %d",
              decoder().flowController().initialWindowSize(connection().connectionStream())));
        }
      } else if (data == GRACEFUL_SHUTDOWN_PING) {
        if (gracefulShutdown == null) {
          // this should never happen
          logger.warning("Received GRACEFUL_SHUTDOWN_PING Ack but gracefulShutdown is null");
        } else {
          gracefulShutdown.secondGoAwayAndClose(ctx);
        }
      } else if (data != KEEPALIVE_PING) {
        logger.warning("Received unexpected ping ack. No ping outstanding");
      }
    }
@high-stakes high-stakes changed the title NettyServerHandler closes connection when first go_away is ackd regardless of open stream NettyServerHandler closes connection when first go_away is ackd regardless of starting stream May 30, 2019
@carl-mastrangelo
Contributor

Streams 191, 15, and 17 should all complete normally. I am guessing the failed RPC is not one of those; rather, it's due to a race between the application starting new RPCs and the transport notifying the LoadBalancer that the transport is in TRANSIENT_FAILURE.

The GOAWAY frames (and PINGs) are correct. The first GOAWAY carries the max stream id to ensure that any in-flight RPCs make it in. The ping-pong ensures that the client has actually seen the first GOAWAY (so it can send RPCs elsewhere). The final GOAWAY carries the last stream id that was started, which means that no new RPCs should be started on the transport.

The grace period is how long the server waits for the ping ack before more aggressively sending the second GOAWAY.
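The handshake described above can be sketched as a tiny state machine (plain Java, no gRPC dependency; the class and field names are made up for illustration, not grpc-java internals):

```java
// Minimal sketch of the graceful-shutdown handshake: first GOAWAY with
// lastStreamId = 2^31-1, a PING to confirm the client saw it, then a second
// GOAWAY capped at the highest stream id actually started.
final class GracefulShutdownSketch {
  static final int MAX_STREAM_ID = Integer.MAX_VALUE; // 2147483647, as in the logs

  int lastStartedStreamId;          // highest stream id the server has seen
  Integer firstGoAwayLastStreamId;  // null until the first GOAWAY is sent
  Integer secondGoAwayLastStreamId; // null until the ping is acked

  void startStream(int streamId) {
    // New streams are only admitted before the second GOAWAY.
    if (secondGoAwayLastStreamId == null) {
      lastStartedStreamId = Math.max(lastStartedStreamId, streamId);
    }
  }

  void beginGracefulShutdown() {
    // First GOAWAY advertises the max stream id so in-flight RPCs survive.
    firstGoAwayLastStreamId = MAX_STREAM_ID;
    // ...the server also sends a PING here and waits for the ack.
  }

  void onPingAck() {
    // Second GOAWAY pins lastStreamId to what was actually started.
    secondGoAwayLastStreamId = lastStartedStreamId;
  }
}
```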

I believe the error you are seeing is due to the LoadBalancer / transport race, where it's possible for RPCs to get assigned a transport after it has begun its shutdown. The fix for this race is for retries to be enabled, which can look at the error code, determine that the RPC headers have not yet been sent, and reschedule the RPC on a READY transport.

I'm not sure there is much else to do here, other than increase the priority of retries being turned on.

@high-stakes
Author

high-stakes commented May 30, 2019

Thanks @carl-mastrangelo for the quick reply and for clarifying the grace-period ack behavior. I managed to reproduce something similar in a blank project under high load, but as you said, the logs do not tell the whole story (at least to me). That setup does not use a load balancer (and has only a single endpoint), while the one above used a client-side load balancer with a round-robin strategy, also against a single endpoint during testing. I assume the same race condition can still apply? This is disconcerting for anyone using client-side load balancing, as we rely on connection max_age being set: we cannot say that even under perfect network conditions a call will succeed.

Could a configurable delay before sending the GO_AWAY ack (while marking the connection as unavailable) account for any calls coming in during this window, without needing retries?
I am asking because I assume retries need a different approach and should only be used for idempotent calls, or for status codes that do not result in server-side service execution. What I noticed, however, is that we also get status code UNAVAILABLE when a client call exceeds the connection grace period and gets dropped as a result. In that case the server finishes executing the service method regardless, but the client receives UNAVAILABLE, so those calls would also need to be idempotent.

@zhangkun83
Contributor

I believe the error you are seeing is due to the LoadBalancer / transport race, where it's possible for RPCs to get assigned a transport after it has begun its shutdown. The fix for this race is for retries to be enabled, which can look at the error code, determine that the RPC headers have not yet been sent, and reschedule the RPC on a READY transport.

If it's the case, this issue is a duplicate of #2562

@zhangkun83
Contributor

Re-examined the logs from the original post. Only the server port 43582 had the GO_AWAY, so the logs would be clearer if you only search for 43582. It appears the client ack'ed the first GO_AWAY right after stream 191 finished, and the UNAVAILABLE error appears after that. This does look like the race described in #2562.

That setup does not use a load balancer (and has only a single endpoint), while the one above used a client-side load balancer with a round-robin strategy, also against a single endpoint during testing. I assume the same race condition can still apply?

Yes, it applies to the Channel implementation in general. A pick_first "LoadBalancer" is used when there is a single endpoint.

Could a configurable delay before sending the GO_AWAY ack (while marking the connection as unavailable) account for any calls coming in during this window, without needing retries?

Per a conversation with @carl-mastrangelo, the client is obligated to ack immediately after receiving the first GO_AWAY, otherwise it would be a violation of the protocol.

I am asking because I assume retries need a different approach and should only be used for idempotent calls, or for status codes that do not result in server-side service execution.

The transparent retry would be activated only when we are sure the client has not sent anything onto the network. It is supposed to be invisible to the user, and we think it is the right approach to fix the race. To the user it should appear as if the race didn't happen.
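Opting into retries is a single builder call on the channel (a sketch; the `user:50051` target and plaintext transport are taken from the logs, and the retry mechanism was still experimental in the 1.20.x line):

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

// Retries (including the transparent kind described above) must be
// enabled on the channel; they are off by default in this era of grpc-java.
ManagedChannel channel = ManagedChannelBuilder.forTarget("user:50051")
    .usePlaintext()
    .enableRetry() // allows rescheduling RPCs that never hit the wire
    .build();
```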

What I noticed, however, is that we also get status code UNAVAILABLE when a client call exceeds the connection grace period and gets dropped as a result. In that case the server finishes executing the service method regardless, but the client receives UNAVAILABLE, so those calls would also need to be idempotent.

The server does cancel the RPC when the grace period is exceeded. The application handler should receive the notification from StreamObserver.onError() (available only with client streaming), or from the fact that the current Context is cancelled, which can be observed by registering a listener via Context.current().addListener(). gRPC doesn't interrupt the application thread.
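Observing that cancellation from inside a handler could look roughly like this (a sketch assuming grpc-java on the classpath; the direct executor and log line are illustrative):

```java
import java.util.concurrent.Executor;

import io.grpc.Context;

// Inside a service method: get notified when gRPC cancels the current RPC,
// e.g. because the connection's grace period expired.
Executor directExecutor = Runnable::run; // illustrative; pick a real executor in production
Context.current().addListener(
    (Context context) -> {
      // Runs when the RPC's context is cancelled; stop work / roll back here.
      System.err.println("RPC cancelled: " + context.cancellationCause());
    },
    directExecutor);
```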

I am closing this issue in favor of #2562. Please feel free to reopen if I misinterpreted it.

@lock lock bot locked as resolved and limited conversation to collaborators Sep 30, 2019