OkHttpChannelBuilder.flowControlWindow(int) isn't working #6685

Closed

chrisschek opened this issue Feb 6, 2020 · 8 comments

@chrisschek (Contributor)

What version of gRPC-Java are you using?

Tried this with all of:

  • 1.24.1
  • 1.26.0
  • 1.27.0

What is your environment?

Both:

  • running locally on MacOSX 10.14.4
  • running in Kubernetes - Ubuntu 18 image on Ubuntu 18 workers

JDK: Zulu, Java 8

What did you expect to see?

During server-side streaming RPCs, increasing the client's flow control window size should improve the rate at which the client can receive messages (particularly when the connection has some latency).

What did you see instead?

When using the OkHttp client specifically, the above expectation holds true as long as the flow control window size is somewhere between its default value and double the default value: within that range, the rate at which the client can receive messages scales linearly with the size of the flow control window. However, going even a single byte above that range results in the client receiving only a few messages before it stops receiving messages entirely. Even when the RPC stops due to its deadline and a new streaming RPC is started over the same connection, no more messages are sent.

Steps to reproduce the bug

Though we first encountered this in our production environment, we've been able to recreate it in a controlled test environment like this:

  • a client that initiates the server-side stream and immediately discards messages as they're received
  • a server that produces a stream of uniformly-sized messages (~100 bytes) consisting of a random ByteString wrapped in a protocol buffer, sending them as fast as possible
  • a proxy sitting between the two that adds latency to all network communication (we're using toxiproxy)
  • all three components run in separate processes on the same machine (just a developer's workstation)

Here are our protos:

service TestService {
    rpc TestStream (StreamRequest) returns (stream StreamItem) {}
}

message StreamRequest {
    // technically have some stuff in here, but leave everything unset for this test
}

message StreamItem {
    bytes random_bytes = 1;
    // some other fields that aren't set for this test
}

Here's how the OkHttp channel is created:

ManagedChannel channel = OkHttpChannelBuilder.forTarget("localhost:9801")
                    .usePlaintext()
                    .flowControlWindow(flowControlWindowBytes)
                    .build();

Experiment results:

| Latency | Flow control window size | Message send rate |
| --- | --- | --- |
| <1 ms | 65535 (default) | 200k msg/s |
| 100 ms | 65535 (default) | 2k msg/s |
| 100 ms | 131070 (2× default) | 4k msg/s |
| <1 ms | anything >131070 | sends ~500 messages, then stalls forever |
| 100 ms | anything >131070 | sends ~500 messages, then stalls forever |

Other observations:

  • Enabling as much logging as possible (e.g. looking at HTTP/2 frame logs) doesn't show anything useful: the server sends data until it just stops, and there aren't any logs at that point
  • As shown in the above table, the drop in send rate from adding 100ms is surprisingly drastic (99% slower)
  • We repeated the same experiments with Netty. Netty is significantly slower in the ideal case (40k msg/s compared to 200k msg/s) but doesn't experience nearly the same drop due to high latency (20k msg/s). We can also increase the flowControlWindow past 2*default without detrimental effects.
  • We found some mention of BDP in this blog post (yes, I know this is for grpc-go). From a quick code search, it appears as though BDP is implemented in the grpc-java netty client code, but not in okhttp. Can't tell if this is actually related to the problems we're seeing though.
chrisschek added the bug label on Feb 6, 2020
@ejona86 (Member) commented Feb 10, 2020

OS X and Kubernetes. We would encourage using Netty on those platforms. I assume you aren't because of the performance. ⅒th the performance is very surprising. That needs to be fixed. Would you mind filing a bug about OkHttp being 10x as fast as Netty?

On the OkHttp side of things: I expect this is a flow-control accounting bug. Those are pretty annoying to track down... At ~100 bytes and ~500 messages... that probably stalls after 64 KB of data, which should be easier to pinpoint.
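
Rough arithmetic behind that guess (the ~128 B on-the-wire size per message is an assumption, covering the ~100-byte payload plus gRPC and HTTP/2 framing): ~500 msgs × ~128 B ≈ 64 KB, i.e. one default connection flow-control window.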

> As shown in the above table, the drop in send rate from adding 100ms is surprisingly drastic (99% slower)

Networks where OkHttp tends to live just need 64 KB. With the added latency, the BDP changed from ~64 KB to ~6.4 MB. 6.4 MB is very high, and really only seen with expensive cross-continental connections. Using a window of 64 KB on a 6.4 MB BDP link will slow you down to 64 KB / 6.4 MB = 1% of what is available.
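
A back-of-the-envelope check of that 1% figure (a minimal sketch; the bandwidth is an assumed value implied by a ~64 KB BDP at ~1 ms RTT):

public class BdpMath {
  public static void main(String[] args) {
    double bandwidth = 64 * 1024 * 1000;  // ~64 MB/s, assumed from a ~64 KB BDP at ~1 ms RTT
    double rttSec = 0.100;                // 100 ms of injected latency
    double bdp = bandwidth * rttSec;      // ~6.4 MB bandwidth-delay product
    double window = 64 * 1024;            // roughly the 65535-byte default flow-control window
    System.out.printf("usable fraction of the link: %.0f%%%n", 100 * window / bdp);  // prints ~1%
  }
}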

As you mentioned we do know (this doc is referenced in the grpc-go blog post; I wrote it) how to auto-tune based on BDP, but we've not even turned that on for Netty. It is implemented (and was actually implemented before any other language), but not enabled. Netty defaults to a 1 MB window (because common cloud datacenter networks need 1 MB), which has apparently been fine for people (apparently people don't care about extra memory use in Java... 😢).
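
For comparison, the same knob exists on the Netty transport; a minimal sketch (the target and window size here are placeholders, not values from this thread):

import io.grpc.ManagedChannel;
import io.grpc.netty.NettyChannelBuilder;

public class NettyWindowExample {
  public static void main(String[] args) {
    ManagedChannel channel = NettyChannelBuilder.forTarget("localhost:9801")
        .usePlaintext()
        .flowControlWindow(4 * 1024 * 1024)  // e.g. 4 MB for a high-BDP link; Netty's default is 1 MB
        .build();
    channel.shutdownNow();  // a real client would issue calls before shutting down
  }
}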

@chrisschek (Contributor, Author)

Your assumption is correct, we switched to OkHttp for its speed. Point of clarification: we benchmarked it as 5x faster rather than 10x (200k msg/s for OkHttp vs. 40k msg/s for Netty). Opened issue #6696 for that.

Thanks for the clarification around BDP. Couldn't find any info about its Java implementation besides the code itself.

@ejona86 (Member) commented Feb 11, 2020

I tried to reproduce this in AbstractInteropTest and failed. It is unclear why. I reproduced it by just making my own binary. It hangs somewhere between 512 and 1024 messages, which is ~51,200 to ~102,400 bytes.

I see this and then it hangs (and changing window no longer hangs):

$ gradle installDist --include-build .. && ./build/install/examples/bin/okhttpflowcontrol
*** Android SDK is required. To avoid building Android projects, set -PskipAndroid=true

> Configure project :grpc:grpc-compiler
*** Building codegen requires Protobuf version 3.11.0
*** Please refer to https://github.com/grpc/grpc-java/blob/master/COMPILING.md#how-to-build-code-generation-plugin

Deprecated Gradle features were used in this build, making it incompatible with Gradle 6.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/5.6.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD SUCCESSFUL in 1s
15 actionable tasks: 3 executed, 12 up-to-date
count: 1
count: 2
count: 4
count: 8
count: 16
count: 32
count: 64
count: 128
count: 256
count: 512
diff --git a/examples/build.gradle b/examples/build.gradle
index 56a80523f..ea070a027 100644
--- a/examples/build.gradle
+++ b/examples/build.gradle
@@ -34,6 +34,7 @@ dependencies {
     // examples/advanced need this for JsonFormat
     implementation "com.google.protobuf:protobuf-java-util:${protobufVersion}"
 
+    implementation "io.grpc:grpc-okhttp:${grpcVersion}"
     runtimeOnly "io.grpc:grpc-netty-shaded:${grpcVersion}"
 
     testImplementation "io.grpc:grpc-testing:${grpcVersion}"
@@ -112,6 +113,13 @@ task compressingHelloWorldClient(type: CreateStartScripts) {
     classpath = startScripts.classpath
 }
 
+task okhttpflowcontrol(type: CreateStartScripts) {
+    mainClassName = 'OkHttpFlowControl'
+    applicationName = 'okhttpflowcontrol'
+    outputDir = new File(project.buildDir, 'tmp')
+    classpath = startScripts.classpath
+}
+
 applicationDistribution.into('bin') {
     from(routeGuideServer)
     from(routeGuideClient)
@@ -120,5 +128,6 @@ applicationDistribution.into('bin') {
     from(hedgingHelloWorldClient)
     from(hedgingHelloWorldServer)
     from(compressingHelloWorldClient)
+    from(okhttpflowcontrol)
     fileMode = 0755
 }
diff --git a/examples/src/main/java/OkHttpFlowControl.java b/examples/src/main/java/OkHttpFlowControl.java
new file mode 100644
index 000000000..2f2e88b52
--- /dev/null
+++ b/examples/src/main/java/OkHttpFlowControl.java
@@ -0,0 +1,76 @@
+import io.grpc.ManagedChannel;
+import io.grpc.ManagedChannelBuilder;
+import io.grpc.Server;
+import io.grpc.ServerBuilder;
+import io.grpc.examples.manualflowcontrol.HelloReply;
+import io.grpc.examples.manualflowcontrol.HelloRequest;
+import io.grpc.examples.manualflowcontrol.StreamingGreeterGrpc;
+import io.grpc.okhttp.OkHttpChannelBuilder;
+import io.grpc.stub.ServerCallStreamObserver;
+import io.grpc.stub.StreamObserver;
+import java.net.InetSocketAddress;
+
+public final class OkHttpFlowControl {
+  public static void main(String[] args) throws Exception {
+    Server server = ServerBuilder.forPort(0)
+        .addService(new ForeverGreeting())
+        .build()
+        .start();
+    InetSocketAddress address = (InetSocketAddress) server.getListenSockets().get(0);
+    ManagedChannel channel = OkHttpChannelBuilder
+        .forAddress(address.getAddress().getHostAddress(), address.getPort())
+        .usePlaintext()
+        .flowControlWindow(131070 + 100)
+        .build();
+    StreamingGreeterGrpc.newStub(channel)
+        .sayHelloStreaming(new StreamObserver<HelloReply>() {
+          int count;
+          @Override public void onNext(HelloReply reply) {
+            count++;
+            if (Integer.bitCount(count) == 1) {
+              System.out.println("count: " + count);
+            }
+          }
+          @Override public void onCompleted() {}
+          @Override public void onError(Throwable t) {}
+        })
+        .onCompleted();
+    while (true) {
+      Thread.sleep(1000000);
+    }
+  }
+
+  static class ForeverGreeting extends StreamingGreeterGrpc.StreamingGreeterImplBase {
+    private static final HelloReply REPLY
+        = HelloReply.newBuilder().setMessage(String.format("%100s", "")).build();
+
+    @Override
+    public StreamObserver<HelloRequest> sayHelloStreaming(
+        final StreamObserver<HelloReply> responseObserverGeneric) {
+      final ServerCallStreamObserver<HelloReply> responseObserver =
+          (ServerCallStreamObserver<HelloReply>) responseObserverGeneric;
+      class Stream implements StreamObserver<HelloRequest> {
+        @Override public void onNext(HelloRequest request) {}
+        @Override public void onCompleted() {}
+        @Override public void onError(Throwable t) {}
+
+        public void onReady() {
+          while (!responseObserver.isCancelled() && responseObserver.isReady()) {
+            responseObserver.onNext(REPLY);
+          }
+        }
+      }
+      final Stream stream = new Stream();
+      responseObserver.setOnCancelHandler(new Runnable() {
+        @Override public void run() {}
+      });
+      responseObserver.setOnReadyHandler(new Runnable() {
+        @Override public void run() {
+          stream.onReady();
+        }
+      });
+      stream.onReady();
+      return stream;
+    }
+  }
+}

@ejona86 (Member) commented Feb 11, 2020

This was really basic. We weren't informing the remote that we used different settings, so it was always using 64 KB (and so wouldn't send enough data for it to be worth us sending a window update).

Whelp. I guess that makes it clear nobody has configured the window option... Also, I see some suspicious behavior where the receive-side window size is being passed to the send-side OutboundFlowController. That seems likely to be broken as well (although maybe it is only broken if you change the default from 64KB? We know Netty uses 1MB...).

diff --git a/okhttp/src/main/java/io/grpc/okhttp/OkHttpClientTransport.java b/okhttp/src/main/java/io/grpc/okhttp/OkHttpClientTransport.java
index b238b9237..291431780 100644
--- a/okhttp/src/main/java/io/grpc/okhttp/OkHttpClientTransport.java
+++ b/okhttp/src/main/java/io/grpc/okhttp/OkHttpClientTransport.java
@@ -608,7 +608,10 @@ class OkHttpClientTransport implements ConnectionClientTransport, TransportExcep
       synchronized (lock) {
         frameWriter.connectionPreface();
         Settings settings = new Settings();
+        // TODO: WTH are persist and persist value??
+        settings.set(7, 0, initialWindowSize);
         frameWriter.settings(settings);
+        frameWriter.windowUpdate(0, initialWindowSize - 64*1024);
       }
     } finally {
       latch.countDown();

@ejona86 (Member) commented Feb 11, 2020

I was able to reproduce this with AbstractTransportTest. I don't know why it wasn't hanging in my earlier attempt.

diff --git a/interop-testing/src/test/java/io/grpc/testing/integration/Http2OkHttpTest.java b/interop-testing/src/test/java/io/grpc/testing/integration/Http2OkHttpTest.java
index 927b0ed44..07578ea45 100644
--- a/interop-testing/src/test/java/io/grpc/testing/integration/Http2OkHttpTest.java
+++ b/interop-testing/src/test/java/io/grpc/testing/integration/Http2OkHttpTest.java
@@ -95,6 +95,7 @@ public class Http2OkHttpTest extends AbstractInteropTest {
     int port = ((InetSocketAddress) getListenAddress()).getPort();
     OkHttpChannelBuilder builder = OkHttpChannelBuilder.forAddress("localhost", port)
         .maxInboundMessageSize(AbstractInteropTest.MAX_MESSAGE_SIZE)
+        .flowControlWindow(256*1024)
         .connectionSpec(new ConnectionSpec.Builder(ConnectionSpec.MODERN_TLS)
             .cipherSuites(TestUtils.preferredTestCiphers().toArray(new String[0]))
             .build())

@chrisschek (Contributor, Author)

Thanks @ejona86, using that patch on OkHttpClientTransport has things running more smoothly now.

Although this may have uncovered another issue: specifically when doing the same test as above, but this time connecting via a Kubernetes nginx Ingress, the following error occurs on the client:

15:29:26.794 [grpc-default-executor-0] DEBUG io.grpc.okhttp.internal.framed.Http2$FrameLogger - >> CONNECTION 505249202a20485454502f322e300d0a0d0a534d0d0a0d0a
15:29:26.801 [grpc-default-executor-0] DEBUG io.grpc.okhttp.OkHttpClientTransport - OUTBOUND SETTINGS: ack=false settings={INITIAL_WINDOW_SIZE=130710}
15:29:26.809 [grpc-default-executor-0] DEBUG io.grpc.okhttp.internal.framed.Http2$FrameLogger - >> 0x00000000     6 SETTINGS
15:29:26.809 [grpc-default-executor-0] DEBUG io.grpc.okhttp.OkHttpClientTransport - OUTBOUND WINDOW_UPDATE: streamId=0 windowSizeIncrement=65174
15:29:26.809 [grpc-default-executor-0] DEBUG io.grpc.okhttp.internal.framed.Http2$FrameLogger - >> 0x00000000     4 WINDOW_UPDATE
15:29:27.222 [OkHttpClientTransport] DEBUG io.grpc.okhttp.internal.framed.Http2$FrameLogger - << 0x00000000    18 SETTINGS
15:29:27.222 [OkHttpClientTransport] DEBUG io.grpc.okhttp.OkHttpClientTransport - INBOUND SETTINGS: ack=false settings={MAX_CONCURRENT_STREAMS=128, MAX_FRAME_SIZE=16777215, INITIAL_WINDOW_SIZE=65536}
15:29:27.224 [OkHttpClientTransport] DEBUG io.grpc.okhttp.OkHttpClientTransport - OUTBOUND SETTINGS: ack=true
15:29:27.224 [OkHttpClientTransport] DEBUG io.grpc.okhttp.internal.framed.Http2$FrameLogger - >> 0x00000000     0 SETTINGS      ACK
15:29:27.225 [OkHttpClientTransport] DEBUG io.grpc.okhttp.internal.framed.Http2$FrameLogger - << 0x00000000     4 WINDOW_UPDATE
15:29:27.225 [OkHttpClientTransport] DEBUG io.grpc.okhttp.OkHttpClientTransport - INBOUND WINDOW_UPDATE: streamId=0 windowSizeIncrement=2147418112
15:29:27.226 [OkHttpClientTransport] DEBUG io.grpc.okhttp.OkHttpClientTransport - OUTBOUND GO_AWAY: lastStreamId=0 errorCode=PROTOCOL_ERROR length=0 bytes=
15:29:27.226 [OkHttpClientTransport] DEBUG io.grpc.okhttp.internal.framed.Http2$FrameLogger - >> 0x00000000     8 GOAWAY
15:29:27.323 [myapp-thread] ERROR com.myapp.MyApplication - GRPC Exception: Status{code=INTERNAL, description=error in frame handler, cause=java.lang.IllegalArgumentException: Window size overflow for stream: 0
    at io.grpc.okhttp.OutboundFlowController$OutboundFlowState.incrementStreamWindow(OutboundFlowController.java:263)
    at io.grpc.okhttp.OutboundFlowController.windowUpdate(OutboundFlowController.java:88)
    at io.grpc.okhttp.OkHttpClientTransport$ClientFrameHandler.windowUpdate(OkHttpClientTransport.java:1352)
    at io.grpc.okhttp.internal.framed.Http2$Reader.readWindowUpdate(Http2.java:365)
    at io.grpc.okhttp.internal.framed.Http2$Reader.nextFrame(Http2.java:178)
    at io.grpc.okhttp.OkHttpClientTransport$ClientFrameHandler.run(OkHttpClientTransport.java:1084)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
}

On the surface, the "window size overflow" looks correct:

inbound INITIAL_WINDOW_SIZE + WINDOW_UPDATE = 65536 + 2147418112 = Integer.MAX_VALUE + 1
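
A minimal check of that arithmetic (plain JDK, no gRPC involved); note the increment only stays within bounds if the connection-level send window started at the spec default of 65535:

public class WindowOverflowCheck {
  public static void main(String[] args) {
    long fromSpecDefault = 65_535L + 2_147_418_112L;  // 2147483647 = Integer.MAX_VALUE, still legal
    long fromOneByteMore = 65_536L + 2_147_418_112L;  // 2147483648 = Integer.MAX_VALUE + 1, overflow
    System.out.println(fromSpecDefault == Integer.MAX_VALUE);  // true
    System.out.println(fromOneByteMore > Integer.MAX_VALUE);   // true, the "Window size overflow" case
  }
}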

What doesn't make sense to me is that when repeating the same setup with Netty, there's no problem. It looks like Netty and OkHttp are both doing the same "window size overflow" checks, but the Netty client doesn't complain about the flow control window getting too large.


FINEST: [Subchannel<5>: (myapp.k8s.nginx-ingress.url:443)] Subchannel for [[[myapp.k8s.nginx-ingress.url/12.345.678.900:443]/{}]] created
FINEST: [Subchannel<5>: (myapp.k8s.nginx-ingress.url:443)] CONNECTING as requested
FINEST: [Subchannel<5>: (myapp.k8s.nginx-ingress.url:443)] Started transport NettyClientTransport<8>: (myapp.k8s.nginx-ingress.url/12.345.678.900:443)
FINEST: [NettyClientTransport<8>: (myapp.k8s.nginx-ingress.url/12.345.678.900:443)] WaitUntilActive started
FINEST: [Channel<1>: (myapp.k8s.nginx-ingress.url:443)] Entering CONNECTING state
FINEST: [NettyClientTransport<8>: (myapp.k8s.nginx-ingress.url/12.345.678.900:443)] WaitUntilActive completed
FINEST: [NettyClientTransport<8>: (myapp.k8s.nginx-ingress.url/12.345.678.900:443)] ClientTls started
FINEST: [NettyClientTransport<8>: (myapp.k8s.nginx-ingress.url/12.345.678.900:443)] ClientTls completed
14:54:50.240 [grpc-nio-worker-ELG-1-5] DEBUG io.grpc.netty.NettyClientHandler - [id: 0x7059bd99, L:/10.59.248.131:53920 - R:myapp.k8s.nginx-ingress.url/12.345.678.900:443] OUTBOUND SETTINGS: ack=false settings={ENABLE_PUSH=0, MAX_CONCURRENT_STREAMS=0, INITIAL_WINDOW_SIZE=130710, MAX_HEADER_LIST_SIZE=8192}
14:54:50.262 [grpc-nio-worker-ELG-1-5] DEBUG io.grpc.netty.NettyClientHandler - [id: 0x7059bd99, L:/10.59.248.131:53920 - R:myapp.k8s.nginx-ingress.url/12.345.678.900:443] INBOUND SETTINGS: ack=false settings={MAX_CONCURRENT_STREAMS=128, INITIAL_WINDOW_SIZE=65536, MAX_FRAME_SIZE=16777215}
FINEST: [Subchannel<5>: (myapp.k8s.nginx-ingress.url:443)] READY
14:54:50.280 [grpc-nio-worker-ELG-1-5] DEBUG io.grpc.netty.NettyClientHandler - [id: 0x7059bd99, L:/10.59.248.131:53920 - R:myapp.k8s.nginx-ingress.url/12.345.678.900:443] INBOUND WINDOW_UPDATE: streamId=0 windowSizeIncrement=2147418112
FINEST: [Channel<1>: (myapp.k8s.nginx-ingress.url:443)] Entering READY state
14:54:50.325 [grpc-nio-worker-ELG-1-5] DEBUG io.grpc.netty.NettyClientHandler - [id: 0x7059bd99, L:/10.59.248.131:53920 - R:myapp.k8s.nginx-ingress.url/12.345.678.900:443] INBOUND SETTINGS: ack=true
14:54:50.402 [grpc-nio-worker-ELG-1-5] DEBUG io.grpc.netty.NettyClientHandler - [id: 0x7059bd99, L:/10.59.248.131:53920 - R:myapp.k8s.nginx-ingress.url/12.345.678.900:443] OUTBOUND HEADERS: streamId=3 headers=GrpcHttp2OutboundHeaders[:authority: myapp.k8s.nginx-ingress.url:443, :path: /com.myapp.TestService/TestStream, :method: POST, :scheme: https, content-type: application/grpc, te: trailers, user-agent: grpc-java-netty/1.24.3-SNAPSHOT, grpc-accept-encoding: gzip, grpc-trace-bin: AABNi/csLTZg0xJOYgi7t2qOAeDhMIhQbRIuAgA, grpc-timeout: 73691234u] streamDependency=0 weight=16 exclusive=false padding=0 endStream=false
14:54:50.421 [grpc-nio-worker-ELG-1-5] DEBUG io.grpc.netty.NettyClientHandler - [id: 0x7059bd99, L:/10.59.248.131:53920 - R:myapp.k8s.nginx-ingress.url/12.345.678.900:443] OUTBOUND DATA: streamId=3 padding=0 endStream=true length=16 bytes=000000000b0896958c86842e12021200
14:54:50.514 [grpc-nio-worker-ELG-1-5] DEBUG io.grpc.netty.NettyClientHandler - [id: 0x7059bd99, L:/10.59.248.131:53920 - R:myapp.k8s.nginx-ingress.url/12.345.678.900:443] INBOUND HEADERS: streamId=3 headers=GrpcHttp2ResponseHeaders[:status: 200, server: openresty/1.15.8.2, date: Thu, 13 Feb 2020 22:54:50 GMT, content-type: application/grpc, strict-transport-security: max-age=15724800; includeSubDomains, grpc-encoding: identity, grpc-accept-encoding: gzip] padding=0 endStream=false
14:54:50.522 [grpc-nio-worker-ELG-1-5] DEBUG io.grpc.netty.NettyClientHandler - [id: 0x7059bd99, L:/10.59.248.131:53920 - R:myapp.k8s.nginx-ingress.url/12.345.678.900:443] INBOUND DATA: streamId=3 padding=0 endStream=false length=116 bytes=000000006f08dda08c86842e180120282a600a0708dda08c86842e122b0a1470726f746f5f706c6163656d656e745f696e666f12133732333930333432323834...
// Many more DATA frames...

FYI I tried to clean out the logs before pasting them here since there were multiple subchannels showing repetitive logs, so it's possible I mucked something up by accident.

@chrisschek (Contributor, Author)

> ejona86:
> Also, I see some suspicious behavior where the receive-side window size is being passed to the send-side OutboundFlowController. That seems likely to be broken as well

Alright, ran it through a debugger and it looks like that's exactly what's happening in my previous comment. The client ignores the INITIAL_WINDOW_SIZE in the SETTINGS frame and instead uses the inbound window size as its starting value. Since the particular server I was using tries to max out the flow control window, the "window size overflow" exception is thrown.
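
For reference, a minimal sketch (not grpc-java's actual OutboundFlowController) of how a send-side stream window is tracked per RFC 7540: it starts at 65535, and only the peer's SETTINGS delta and WINDOW_UPDATE frames move it; the local receive-side flowControlWindow setting never feeds into it:

final class SendWindowSketch {
  private static final int SPEC_DEFAULT_WINDOW = 65_535;  // RFC 7540 section 6.9.2

  private long window = SPEC_DEFAULT_WINDOW;

  // Peer changed SETTINGS_INITIAL_WINDOW_SIZE: adjust the stream window by the delta.
  void onPeerInitialWindowSize(int newSize, int oldSize) {
    window += (long) newSize - oldSize;
  }

  // Peer sent a WINDOW_UPDATE for this stream.
  void onWindowUpdate(int increment) {
    long updated = window + increment;
    if (updated > Integer.MAX_VALUE) {
      throw new IllegalArgumentException("Window size overflow");  // mirrors the error seen above
    }
    window = updated;
  }
}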

@chrisschek (Contributor, Author)

Took the liberty of drafting a PR with what I believe is the correct behavior.

ejona86 modified the milestones: Next, 1.28 on Feb 27, 2020
voidzcy pushed a commit to voidzcy/grpc-java that referenced this issue Feb 27, 2020
…ning of connection

Specifically, this addresses bugs that occur when the `OkHttpChannelBuilder.flowControlWindow(int)` setting is increased from its default value.

Two changes:
1. On starting a connection, ensure the value of `OkHttpChannelBuilder.flowControlWindow(int)` is sent via Settings.INITIAL_WINDOW_SIZE. Also send a WINDOW_UPDATE after Settings to update the connection-level window.
2. Always initialize the `OutboundFlowController` with an initialWindowSize of 65535 bytes per the [http2 spec](https://http2.github.io/http2-spec/#InitialWindowSize) instead of using the inbound window size.

Fixes grpc#6685
voidzcy added a commit that referenced this issue Feb 27, 2020
…ning of connection (v1.28.x backport)

Specifically, this addresses bugs that occur when the `OkHttpChannelBuilder.flowControlWindow(int)` setting is increased from its default value.

Two changes:
1. On starting a connection, ensure the value of `OkHttpChannelBuilder.flowControlWindow(int)` is sent via Settings.INITIAL_WINDOW_SIZE. Also send a WINDOW_UPDATE after Settings to update the connection-level window.
2. Always initialize the `OutboundFlowController` with an initialWindowSize of 65535 bytes per the [http2 spec](https://http2.github.io/http2-spec/#InitialWindowSize) instead of using the inbound window size.

Fixes #6685
Backport of #6742
lock bot locked as resolved and limited conversation to collaborators on Jun 24, 2020
dfawley pushed a commit to dfawley/grpc-java that referenced this issue Jan 15, 2021
…ning of connection

Specifically, this addresses bugs that occur when the `OkHttpChannelBuilder.flowControlWindow(int)` setting is increased from its default value.

Two changes:
1. On starting a connection, ensure the value of `OkHttpChannelBuilder.flowControlWindow(int)` is sent via Settings.INITIAL_WINDOW_SIZE. Also send a WINDOW_UPDATE after Settings to update the connection-level window.
2. Always initialize the `OutboundFlowController` with an initialWindowSize of 65535 bytes per the [http2 spec](https://http2.github.io/http2-spec/#InitialWindowSize) instead of using the inbound window size.

Fixes grpc#6685