
x/net/http2: support memory re-use for MetaHeadersFrames #22717

dfawley opened this issue Nov 14, 2017 · 6 comments

Labels: NeedsInvestigation, Performance



@dfawley commented Nov 14, 2017

As pointed out in grpc/grpc-go#1587 (comment) (and in cockroachdb/cockroach#17370), re-using MetaHeadersFrame memory, similarly to what is already done for DataFrames, has the potential to improve performance significantly (~50% throughput increase), even when the header frame data is quite small (tens of bytes). The http2 library should ideally support this.


@odeke-em commented Nov 14, 2017

/cc @bradfitz @tombergan @rs

@odeke-em added this to the Unplanned milestone Nov 14, 2017

@tombergan commented Nov 22, 2017

@dfawley Do you have a benchmark you can share? Ideally something we could copy into x/net/http2.

Also: it's not clear from the grpc issue if this was a microbenchmark or a more realistic benchmark.


@Zeymo commented Nov 22, 2017


> The http2 library should ideally support this

We built a gateway on top of x/net/http2. When reusing MetaHeadersFrame (especially mh.Fields) and HeaderFrame, we gained roughly 10% QPS and reduced GC pause time by about 3%.

PS: we have 7–10 header fields per stream.

@ALTree added the NeedsDecision and NeedsInvestigation labels and removed the NeedsDecision label Mar 3, 2019

@y3llowcake commented Jan 9, 2020

I have a relatively simple production go-grpc server workload that does near-zero heap allocation in the request handler. As QPS increases, p99 client response times start increasing, and this lines up with an increase in GC durations on the server.

The alloc_space profile shows ~30% of allocated space under the meta-frame parsing call paths. See attached image. Happy to provide more debug info.



@detailyang commented Mar 26, 2020

It looks like the current http2 design makes this memory hard to reuse :(

go tool pprof -alloc_space mem.out
Type: alloc_space
Time: Mar 26, 2020 at 10:38am (CST)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top10
Showing nodes accounting for 464.16MB, 67.00% of 692.83MB total
Dropped 47 nodes (cum <= 3.46MB)
Showing top 10 nodes out of 92
      flat  flat%   sum%        cum   cum%
  110.02MB 15.88% 15.88%   110.02MB 15.88%  golang.org/x/net/http2.(*Framer).readMetaFrame.func1
   70.57MB 10.19% 26.07%    70.57MB 10.19%
   47.01MB  6.79% 32.85%    47.01MB  6.79%
   40.01MB  5.78% 38.63%    40.01MB  5.78%  net/textproto.MIMEHeader.Add (inline)
   37.51MB  5.41% 44.04%    37.51MB  5.41%  net/textproto.MIMEHeader.Set
   37.51MB  5.41% 49.45%    37.51MB  5.41%  golang.org/x/net/http2.(*clientConnReadLoop).handleResponse
   34.01MB  4.91% 54.36%    93.02MB 13.43%  golang.org/x/net/http2.(*serverConn).newWriterAndRequestNoBody
   33.51MB  4.84% 59.20%    33.51MB  4.84%  golang.org/x/net/http2.(*ClientConn).newStream
   27.01MB  3.90% 63.10%    36.51MB  5.27%  net/http.(*Request).WithContext (inline)
      27MB  3.90% 67.00%   160.04MB 23.10%  golang.org/x/net/http2.(*serverConn).newWriterAndRequest


@easwars commented May 17, 2022

grpc/grpc-go#3305 is blocked on this issue. Could you please share an update on this? Thank you.


9 participants