x/net/http2: support memory re-use for MetaHeadersFrames #22717

Open
dfawley opened this issue Nov 14, 2017 · 5 comments
dfawley commented Nov 14, 2017

As pointed out in grpc/grpc-go#1587 (comment) (and in cockroachdb/cockroach#17370), re-using MetaHeadersFrame memory similarly to DataFrames has the potential to increase performance significantly (~50% throughput increase), even where the header frame data is quite small (10s of bytes). The http2 library should ideally support this.

odeke-em (Member) added this to the Unplanned milestone Nov 14, 2017
tombergan (Contributor) commented Nov 22, 2017

@dfawley Do you have a benchmark you can share? Ideally something we could copy into x/net/http2.

Also: it's not clear from the grpc issue if this was a microbenchmark or a more realistic benchmark.

Zeymo commented Nov 22, 2017

+1

"The http2 library should ideally support this"

We built a gateway on top of x/net/http2. When we reuse the MetaHeadersFrame (especially mh.Fields) and the HeadersFrame, we gain roughly 10% QPS and reduce pause time by about 3%.

PS: we have 7 to 10 header fields per stream.

y3llowcake commented Jan 9, 2020

I have a relatively simple production go-grpc server workload that does near-zero heap allocation in the request handler. As QPS increases, p99 client response times start increasing, and this lines up with an increase in GC durations on the server.

The alloc_space profile shows ~30% of allocated space attributed to meta-frame parsing call paths. See the attached image. Happy to provide more debug info.

[attached image: pprof heap alloc_space profile of the grpc server]

detailyang commented Mar 26, 2020

It looks like the current http2 design makes memory reuse hard :(

go tool pprof -alloc_space mem.out
Type: alloc_space
Time: Mar 26, 2020 at 10:38am (CST)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top10
Showing nodes accounting for 464.16MB, 67.00% of 692.83MB total
Dropped 47 nodes (cum <= 3.46MB)
Showing top 10 nodes out of 92
      flat  flat%   sum%        cum   cum%
  110.02MB 15.88% 15.88%   110.02MB 15.88%  golang.org/x/net/http2.(*Framer).readMetaFrame.func1
   70.57MB 10.19% 26.07%    70.57MB 10.19%  golang.org/x/net/http2.glob..func1
   47.01MB  6.79% 32.85%    47.01MB  6.79%  golang.org/x/net/http2.cloneHeader
   40.01MB  5.78% 38.63%    40.01MB  5.78%  net/textproto.MIMEHeader.Add (inline)
   37.51MB  5.41% 44.04%    37.51MB  5.41%  net/textproto.MIMEHeader.Set
   37.51MB  5.41% 49.45%    37.51MB  5.41%  golang.org/x/net/http2.(*clientConnReadLoop).handleResponse
   34.01MB  4.91% 54.36%    93.02MB 13.43%  golang.org/x/net/http2.(*serverConn).newWriterAndRequestNoBody
   33.51MB  4.84% 59.20%    33.51MB  4.84%  golang.org/x/net/http2.(*ClientConn).newStream
   27.01MB  3.90% 63.10%    36.51MB  5.27%  net/http.(*Request).WithContext (inline)
      27MB  3.90% 67.00%   160.04MB 23.10%  golang.org/x/net/http2.(*serverConn).newWriterAndRequest