kvstore: share etcd client logger to reduce memory usage #26485
Conversation
/test
Force-pushed from cbfc728 to 4c5d60f
I'm not fond of adding an explicit dependency on the zap logger, but I guess that at this point it's the only option for configuring the etcd client. Given the benefits in terms of memory allocations, I think it's worth it.
Currently, the etcd client logger is not explicitly configured when a new etcd client is initialized, which means that a new one gets created for every client. Yet, this comes with a significant memory cost, which can be avoided by explicitly initializing and configuring a shared logger. This change mimics a similar one implemented for the kubernetes apiserver: kubernetes/kubernetes#111477

The following reports the top 3 nodes from the heap pprof profile of an idle process after initializing 1K etcd clients through `kvstore.NewClient`.

Without the patch:

```
Showing nodes accounting for 268.62MB, 88.75% of 302.68MB total
Dropped 124 nodes (cum <= 1.51MB)
Showing top 3 nodes out of 60
      flat  flat%   sum%        cum   cum%
  224.33MB 74.11% 74.11%   224.33MB 74.11%  go.uber.org/zap/zapcore.newCounters
   32.45MB 10.72% 84.83%    32.45MB 10.72%  google.golang.org/grpc/internal/transport.newBufWriter
   11.85MB  3.91% 88.75%    11.85MB  3.91%  bufio.NewReaderSize
```

With the patch:

```
Showing nodes accounting for 65.20MB, 69.19% of 94.23MB total
Showing top 3 nodes out of 137
      flat  flat%   sum%        cum   cum%
   43.62MB 46.29% 46.29%    43.62MB 46.29%  google.golang.org/grpc/internal/transport.newBufWriter
   19.08MB 20.25% 66.54%    19.08MB 20.25%  bufio.NewReaderSize
    2.50MB  2.65% 69.19%        3MB  3.19%  time.NewTimer
```

Co-authored-by: yougjiahe <yongjiahe@tuputech.com>
Signed-off-by: Marco Iorio <marco.iorio@isovalent.com>
Force-pushed from 4c5d60f to e515761
Rebased onto main to pick up the latest CI changes.
/test
Runtime Test (privileged) hit #26479. Manually rerunning.
lgtm, thanks!
The PR has been approved by two reviewers (@marseel covered the kvstore-owned files), and the CI is green. Marking as ready-to-merge so that we can still include it in v1.14 without the need for backports.
This change was initially proposed by @WingkaiHo as part of #22263