feat: accept protobuf responses where possible from kube-api (#18603) #18602
base: master
Conversation
Force-pushed from b3a9a17 to 81b6846
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##           master   #18602      +/-   ##
==========================================
- Coverage   50.36%   50.34%   -0.02%
==========================================
  Files         315      315
  Lines       43190    43219      +29
==========================================
+ Hits        21752    21758       +6
- Misses      18954    18975      +21
- Partials     2484     2486       +2

☔ View full report in Codecov by Sentry.
Force-pushed from f838bf9 to 9c5f234
I wonder how big of an improvement this is, @cnmcavoy. Do you have any relevant numbers, please?
This is the result I got when experimenting on a 15k-pod cluster before giving the advice to tweak the content type to proto. I'd expect the diff to grow at least linearly as the number of pods grows.
Tested against an internal instance, and I'm not seeing a significant difference in resource usage (memory, CPU, network). I think part of the problem is that the PR only sets content types on some of the clients, mostly the ones used to get resources the component immediately needs from the local k8s API (i.e. things like secrets and configmaps). The heavy lifting of the application controller is done with separately-initialized k8s clients in the cluster cache code. I tried to set that to use protobuf too here, but am so far not seeing significant wins: b98df97
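For reference, a minimal client-go sketch (the helper name is an assumption, not this PR's actual wiring) of how a rest.Config is typically switched to protobuf. JSON stays in AcceptContentTypes as a fallback because some resources, notably CRDs, are only served as JSON; dynamic/unstructured clients also keep negotiating JSON, which limits how much of the traffic this change can affect.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// protoConfig returns a copy of the given rest.Config that asks the
// kube-apiserver for protobuf-encoded responses, keeping JSON as a
// fallback for resources (e.g. CRDs) that are only served as JSON.
func protoConfig(in *rest.Config) *rest.Config {
	cfg := rest.CopyConfig(in)
	cfg.ContentType = "application/vnd.kubernetes.protobuf"
	cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	return cfg
}

func main() {
	// In-cluster config is used here only to keep the sketch short;
	// a kubeconfig-based rest.Config would be tweaked the same way.
	base, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(protoConfig(base))
	if err != nil {
		panic(err)
	}
	fmt.Printf("typed client ready: %T\n", clientset)
}
```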
Force-pushed from e0162be to 8eb4284
I updated the PR to ensure that the protobuf headers get set in the other places and also tried to wire up the user agents correctly. The cluster cache and several controllers were defaulting to the stock user agent, which made it hard to trace who was making the k8s API calls.
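As a rough sketch of the user-agent part (the helper and the component string below are assumptions, not this PR's code), a component-specific user agent can be set on the same rest.Config so apiserver audit logs and request logging attribute calls to the controller that made them rather than to the stock client-go agent:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/rest"
)

// withUserAgent tags a rest.Config with a component-specific user agent so
// kube-apiserver audit logs and request logging can attribute the calls to
// a concrete controller instead of the default client-go agent string.
func withUserAgent(in *rest.Config, component string) *rest.Config {
	cfg := rest.CopyConfig(in)
	cfg.UserAgent = fmt.Sprintf("%s %s", component, rest.DefaultKubernetesUserAgent())
	return cfg
}

func main() {
	// Hypothetical host and component name, used only for illustration.
	cfg := &rest.Config{Host: "https://127.0.0.1:6443"}
	fmt.Println(withUserAgent(cfg, "argocd-application-controller").UserAgent)
}
```

client-go also ships rest.AddUserAgent, which appends a suffix to the default agent string for the same purpose.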
Signed-off-by: Cameron McAvoy <cmcavoy@indeed.com>
Signed-off-by: Cameron McAvoy <cmcavoy@indeed.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Force-pushed from 8eb4284 to 4a3615b
…ults Signed-off-by: Cameron McAvoy <cmcavoy@indeed.com>
Force-pushed from 4a3615b to 5a30eb7
Checklist:
Fixes: #18603