increase auth cache size #83643
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: lavalamp. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/test pull-kubernetes-kubemark-e2e-gce-big
The spelling bot has you flagged, lgtm otherwise
Clarification on the comment may be useful.
// Cache performance degrades noticably when the number of
// tokens in operation exceeds the size of the cache. It is
// cheap to make the cache big in the second dimension below,
// the memory is only consumed when that many tokens are being
// used.
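The point about the second dimension being cheap to enlarge can be illustrated with a minimal sketch. This is hypothetical Go, not the actual kube-apiserver cache: a map-backed cache with a large configured maximum allocates memory only as tokens are actually inserted, so raising the maximum costs essentially nothing up front.

```go
package main

import "fmt"

// lazyCache is an illustrative sketch: the configured maximum can
// be large, but the backing map grows only with actual use.
type lazyCache struct {
	max   int
	items map[string]struct{}
}

func newLazyCache(max int) *lazyCache {
	// The map starts empty; declaring max = 32768 consumes almost
	// no memory until that many tokens are actually in operation.
	return &lazyCache{max: max, items: make(map[string]struct{})}
}

func (c *lazyCache) Add(token string) {
	if len(c.items) < c.max {
		c.items[token] = struct{}{}
	}
}

func main() {
	c := newLazyCache(32768)
	c.Add("token-a")
	c.Add("token-b")
	fmt.Println(len(c.items)) // 2: only inserted tokens consume memory
}
```

The names (`lazyCache`, `newLazyCache`) are invented for the sketch; the real cache is striped and more sophisticated, but the memory behavior it relies on is the same.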
If we consider the case of short-lived SA tokens from the TokenRequest API, the cache will grow to the max size before dropping old tokens, since there is no goroutine or similar to purge the cache. I do not think that is immediately apparent from this comment.
That being said, even in the worst case I do not think this is an issue.
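The behavior described here can be sketched with a toy LRU cache (illustrative Go, not the real implementation): with no background purge, expired tokens linger until newer inserts push them out, so the cache simply fills to its maximum and then evicts the oldest entry on each new insert.

```go
package main

import (
	"container/list"
	"fmt"
)

// simpleLRU is a hypothetical sketch showing how a cache with no
// purge goroutine behaves as short-lived tokens keep arriving.
type simpleLRU struct {
	max   int
	order *list.List               // front = most recently used
	items map[string]*list.Element // token -> list node
}

func newSimpleLRU(max int) *simpleLRU {
	return &simpleLRU{max: max, order: list.New(), items: map[string]*list.Element{}}
}

func (c *simpleLRU) Add(token string) {
	if el, ok := c.items[token]; ok {
		c.order.MoveToFront(el)
		return
	}
	if c.order.Len() >= c.max {
		// Eviction happens only on insert; expired tokens are not
		// removed proactively, they wait to be pushed out.
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(string))
	}
	c.items[token] = c.order.PushFront(token)
}

func main() {
	c := newSimpleLRU(4)
	for i := 0; i < 10; i++ {
		c.Add(fmt.Sprintf("token-%d", i))
	}
	fmt.Println(c.order.Len()) // 4: grew to max and stayed there
	_, hasOld := c.items["token-0"]
	fmt.Println(hasOld) // false: the oldest entries were evicted
}
```

Under this model the worst case is simply a full cache of stale entries, which matches the comment above that even then it is not really a problem.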
How short-lived are such tokens? How many of them do we make?
And under what conditions do we make them?
They live 10 minutes by default, and are provisioned per-pod.
Force-pushed from 52ca294 to 236112c
on my permanent record, no doubt
/lgtm
/retest Review the full test history for this PR. Silence the bot with an
increase auth cache size
The benchmark in #83519 demonstrates that cache performance gets drastically worse when the number of tokens exceeds the size of the cache.
We advertise 5k nodes and 10k namespaces; it's reasonable to assume each node has a token and each namespace has a service account, which has a token.
This PR makes the cache accept 32k tokens. (It was 4k.)
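The sizing rationale above can be checked with back-of-the-envelope arithmetic. A small sketch, assuming "4k" and "32k" mean the usual power-of-two sizes (4096 and 32768); the node and namespace counts come from the PR description:

```go
package main

import "fmt"

func main() {
	const (
		nodes      = 5000  // advertised node scale limit
		namespaces = 10000 // advertised namespace scale limit
		oldSize    = 4096  // assumed previous cache size ("4k")
		newSize    = 32768 // assumed new cache size ("32k")
	)
	// One token per node plus one token per namespace's service account.
	steadyStateTokens := nodes + namespaces

	fmt.Println(steadyStateTokens)           // 15000
	fmt.Println(steadyStateTokens > oldSize) // true: exceeds the old cache
	fmt.Println(steadyStateTokens < newSize) // true: fits in the new cache
}
```

So at the advertised scale limits the old cache is guaranteed to thrash, while the new size leaves roughly 2x headroom for additional tokens such as the per-pod TokenRequest tokens discussed above.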
/kind bug
/priority important-soon