The Kubernetes dynamic client relies on API discovery to know the API group and version of a given Kubernetes kind. Until now, we've been declaring a new in-memory discovery cache client for all of our needs. This is fine for things like `Apply`, since discovery happens only once before applying all manifests, and every manifest that gets applied references that same in-memory discovery cache. However, this becomes a problem when Spinnaker is frequently requesting information about manifests or listing resources (for the infrastructure page). We see a lot of requests against the cluster, and very frequently. These requests are small, but I wanted our Go client to get the same efficiency as `kubectl` does with its disk cache, without having to shell out.
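For illustration, here is a rough sketch (not the actual change in this PR) of the two approaches using client-go's cached discovery packages; the cache directories and TTL below are placeholders, not the values used by clouddriver:

```go
package main

import (
	"time"

	"k8s.io/client-go/discovery"
	diskcached "k8s.io/client-go/discovery/cached/disk"
	memcached "k8s.io/client-go/discovery/cached/memory"
	"k8s.io/client-go/rest"
)

// newInMemoryDiscovery builds an in-memory cached discovery client. Every new
// instance starts cold, so discovery is re-run against the API server each time
// a fresh client is constructed.
func newInMemoryDiscovery(config *rest.Config) (discovery.CachedDiscoveryInterface, error) {
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		return nil, err
	}
	return memcached.NewMemCacheClient(dc), nil
}

// newDiskDiscovery mirrors kubectl's behavior: discovery results and HTTP
// responses are cached on disk and survive across client instances.
func newDiskDiscovery(config *rest.Config) (discovery.CachedDiscoveryInterface, error) {
	return diskcached.NewCachedDiscoveryClientForConfig(
		config,
		"/var/kube/cache/discovery", // discovery cache dir (illustrative)
		"/var/kube/cache/http",      // HTTP response cache dir (illustrative)
		10*time.Minute,              // TTL before re-running discovery (illustrative)
	)
}
```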
I added logs to the client-go package at https://github.com/kubernetes/client-go/blob/release-1.19/rest/request.go#L954 to log all of these discovery requests and get an idea of the overhead we're looking at.
Here's an example:
This is a request to get all load balancers, which in Kubernetes map to the Service and Ingress kinds. Here are all of the underlying requests our Kubernetes Go client makes to the server:
All but the last two requests are from the API discovery! Let's compare that to when we use a disk cache.
Since the API discovery cache is already stored on disk, we don't need to make several requests just to figure out which endpoints to call to get our services and ingresses.
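As a sketch of what that resolution looks like with client-go (the function name, namespace handling, and group/versions here are illustrative, not Spinnaker's actual code): a deferred REST mapper backed by the cached discovery client resolves each kind to its group/version/resource, and with a warm disk cache that resolution no longer costs extra round trips to the API server.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/restmapper"
)

// listLoadBalancers lists Services and Ingresses in a namespace via the dynamic
// client. Each RESTMapping call needs discovery data; with a warm disk cache it
// is answered locally instead of re-querying the API server.
func listLoadBalancers(ctx context.Context, config *rest.Config, cached discovery.CachedDiscoveryInterface, namespace string) error {
	mapper := restmapper.NewDeferredDiscoveryRESTMapper(cached)
	client, err := dynamic.NewForConfig(config)
	if err != nil {
		return err
	}

	kinds := []schema.GroupVersionKind{
		{Group: "", Version: "v1", Kind: "Service"},
		{Group: "networking.k8s.io", Version: "v1", Kind: "Ingress"},
	}
	for _, gvk := range kinds {
		// Resolve the kind to a group/version/resource using (cached) discovery.
		mapping, err := mapper.RESTMapping(gvk.GroupKind(), gvk.Version)
		if err != nil {
			return err
		}
		list, err := client.Resource(mapping.Resource).Namespace(namespace).List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		fmt.Printf("%s: %d items\n", gvk.Kind, len(list.Items))
	}
	return nil
}
```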
A few notes:

- The discovery cache is written under `/var/kube/cache`.
- We'll need either an `emptyDir` volume or a persistent volume to store this cache. I think we should go with a persistent volume so that we can scale clouddriver and all instances will reference the same on-disk cache.