
Implement non-caching, per-kustomization GC-client/statusPoller for cross-cluster kubeconfigs #135

Merged · 2 commits · Oct 16, 2020

Conversation

stealthybox (Member) commented on Oct 7, 2020

Fixes #127

First shot at this after familiarizing myself with all of the clientcmd and restclient types/methods.
Extensive testing with CAPI is still pending.

Clients are re-created from the KubeConfig SecretRef for each reconciliation of a particular Kustomization.
(Kubeconfigs such as the ones used for CAPA-managed EKS clusters are regularly refreshed behind the scenes.)

I chose not to create a cache for these clients since they only survive a single reconciliation.
We could instead maintain a map of NamespacedNames to restClients in the KustomizationReconciler.
This might be worthwhile and fairly simple -- let me know what you think.
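
As a rough illustration (not the code in this PR), a per-reconciliation client built from the referenced Secret could look like the sketch below; the helper name, the reconciler's `Client`/`Scheme` fields, and the Secret data key `value` are assumptions:

```go
import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/clientcmd"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// newRemoteClient is a hypothetical helper that reads the kubeconfig from the
// referenced Secret and builds a short-lived, non-caching client from it.
func (r *KustomizationReconciler) newRemoteClient(ctx context.Context, secretName types.NamespacedName) (client.Client, error) {
	var secret corev1.Secret
	if err := r.Client.Get(ctx, secretName, &secret); err != nil {
		return nil, fmt.Errorf("unable to read KubeConfig secret %s: %w", secretName, err)
	}

	// The data key holding the kubeconfig is an assumption in this sketch.
	kubeConfig, ok := secret.Data["value"]
	if !ok {
		return nil, fmt.Errorf("KubeConfig secret %s has no 'value' key", secretName)
	}

	restConfig, err := clientcmd.RESTConfigFromKubeConfig(kubeConfig)
	if err != nil {
		return nil, err
	}

	// No caching: the client only lives for the current reconciliation, so
	// regularly refreshed kubeconfigs (e.g. CAPA-managed EKS) are always picked up.
	return client.New(restConfig, client.Options{Scheme: r.Scheme})
}
```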

Currently, when not specifying a KubeConfig, we return the Reconciler's general client and statusPoller.
I expect to change this for security reasons in the future Impersonation patches, since HealthChecking
could be a form of cross-tenant information disclosure, and it could be possible to trick the Garbage Collector
into deleting things you'd otherwise not have access to.
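
Sketched under the same assumptions (field, helper, and import names are illustrative, including the v1beta1 API path), the selection between the built-in client/poller and a per-reconciliation remote one might look like:

```go
import (
	"context"

	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/cli-utils/pkg/kstatus/polling"
	"sigs.k8s.io/controller-runtime/pkg/client"

	kustomizev1 "github.com/fluxcd/kustomize-controller/api/v1beta1"
)

// getClientAndPoller is a hypothetical selector: with no spec.kubeConfig it
// returns the reconciler's general client and statusPoller, otherwise it
// builds per-reconciliation equivalents from the referenced Secret.
func (r *KustomizationReconciler) getClientAndPoller(ctx context.Context, kustomization kustomizev1.Kustomization) (client.Client, *polling.StatusPoller, error) {
	if kustomization.Spec.KubeConfig == nil {
		// Default: the controller's own client and statusPoller.
		return r.Client, r.StatusPoller, nil
	}

	secretName := types.NamespacedName{
		Namespace: kustomization.GetNamespace(),
		Name:      kustomization.Spec.KubeConfig.SecretRef.Name,
	}

	// newRemoteClientAndPoller would be the per-reconciliation constructor
	// sketched earlier, extended to also build a statusPoller from the same
	// rest.Config (e.g. via a RESTMapper for the remote cluster).
	return r.newRemoteClientAndPoller(ctx, secretName)
}
```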

I wonder how we can e2e test this?

stealthybox marked this pull request as a draft on October 7, 2020 08:00
stefanprodan (Member) commented

> I chose not to create a cache for these clients since they only survive a single reconciliation.

I don't think we should be doing any caching, re-creating the KubeConfig for each reconciliation is 💯

> I expect to change this for security reasons in the future Impersonation patches, since HealthChecking
> could be a form of cross-tenant information disclosure, and it could be possible to trick the Garbage Collector
> into deleting things you'd otherwise not have access to.

Yes, I think we should address this when we refactor the Impersonation.

stefanprodan (Member) commented on Oct 7, 2020

@stealthybox can you please confirm that this works with CAPI (GC + HealthChecking)? If so, I would merge this without e2e tests and figure out later how to spin up a CAPI Kind provider in GitHub Actions. I want to get this out so that other people can test it with other CAPI providers. Before releasing this, we need to add a section to the API docs that explains how the KubeConfig field works and how it can be used from a CAPI management cluster.

stealthybox marked this pull request as ready for review on October 15, 2020 00:43
stealthybox force-pushed the gc-healthcheck-kubeconfig branch 2 times, most recently from 0dc660a to e33e7a4 on October 15, 2020 01:37
stealthybox (Member, Author) commented on Oct 15, 2020

I've verified that Apply, Prune/GC, HealthCheck, and Delete are all working reliably with spec.kubeConfig on a CAPI child-cluster. 🎉

I've added some rather verbose docs for these fields inline with the API types, along with a new section in the documentation.
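
For reference, usage of the new field looks roughly like this (names and namespace are placeholders; the Secret is expected to hold the kubeconfig of the target cluster):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: webapp
  namespace: default
spec:
  interval: 5m
  path: "./deploy"
  prune: true
  sourceRef:
    kind: GitRepository
    name: webapp
  # Apply, prune/GC, and health checks run against the cluster referenced
  # by this kubeconfig Secret instead of the cluster the controller runs in.
  kubeConfig:
    secretRef:
      name: child-cluster-kubeconfig
```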

The previous commit failed for some reason related to the CRD not being installed -- hopefully this one is green.

stealthybox force-pushed the gc-healthcheck-kubeconfig branch 3 times, most recently from 38bf551 to dcb7c76 on October 15, 2020 04:46
stefanprodan (Member) left a comment

LGTM

Thanks @stealthybox 🏅

Successfully merging this pull request may close these issues: Enable pruning for remote clusters