KEP 5339 - ClusterProfile Credentials external providers #5338
Conversation
corentone commented on May 23, 2025:
- Introduce a way to out-of-controller-tree load and use credentials when using ClusterProfile
- Issue link: N/A
- Other comments:
- see slack thread: https://kubernetes.slack.com/archives/C09R1PJR3/p1747786073248159
Hi @corentone. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
In order to populate the Cluster object that the exec provider requires, we standardize the following properties.

Property: `multicluster.sigs.io/execprovider-cluster`
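To make the property-based idea concrete, here is a minimal sketch of how a library might populate the exec plugin's Cluster input from ClusterProfile properties. The property key suffixes and the `ExecCluster` type (which loosely mirrors `k8s.io/client-go/pkg/apis/clientauthentication/v1.Cluster`) are assumptions for illustration, not the KEP's final API.

```go
package main

import "fmt"

// ExecCluster loosely mirrors the Cluster input that KEP 541 exec credential
// plugins receive; it is redefined here so the sketch is self-contained.
type ExecCluster struct {
	Server                   string
	InsecureSkipTLSVerify    bool
	CertificateAuthorityData []byte
}

// buildExecCluster reads hypothetical standardized properties off a
// ClusterProfile (represented here as a plain map) into the plugin input.
func buildExecCluster(props map[string]string) ExecCluster {
	return ExecCluster{
		Server:                   props["multicluster.sigs.io/execprovider-cluster-server"],
		CertificateAuthorityData: []byte(props["multicluster.sigs.io/execprovider-cluster-ca-data"]),
	}
}

func main() {
	c := buildExecCluster(map[string]string{
		"multicluster.sigs.io/execprovider-cluster-server": "https://10.0.0.1",
	})
	fmt.Println(c.Server) // endpoint read from the properties
}
```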
Hi corentone! This information would be exposed to all readers of the ClusterProfile object, which I fear might raise security concerns.
Another concern is that different plugins might require different cluster-specific information; would this be a free-form property?
Each plugin would also need some application-specific information to make the authentication work. That information is obviously not ideal to expose directly on the ClusterProfile; I am wondering if we could/should define a way for application-specific information to be discovered as well.
Regarding privacy of the endpoint, I'm wondering if it is really something we should hide. I don't think we should, to be honest. ClusterProfiles would still be protected by RBAC, so I think the endpoint is not a sensitive thing to store.
Regarding the cluster-specific info, could you share more details? I do suggest the ClusterManager provide the full data struct containing all the fields that are in the Cluster object defined above. Do you think we should allow modifications to it?
My assumption is that the plugin would be installed or set up in such a way that it can use the pod's volumes, service accounts, etc. So it would have access to the KSA especially, and to other env vars that may help identify the controller.
My concern is mostly that ClusterProfile objects are something many different agents would access, and a lot of those agents do not use the API for authentication (they could be doing scheduling work, observability work, etc.). RBAC can help, but with the current RBAC rules we only have object-level granularity: if the user grants an agent access to ClusterProfiles, it will have access to the auth-related data even if it does not need it. Using object references, on the other hand, would allow users to set RBAC rules for the auth data specifically. Though I do agree that data such as the API server address and CA bundle are not as sensitive as tokens.
As for the cluster-specific and application-specific data, we may have gone over this briefly on the Slack channel; basically, depending on how the authn workflow is implemented (there are obviously a variety of options), each plugin would need its own set of inputs.
For example:
- For a plugin that, let's assume, can magically (and securely) acquire a service account token out of thin air, it would need the following items to complete the auth flow:
  - (cluster-specific info) API server address and CA bundle
  - (application-specific info) the service account name that represents the application (that invokes the plugin)
- For a plugin that uses federated identities, it would need:
  - (cluster-specific info) API server address, CA bundle, expected audience for token exchange, and some platform-specific information (on AKS, the tenant ID; on GKE, the full workload identity provider ID)
  - (application-specific info) on AKS, the client ID
- For a plugin that uses SPIRE, it would need:
  - (cluster-specific info) API server address, CA bundle, expected audience
  - (application-specific info) SPIFFE ID
For flexibility, we might need something free-form, I assume? Of course we could just assume that everything is read from the local setup (mounted volumes, ConfigMaps, etc.), but that kind of defeats the benefits of having ClusterProfiles (especially for the cluster-specific information part). If we could have application-specific information linked with ClusterProfile objects (this one must be done via object references, for obvious reasons), it could further simplify the flow, I guess.
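To illustrate the free-form idea in the list above, here is a small sketch in which each authn plugin type declares the cluster-specific keys it needs, and a helper reports what a ClusterProfile's free-form map is missing. The plugin names and keys are illustrative assumptions, not part of any API.

```go
package main

import "fmt"

// requiredClusterInfo maps a hypothetical plugin type to the cluster-specific
// keys it needs, per the examples in the discussion above.
var requiredClusterInfo = map[string][]string{
	"magic-sa-token":     {"server", "caData"},
	"federated-identity": {"server", "caData", "audience"},
	"spire":              {"server", "caData", "audience"},
}

// missingKeys reports which required keys are absent from a profile's
// free-form cluster info map.
func missingKeys(pluginType string, info map[string]string) []string {
	var missing []string
	for _, key := range requiredClusterInfo[pluginType] {
		if _, ok := info[key]; !ok {
			missing = append(missing, key)
		}
	}
	return missing
}

func main() {
	// A profile that only published its endpoint cannot serve the SPIRE flow yet.
	fmt.Println(missingKeys("spire", map[string]string{"server": "https://10.0.0.1"}))
}
```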
This might be more relevant to another comment I posted, but I am wondering if we could have something like the following (modifying the ClusterProfile API):
```yaml
...
spec:
  authProviders:
  - type: aksoidc
    clusterInfo:
      # we could do either inline (arbitrary map)...
      caData: ...
      server: ...
      audience: ...
      tenantID: ...
      # or use object refs
      objectRef: ...
    targetApplications:
    - name: argo
      applicationInfo:
        objectRef: ...
    - name: multi-kueue
      applicationInfo:
        objectRef: ...
```
This way each type of plugin could work (exec-typed or libraries; not suggesting that this KEP should touch the latter, though), and it saves users the trouble of having to mount a bunch of information manually just for auth purposes, especially considering that the info is cluster-specific. RBAC could be applied to the ref'd objects for better security (and less cluttering?).
Of course, the example is just an initial thought for demonstration purposes.
Application-specific info is provided by the controller when setting up the plugin, never by the ClusterProfile. The ClusterProfile is not aware of the applications using it; it says which "plugin types" work with it.
Now, yes, the application will have to provide that data to the plugin. That will likely be done via volumes, ConfigMaps, env vars, etc. Given that the plugin is exec'ed, it has access to the same data as the controller itself (volumes/FS, environment, KSA...).
IMO, having application-specific information is part of what doomed the previous KEP on credentials, and we can't make that mistake again. It would require the ClusterProfile to know about all the applications that can use it and how they are set up. I don't see that approach working.
The ClusterProfile needs to provide all the cluster-specific information (endpoints and connectivity, as well as plugin types and pointers to the recommended authN mechanism).
Let me do another pass at the KEP to provide a new ClusterProfile example as well as a controller example.
FWIW, for GKE, it would leverage the federated identity, which can be obtained just by calling the metadata server, so the ClusterProfile doesn't need to provide us with anything.
Having the application-specific info via object refs could help simplify the process, but I would understand if it's something we want to leave for the application itself to set up, i.e., taking an opinionated approach on the API part and making sure that no sensitive info can be added. To me personally, the reason the last push wasn't successful is mostly that we were trying to encourage a practice (passing tokens across cluster boundaries) that has security concerns (even though it is commonly used) 😞.
Looking forward to seeing the new pass. It would be really, really nice if we could:
- On the API level, use neutral names so that no limits are implicitly imposed on how the flow is implemented (for the KEP, and the current efforts in the WG, I totally understand that we are taking an exec-plugin-focused approach, but on the API side we probably shouldn't enforce this).
- Have a field that allows placement of arbitrary cluster-specific authentication info, as different authn workflows might have different requirements (though some fields might be very commonly used). I fear that not all platforms would handle the flow as GKE does (metadata server), especially those with a hybrid environment.
I do think we should enforce exec for now, at least for the first pass. Then we can see how people react and bake in some flows, or allow people to bake in their own flow.
Regarding the other flows, that's the beauty of it: the ClusterManager could write multiple flows for the same ClusterProfile. Also, the plugin type allows picking different binaries, if needed, for the same set of config.
The plugin is designated by the ClusterProfile, which specifies the type of provider expected to be called. The ClusterProfile has a property to designate the plugin.

Property: `multicluster.sigs.io/execprovider-name`
Hi corentone! I assume that here we implicitly require that one cluster could have only one provider?
Yes, but the controller can decide which plugin to use. Which also means it could use a different binary for a given provider name.
For example, the cluster could be GKE, but as a controller I could use the official-gke-fleet plugin, the official-gke-as-user plugin, or I could write my own gke-custom-plugin mechanism.
Any cluster seen as GKE would use the chosen plugin.
I am wondering if we should use more neutral names for the flow, since the provider might not necessarily be exec-plugin based. It also feels a bit like the feature might warrant its own field in the API rather than living as an extensible property...
I do think it should be only exec-based; that way we have a single path for now and can start out of tree directly.
keps/sig-multicluster/5339-clusterprofile-plugin-credentials/README.md (outdated review threads, resolved)
It's not clear to me how the proposed library retrieves configuration for the exec credential plugins it is supposed to use. Is the expectation that the plugins are configured in the controller's default kubeconfig?
There are likely to be subtle differences for each controller (e.g. client ID/secret in OIDC), but the base would be the same.
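To sketch the "same base, per-controller overrides" point above: the `ExecConfig` type below mirrors a subset of `k8s.io/client-go/tools/clientcmd/api.ExecConfig` (redefined to stay self-contained), and the variable names (`ISSUER_URL`, `CLIENT_ID`) are illustrative assumptions of controller-specific OIDC settings layered on a shared base.

```go
package main

import "fmt"

// ExecConfig mirrors a subset of clientcmd's ExecConfig: the command to run
// and the environment handed to the exec credential plugin.
type ExecConfig struct {
	Command string
	Args    []string
	Env     map[string]string
}

// withEnvOverrides layers controller-specific settings (e.g. an OIDC client
// ID) on top of a shared base plugin configuration.
func withEnvOverrides(base ExecConfig, overrides map[string]string) ExecConfig {
	merged := ExecConfig{Command: base.Command, Env: map[string]string{}}
	merged.Args = append(merged.Args, base.Args...)
	for k, v := range base.Env {
		merged.Env[k] = v
	}
	for k, v := range overrides {
		merged.Env[k] = v
	}
	return merged
}

func main() {
	base := ExecConfig{Command: "oidc-login", Env: map[string]string{"ISSUER_URL": "https://issuer.example"}}
	cfg := withEnvOverrides(base, map[string]string{"CLIENT_ID": "controller-a"})
	fmt.Println(cfg.Env["ISSUER_URL"], cfg.Env["CLIENT_ID"])
}
```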
to attach a different binary name or path for the binary. It is expected that the library will have a mapping from its supported plugin names to the expected binary to call, and that the mapping is provided via a flag on the controller.
Wouldn't mapping binaries make this vendor/implementation specific? It's probably not useful to just hardcode the binary if you don't know how it's configured, right? I.e., hardcoding a binary only specifies the `Command` part of the `ExecConfig`.
I added a clarification distinguishing the string from the plugin.
The string that maps to a plugin is the "credentials type"; the controller then has to set up a plugin for that credentials type.
A credentials type could be `google`, `azure`, or `direct`, and the controller would have to plug in the right plugin (there may be multiple that work for a given credentials type).
For Google, I'm guessing one could use workload identity and therefore a plugin called `google-with-wi`, which would read local files/env to use WI, which is a Google credential. That same Google identity could map to a `google-user` plugin, which would use an interactive login?
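The controller-side mapping flag could be as simple as the sketch below: one comma-separated value mapping each credentials type to the plugin binary the operator chose. The flag syntax and plugin names are illustrative assumptions, not something the KEP defines.

```go
package main

import (
	"fmt"
	"strings"
)

// parsePluginMapping parses a hypothetical flag value of the form
// "type1=binary1,type2=binary2" into a credentials-type -> binary map.
func parsePluginMapping(flagValue string) map[string]string {
	mapping := map[string]string{}
	for _, pair := range strings.Split(flagValue, ",") {
		if kv := strings.SplitN(pair, "=", 2); len(kv) == 2 {
			mapping[strings.TrimSpace(kv[0])] = strings.TrimSpace(kv[1])
		}
	}
	return mapping
}

func main() {
	// e.g. --credentials-plugins=google=google-with-wi,azure=azure-cli
	m := parsePluginMapping("google=google-with-wi,azure=azure-cli")
	fmt.Println(m["google"]) // google-with-wi
}
```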
I closed discussions that I think I addressed. Please reopen if you feel I didn't address them, or open a new comment if you have new questions following the changes. Thanks!
CC @zhujian7 if you want to take a look. Thanks. POC work here: kubernetes-sigs/cluster-inventory-api#17
Triage notes: Still looking for more feedback as of this point; continue the conversation on existing comments. @corentone intends to look into a POC. This is close to being something we want hard lgtms and approves on, so please review it with that energy in mind.
I took a step back to think about it outside of what's suggested in this KEP.
I'm curious: what would be an example of a controller that should run outside its target cluster?
Today, we usually use a kubeconfig to represent client settings to connect to a cluster, though in this case we cannot use any kind of credential references, because there shall be no shared creds, so using its hollowed-out form as you suggest here seems correct.
The part I do not like too much, though, is that we're referencing plugins that we do not define at all: neither their configuration, nor their delivery into the end pods.
It is very likely that a ClusterManager knows both the spoke cluster's endpoint information and the proper way to authenticate to it from the outside. It would feel natural for the ClusterManager to play a role in the credential exchange for these spoke clusters, be it as a mediator (providing an endpoint to exchange a controller's identity for the target cluster's creds using a defined API) or as the authoritative source for remote-cluster authentication configuration (publishing it in the ClusterProfile in some shape or form), including instructions for retrieving the exec plugin binary (e.g. image name and mount path for Image Volumes).
As per one of the discussions in this PR, the latter suggestion, which would still use the exec creds plugins, might not work too well, since delivering the binary into the pods might be tricky.
I'm not sure what you mean.
This is the most common case. For example, we would have a bunch of ClusterProfiles in a hub cluster, and the central controller on the hub cluster would write resources to the target clusters... which it doesn't run on. Hence, to achieve that, it needs credentials.
/lgtm
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: bsalamat, corentone. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
```go
// InsecureSkipTLSVerify skips the validity check for the server's certificate.
// This will make your HTTPS connections insecure.
// +optional
InsecureSkipTLSVerify bool `json:"insecure-skip-tls-verify,omitempty"`
// CAData contains PEM-encoded certificate authority certificates.
// If empty, system roots should be used.
// +listType=atomic
```
We cannot put this into a CRD because of the listType setting here.
Why?
```go
// Example:
//
// +listType=atomic
//
// Atomic lists will be entirely replaced when updated. This tag may be
// used on any type of list (struct, scalar, ...).
//
// Using this tag will generate the following OpenAPI extension:
//
//	"x-kubernetes-list-type": "atomic"
```
What's wrong with this? I think it applies only to the `CertificateAuthorityData` field.
The field is a `[]byte`, not a list or scalar. CRD generation will fail with the error `must apply listType to an array, found string`.
Ah, bummer. Maybe we could check with the original library; this may be a miss on their end?
See the comment in the original file:
> To ensure that this struct contains everything someone would need to communicate
> with a kubernetes cluster (just like they would via a kubeconfig), the fields
> should shadow "k8s.io/client-go/tools/clientcmd/api/v1".Cluster, with the exception
> of CertificateAuthority, since CA data will always be passed to the plugin as bytes.

The data type exported to clients is https://github.com/kubernetes/kubernetes/blob/a34c07971b610eb33908a743eadb4c61beeecc50/staging/src/k8s.io/client-go/pkg/apis/clientauthentication/v1/types.go#L78, which doesn't have `CertificateAuthorityData`.
https://github.com/kubernetes/client-go/blob/master/tools/clientcmd/api/types.go#L69-L106
Then we should use this one :) Hopefully it works for CRDs.
```go
// +listType=map
// +listMapKey=name
credentialProviders []CredentialsConfig // mapping of credentials types to their config. In some cases the cluster may recognize different identity types, which may have different endpoints or TLS config.
```
Since the ClusterProfile spec has a clusterManager field, a ClusterProfile can only be owned by a single ClusterManager. Do we still need this to be a list here, such that the controller has to select one of the configs from it? Or should it be the ClusterManager's job to provide a single config?
I think even with a single ClusterManager, there could be multiple paths to reach the cluster.
For example, a GKE cluster could be accessed via GKE Connect Gateway (the fleet endpoint) or via GKFE (the normal endpoint). Another example would be a cluster that is accessible from on-prem (but requires a proxy and special login, etc.) and via another path through a proxy that does special checks. Also, in case of transitions, a platform admin may want to offer alternative endpoints and a way to slowly migrate.
In general, it doesn't cost us much to support a list, and I think it adds flexibility.
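A controller consuming such a list could select an entry by preference order (e.g. prefer the fleet endpoint, fall back to the direct one). Here is a minimal sketch; `CredentialsConfig` is a stand-in for the KEP's type, and the provider names are illustrative assumptions.

```go
package main

import "fmt"

// CredentialsConfig is a stand-in for the KEP's credentialProviders entry:
// a named way to reach (and authenticate to) the cluster.
type CredentialsConfig struct {
	Name   string
	Server string
}

// pickProvider returns the first provider from the preference list that is
// present in the ClusterProfile's credentialProviders.
func pickProvider(providers []CredentialsConfig, preference []string) (CredentialsConfig, bool) {
	byName := map[string]CredentialsConfig{}
	for _, p := range providers {
		byName[p.Name] = p
	}
	for _, name := range preference {
		if p, ok := byName[name]; ok {
			return p, true
		}
	}
	return CredentialsConfig{}, false
}

func main() {
	providers := []CredentialsConfig{
		{Name: "gke-direct", Server: "https://10.0.0.1"},
		{Name: "gke-connect-gateway", Server: "https://gateway.example"},
	}
	p, _ := pickProvider(providers, []string{"gke-connect-gateway", "gke-direct"})
	fmt.Println(p.Name) // gke-connect-gateway
}
```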
The library implementation flow is expected to be as follows:

1. Build the endpoint details of the cluster by reading properties of the ClusterProfile
2. Call the CredentialsExternalProviders, following the same flow defined in [KEP 541](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/541-external-credential-providers/README.md)
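The two steps above could be sketched with stand-in types as follows. `RestConfig` loosely mirrors `k8s.io/client-go/rest.Config`, and the property keys are assumptions for illustration; invocation of the exec provider itself is left to the client machinery per KEP 541.

```go
package main

import "fmt"

// RestConfig is a stand-in loosely mirroring client-go's rest.Config.
type RestConfig struct {
	Host        string
	CAData      []byte
	ExecCommand string
}

// BuildConfigFromClusterProfile sketches the library flow: step 1 reads
// endpoint details from ClusterProfile properties; step 2 attaches the exec
// credential provider (KEP 541) chosen for this cluster.
func BuildConfigFromClusterProfile(props map[string]string, pluginBinary string) RestConfig {
	return RestConfig{
		Host:        props["server"], // step 1: endpoint details from properties
		CAData:      []byte(props["caData"]),
		ExecCommand: pluginBinary, // step 2: exec credential provider to call
	}
}

func main() {
	cfg := BuildConfigFromClusterProfile(
		map[string]string{"server": "https://10.0.0.1"}, "google-with-wi")
	fmt.Println(cfg.Host, cfg.ExecCommand)
}
```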
Similar question as @stlaz: since kubectl has a built-in exec mechanism, I wonder if it's enough to just return the exec config in a kubeconfig file? The library basically runs the code from the auth package; I wonder if there is any advantage to this approach?
New changes are detected. LGTM label has been removed.
* Provide a library for controllers to obtain credentials for a cluster represented by a ClusterProfile
* Allow cluster managers to provide a method to obtain credentials that doesn't require being embedded into the controller code and recompiling.
I wonder how the controller can call the library without "recompiling"? Aka, who is going to call BuildConfigFromClusterProfile?
Also, I wonder if this is actually a goal or just good to have.
Imagine you are a controller writer and you want to share your controller, for example ArgoCD or MultiKueue.
You don't want to compile the controller for ALL providers. You want to ship a single Docker image (or binary version/package).
If recompilation were required, any divergence in credentials providers would require forking, editing, and rebuilding any controller you want to use. I know that, for me, I would definitely not want to have to recompile ArgoCD to install it on my GKE fleet. Nor do I want to make PRs to add GKE support to ArgoCD.
/retest
/retest
@corentone: Cannot trigger testing until a trusted user reviews the PR and leaves an `/ok-to-test` message. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/ok-to-test