
[Kubernetes Secret Engine] allow specifying a namespace selector for the allowed namespace in role. #16222

Closed
Tracked by #10
raffaelespazzoli opened this issue Jul 3, 2022 · 9 comments


@raffaelespazzoli

Is your feature request related to a problem? Please describe.
When creating a Kubernetes secret engine role, one must specify which namespaces should be allowed (https://www.vaultproject.io/api-docs/secret/kubernetes#allowed_kubernetes_namespaces). Namespaces come and go as teams do their work, so on a very dynamic cluster one would have to constantly update the list of allowed namespaces.

Describe the solution you'd like
Allow specifying a namespace selector instead of an array of namespaces. This would select a dynamic set of namespaces based on their labels, which is the standard Kubernetes approach to this kind of problem.

Describe alternatives you've considered
As of now, because constantly updating the same configuration is untenable, one would probably end up creating a role for each namespace.

Explain any additional use-cases

Additional context
Here is an overview of labels and selectors: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
Here is what a label selector looks like from an API perspective: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta
There are Go utility functions to quickly check whether an object is selected by a label selector.
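
For illustration, a minimal Go sketch (the selector value and labels here are made up) of checking whether a namespace's labels satisfy a selector, using the k8s.io/apimachinery helpers mentioned above:

```go
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	// A selector as it might be attached to a role (values are hypothetical).
	raw := `{"matchLabels": {"team": "payments"},
	         "matchExpressions": [{"key": "env", "operator": "In", "values": ["dev", "staging"]}]}`

	var sel metav1.LabelSelector
	if err := json.Unmarshal([]byte(raw), &sel); err != nil {
		panic(err)
	}

	// LabelSelectorAsSelector converts the API type into a labels.Selector
	// that can be evaluated against a label set.
	selector, err := metav1.LabelSelectorAsSelector(&sel)
	if err != nil {
		panic(err)
	}

	// Labels as they might appear on a namespace object.
	nsLabels := labels.Set{"team": "payments", "env": "dev"}
	fmt.Println(selector.Matches(nsLabels)) // true
}
```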

@f4z3r
Contributor

f4z3r commented Jul 7, 2022

I am quite interested in looking at this. What needs to be achieved in terms of logic seems relatively straightforward. What I am unsure about is how this can be solved elegantly in the secret engine configuration. Currently there is only a string array allowed_kubernetes_namespaces that can be configured. With the label selector we would need to support a full map, and define what happens if a user sets both the selector and the standard array (or decide to reject that combination in validation).

Do you already have an idea of how this should ideally look?

@f4z3r
Contributor

f4z3r commented Jul 7, 2022

OK, so I will start implementing this. I think I will go with the following principles:

  • Introduce a new parameter called allowed_kubernetes_namespace_selector.
  • The new parameter behaves similarly to generated_role_rules, in that it is of type string and one can pass either a YAML or JSON LabelSelector.
  • The two parameters allowed_kubernetes_namespaces and allowed_kubernetes_namespace_selector are mutually exclusive, hence a request setting both should return an error.
  • The selector (if configured) will be evaluated against a namespace every time a credential for said role is requested. This will involve an additional API call against Kubernetes, as the labels of the namespace for which the credentials are requested are not known to Vault by default (and can change at any time). This has a couple of impacts:
    • Performance. This should be a relatively minor impact, but the additional API call on credential requests introduces latency.
    • Permissions. As Vault now needs access to the labels of namespaces, it requires read permissions on namespaces. This is not a critical permission, so it shouldn't be problematic, but documentation will need to be updated.

If you have any feedback on this, please let me know. I will start the implementation on the plugin with these principles in mind, but it should not be much work to make changes afterwards.
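
To make the extra API call and the permission requirement concrete, here is a rough sketch of how the per-request check could look, assuming client-go is used; function and variable names are illustrative, not the plugin's actual code:

```go
package rolecheck

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/kubernetes"
)

// namespaceAllowedBySelector fetches the namespace (the additional API call
// mentioned above, which needs "get" on namespaces) and evaluates the role's
// selector against its labels.
func namespaceAllowedBySelector(ctx context.Context, client kubernetes.Interface, namespace string, sel *metav1.LabelSelector) (bool, error) {
	ns, err := client.CoreV1().Namespaces().Get(ctx, namespace, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	selector, err := metav1.LabelSelectorAsSelector(sel)
	if err != nil {
		return false, err
	}
	return selector.Matches(labels.Set(ns.Labels)), nil
}
```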

f4z3r added a commit to f4z3r/vault-plugin-secrets-kubernetes that referenced this issue Jul 7, 2022
f4z3r added a commit to f4z3r/vault-plugin-secrets-kubernetes that referenced this issue Jul 7, 2022
@f4z3r
Contributor

f4z3r commented Jul 7, 2022

The more I work on this, the more I feel it might make sense to allow users to set both allowed_kubernetes_namespaces and allowed_kubernetes_namespace_selector, essentially ANDing the values if both are set. That way, I would argue, the only illegal configurations would be to set neither, or to set only allowed_kubernetes_namespaces with an empty array.

I feel this would give users the most flexibility, does not break backwards compatibility, and also makes some parts of the code a little cleaner in terms of behaviour (e.g. what happens on an update? Can a user "clear" allowed_kubernetes_namespaces by setting it to an empty array and set allowed_kubernetes_namespace_selector instead? These questions quickly make the behaviour a little dodgy if only one of the two may be set, IMO).
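
As a rough sketch of that validation rule (names are illustrative): a role would only be rejected when it could not allow any namespace at all.

```go
package rolevalidate

import "errors"

// validateRole implements the rule sketched above: the only illegal
// configuration is one that sets neither parameter (an empty namespace
// list and no selector).
func validateRole(allowedNamespaces []string, namespaceSelector string) error {
	if len(allowedNamespaces) == 0 && namespaceSelector == "" {
		return errors.New("one of allowed_kubernetes_namespaces or allowed_kubernetes_namespace_selector must be set")
	}
	return nil
}
```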

f4z3r added a commit to f4z3r/vault-plugin-secrets-kubernetes that referenced this issue Jul 7, 2022
…rameters can be supplied on roles, add integration tests for allowed_kubernetes_namespace_selector
f4z3r added a commit to f4z3r/vault that referenced this issue Jul 7, 2022
@raffaelespazzoli
Author

I was on PTO. Thanks for picking this up so quickly. I would have made very similar choices to yours; the only thing I would change is making allowed_kubernetes_namespaces and allowed_kubernetes_namespace_selector mutually exclusive rather than combining them in OR. It's just a matter of style.

Because a namespace selector essentially allows expressing a superset of the conditions you can express with allowed_kubernetes_namespaces, one might also consider using only allowed_kubernetes_namespace_selector in the API. But that would involve an API change.
To offset the performance hit from the additional call, you could create a cached watcher for namespaces (as regular operators do).
Finally, this change could also be propagated to the Kubernetes auth engine.
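
For reference, a minimal sketch of the cached-watcher idea using a client-go shared informer (names are illustrative; note this trades the per-request get for list and watch permissions on namespaces):

```go
package nswatch

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	listerscorev1 "k8s.io/client-go/listers/core/v1"
)

// startNamespaceLister starts a shared informer for namespaces so that
// labels can be read from a local cache instead of a per-request API call.
func startNamespaceLister(client kubernetes.Interface, stopCh <-chan struct{}) listerscorev1.NamespaceLister {
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	lister := factory.Core().V1().Namespaces().Lister()
	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh)
	return lister
}
```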

@f4z3r
Contributor

f4z3r commented Jul 9, 2022

@raffaelespazzoli sure, no worries. Regarding the mutual exclusivity, I agree. The issue I ran into is how to handle it when updating the configuration.

For example, I would prohibit setting both parameters on initial creation of the role. Now suppose someone configures the role with the standard allowed_kubernetes_namespaces and, after a while, wants to use the selector instead. How do they clear the value of the allowed namespaces? At the moment an update merges configurations, so you can update a single parameter without having to provide all parameters again. I see three options here:

  1. We realise that this is an update rather than a create, clear the previously allowed namespaces, and set the new selector (instead of throwing an error that both are not allowed). While this is elegant in terms of use, it means that some parameter "clearing" happens implicitly in the background, which I am not a fan of.
  2. We force the user to provide both parameters on the update, with allowed_kubernetes_namespaces set to an empty array. This is more explicit, but I find it odd to say the parameters are exclusive, yet require both to be provided.
  3. Allow setting the selector directly in allowed_kubernetes_namespaces. That way we do not introduce a new parameter and sidestep the two issues above. I am not sure this is in line with Vault's general configuration style, and I would need to check how it could be validated (to allow YAML or JSON selectors as well as a comma-separated list of strings).

Let me know what you think. I will check the feasibility of the first two points next week and see how elegantly this can be solved.
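
To illustrate why the merge-on-update behaviour makes exclusivity awkward, here is a small self-contained sketch (types and names are made up, not the plugin's code):

```go
package main

import "fmt"

// role mirrors just the two fields under discussion.
type role struct {
	AllowedNamespaces []string
	NamespaceSelector string
}

// updateRole mimics the merge behaviour: only fields present in the request
// overwrite the stored role, so omitting allowed_kubernetes_namespaces does
// not clear it.
func updateRole(stored role, req map[string]interface{}) role {
	if v, ok := req["allowed_kubernetes_namespaces"]; ok {
		stored.AllowedNamespaces = v.([]string)
	}
	if v, ok := req["allowed_kubernetes_namespace_selector"]; ok {
		stored.NamespaceSelector = v.(string)
	}
	return stored
}

func main() {
	r := role{AllowedNamespaces: []string{"team-a"}}
	// Updating only the selector leaves the old namespace list in place,
	// which is exactly the ambiguity behind options 1 and 2 above.
	r = updateRole(r, map[string]interface{}{
		"allowed_kubernetes_namespace_selector": `{"matchLabels": {"team": "a"}}`,
	})
	fmt.Printf("%+v\n", r)
}
```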

For the performance, I would leave it like this for now unless it becomes an issue (which I doubt). I don't want to prematurely optimize the code, and using a watcher would require an additional list permission on namespaces, which might be problematic for some users.

Regarding the auth method, yes, I can have a look at it next week as well. I'm not yet sure exactly when, but I will definitely have time at some point.

@f4z3r
Contributor

f4z3r commented Jul 13, 2022

@raffaelespazzoli I thought about this a little more. I believe that in theory option 3 would be the most elegant: it would make the two mutually exclusive and would be elegant in terms of how updates to the role are handled. However, implementing it would change the typing of the API and would therefore not be backward compatible with typed clients, so I don't think it makes sense to implement.

I don't find the other two options very elegant.

Regarding your point:

a namespace selector essentially allows expressing a superset of the conditions you can express with allowed_kubernetes_namespaces

While this is true, I don't see why it points to mutual exclusivity. Would you care to explain this a little more? The way I see it, people can:

  • Set only allowed_kubernetes_namespaces with a list of namespaces or the wildcard. This needs to be preserved to avoid a breaking change.
  • Set only allowed_kubernetes_namespace_selector for more flexibility, as in your use case, where namespaces might be added after the role has been configured.
  • Set both. While I do see that the use case for setting both is much less prominent than the other two, I think it still makes sense, mostly because it maps one-to-one to how permissions can be granted to the Vault ServiceAccount. Vault might be granted permissions to handle ServiceAccounts, secrets, etc. in namespaces based on label selectors and on namespace names at the same time. If we did not allow setting both in the role, we could grant permissions to Vault at the Kubernetes level that cannot be expressed in the role configuration, which would be a shame to restrict. The only reason I could see for not allowing it would be if it introduced major complexity, which is not the case here.

Does my reasoning make sense to you? Or do you see this differently?
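
To make the "set both" case concrete, here is a hypothetical sketch, assuming the combination described earlier in the thread as "in OR" (mirroring the RBAC analogy above): a namespace is allowed if it is listed, the wildcard is used, or it matches the selector. Names are illustrative only.

```go
package roleallow

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// namespaceAllowed returns true if the namespace is explicitly listed (or a
// wildcard is configured) or if its labels match the role's selector.
func namespaceAllowed(namespace string, nsLabels map[string]string, allowed []string, sel *metav1.LabelSelector) (bool, error) {
	for _, n := range allowed {
		if n == "*" || n == namespace {
			return true, nil
		}
	}
	if sel == nil {
		return false, nil
	}
	selector, err := metav1.LabelSelectorAsSelector(sel)
	if err != nil {
		return false, err
	}
	return selector.Matches(labels.Set(nsLabels)), nil
}
```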

f4z3r added a commit to f4z3r/vault that referenced this issue Jul 13, 2022
…setting both allowed_kubernetes_namespaces and allowed_kubernetes_namespace_selector parameters for role configuration
@f4z3r
Contributor

f4z3r commented Jul 13, 2022

Regarding the change to the auth engine, I will open an issue referencing this one on the plugin project, but I will not implement it yet. The auth plugin does not use the standard Kubernetes client, and fetching namespace labels would require relatively large code changes if we don't want to duplicate too much. I will therefore wait and see what the decision is here, then implement it once it is clear which way we are going.

@raffaelespazzoli
Author

I'm good with the design you are proposing, @f4z3r.

heatherezell pushed a commit that referenced this issue Jul 19, 2022
… LabelSelector (#16240)

* docs(#16222): add documentation for changes in PR hashicorp/vault-plugin-secrets-kubernetes#10

* docs(#16222): add changelog entry

* docs(#16222): improve documentation to make the use case of setting both allowed_kubernetes_namespaces and allowed_kubernetes_namespace_selector parameters for role configuration
f4z3r added a commit to f4z3r/vault-plugin-secrets-kubernetes that referenced this issue Jul 22, 2022
f4z3r added a commit to f4z3r/vault-plugin-secrets-kubernetes that referenced this issue Jul 22, 2022
f4z3r added a commit to f4z3r/vault-plugin-secrets-kubernetes that referenced this issue Jul 22, 2022
…rameters can be supplied on roles, add integration tests for allowed_kubernetes_namespace_selector
@tvoran
Member

tvoran commented Sep 16, 2022

Thanks for the discussion here, folks; this will be included in Vault 1.12 via hashicorp/vault-plugin-secrets-kubernetes#10.
