allow specifying additional/default labels via command line #938
This change adds the ability to set additional labels, or provide default values for them, via the command line on deployment commands. This also works for ejson secrets; only the `ejson-keys` shared secret will not be labeled. The functionality was not made available on `krane render` due to potentially confusing behaviour around labels on secrets when using `krane render … | krane deploy -f secrets.ejson -f -`.

Letting labels specified in the templates take precedence is an intentional choice: it is the more flexible approach and allows customization for edge cases like migrations and "nested" deployments.

See #682.
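The precedence rule can be sketched roughly like this (a minimal illustration; the method and variable names here are invented for the example, not krane's actual internals):

```ruby
# Merge command-line labels into a resource definition, letting labels
# already present in the template win. Hypothetical sketch, not krane code.
def apply_extra_labels(definition, extra_labels)
  # Templates may have no .metadata.labels (or even no metadata) at all,
  # so guard against nil on that side.
  existing = definition.dig("metadata", "labels") || {}
  definition["metadata"] ||= {}
  # Hash#merge prefers keys from its argument, so template labels
  # override the command-line defaults on conflict.
  definition["metadata"]["labels"] = extra_labels.merge(existing)
  definition
end
```

Because `extra_labels.merge(existing)` keeps `existing` values on key conflicts, the command-line labels act purely as defaults.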
### What are you trying to accomplish with this PR?

Setting standard/well-known labels like `app.kubernetes.io/name` on all Kubernetes objects, but especially on Secrets created through the `secrets.ejson` handling.

### How is this accomplished?
A new top-level CLI parameter on `krane deploy` and `krane global-deploy` is passed through to the resource or ejson provider, where the labels are merged into the object definition.

### What could go wrong?
Edge cases around `nil` handling on the extra labels side, as well as around the `.metadata.labels` definition on the k8s object/resource side. In practice this shows up as a "failed to deploy" type of error, if and only if the resource is part of the deployment templates. This might happen again if new special handlers for resources are written that are not covered by the test in `test/unit/krane/kubernetes_resource/kubernetes_resource_test.rb`.

### Considered Alternatives
- `selector`: Its dual use for pruning makes it undesirable. If the `selector` and old k8s objects are removed in the same deployment run, the old objects will not be pruned. Expecting the user to remember this edge case is unrealistic, especially since it's unlikely to happen often.
- A `.metadata.labels`-like key in `secrets.ejson` (perhaps at `.kubernetes_secrets.*._labels`): I didn't test this, but it's probably possible. The reason we didn't go for it was our need to set labels on all objects, not just secrets. It would imply either a) the manual effort of adding labels to all object definitions, b) tooling to update these in source control in ERB files, or c) some extra tool that patches the `secrets.ejson` before passing it to `krane render … | krane deploy -f secrets.ejson -f -`. So while conceptually simpler in krane, it has limited utility in organically grown systems, or at least in ours. Hence it was discarded early on.
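For illustration, the rejected `secrets.ejson` variant might have looked like the snippet below. The `_labels` key is purely hypothetical and does not exist in krane, and the secret name and data are placeholders; only the surrounding structure follows krane's `secrets.ejson` layout:

```json
{
  "_public_key": "…",
  "kubernetes_secrets": {
    "my-secret": {
      "_type": "Opaque",
      "_labels": { "app.kubernetes.io/name": "my-app" },
      "data": { "api-token": "EJ[1:…]" }
    }
  }
}
```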
### On CLA

Your CLA flow claims someone already signed for my account, although we're no longer sure who. I also have recent permission from someone with signing power in my org. I therefore expect this to require at most some bureaucratic cleanup, not to be a blocker.