ApplySet: kubectl apply --prune redesign and graduation strategy #3659
Comments
/assign @KnVerey
Hello @KnVerey 👋, Enhancements team here. Just checking in as we approach enhancements freeze on 18:00 PDT Thursday 9th February 2023. This enhancement is targeting stage alpha for v1.27. Here's where this enhancement currently stands:
For this enhancement, it looks like #3661 will address most of these requirements.
The status of this enhancement is marked as at risk for now.
This enhancement now meets all of the requirements to be tracked in v1.27.
Docs placeholder PR: kubernetes/website#39818
Hi @KnVerey 👋, Checking in as we approach 1.27 code freeze at 17:00 PDT on Tuesday 14th March 2023. Please ensure the following items are completed:
Please let me know if there are any PRs in k/k I should be tracking for this KEP. As always, we are here to help should questions come up. Thanks!
Hi @KnVerey, I’m reaching out from the 1.27 Release Docs team. This enhancement is marked as ‘Needs Docs’ for the 1.27 release. Please follow the steps detailed in the documentation to open a PR against the dev-1.27 branch in the k/website repo. This PR can be just a placeholder at this time, and must be created by March 16. For more information, please take a look at Documenting for a release to familiarize yourself with the documentation requirements for the release. Please feel free to reach out with any questions. Thanks!
Unfortunately the implementation PRs associated with this enhancement have not merged by code freeze, so this enhancement is getting removed from the release. If you would like to file an exception, please see https://github.com/kubernetes/sig-release/blob/master/releases/EXCEPTIONS.md

/milestone clear
Hi @marosset, they did make the release actually! You can see them here: https://github.com/orgs/kubernetes/projects/128/views/2. I will update the issue description. The only feature we were originally targeting as part of the first alpha that did not make it was
/milestonve v1.27
@KnVerey I added this issue back into v1.27.
BTW, nearly all the labels we register are using subdomains of a common parent domain. If you want to make life easier for end users, get an exception in to change the labels before beta (ideally, before the alpha release). I know it's a bit late, but it looks like we missed that detail in earlier reviews. See https://kubernetes.io/docs/reference/labels-annotations-taints/ for the list of registered keys that we use for labels and annotations.
/milestone v1.27 (there was a typo in the last attempt to apply this)
@KnVerey is there a way I can contribute to this?
Yes, we'll have plenty of work to do on this for v1.28! Some of it still needs to be defined through KEP updates before it can be started, though. Please reach out in the sig-cli channel on Kubernetes Slack.
/assign @justinsb
Hi! I'm looking to use applysets and struggling to understand how to use them at the cluster scope. The KEP seems to suggest this should work:

```shell
kubectl apply -n myapp --prune --applyset=namespaces/myapp -f .
```

My use case is that I apply a big List:

```yaml
apiVersion: v1
kind: List
items:
  - apiVersion: v1
    kind: ConfigMap
    data: {}
  - ...
```

I get this error:

```console
$ /usr/local/bin/kubectl --kubeconfig= --cluster= --context= --user= apply --server-side --applyset=automata --prune -f -
error: namespace is required to use namespace-scoped ApplySet
```
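For reference, the namespaced flow that error is asking for looks roughly like the sketch below. It assumes kubectl v1.27+ with the alpha gate enabled and uses a hypothetical parent Secret named `myapp` (kubectl can create Secret and ConfigMap parents itself); the cluster-scoped case is what the following comments work through.

```shell
# ApplySet pruning is alpha and gated behind this environment variable.
export KUBECTL_APPLYSET=true

# Namespace-scoped parent: with a plain name, kubectl uses (and creates) a
# Secret called "myapp" in the target namespace as the ApplySet parent.
kubectl apply -n myapp --server-side --applyset=myapp --prune -f .
```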
So, I ended up making a custom resource specifically for the ApplySet, but actually getting it to work is tricky.

**kubectl can't create the custom resource**

So, unlike with ConfigMaps and Secrets, kubectl cannot create the custom resource:

```
error: custom resource ApplySet parents cannot be created automatically
```

**Missing tooling annotation**

The annotation applyset.kubernetes.io/tooling also has to be added by hand:

```
error: ApplySet parent object "applysets.starjunk.net/automata" already exists and is missing required annotation "applyset.kubernetes.io/tooling"
```

**Missing ApplySet ID label**

So, now I have to replicate this by hand?... Sure, here's a go.dev/play.

```
error: ApplySet parent object "applysets.starjunk.net/automata" exists and does not have required label applyset.kubernetes.io/id
```

**Missing contains-group-resources annotation**

The value of this annotation will be tedious to replicate by hand. Fortunately, it can be blank.

```
error: parsing ApplySet annotation on "applysets.starjunk.net/automata": kubectl requires the "applyset.kubernetes.io/contains-group-resources" annotation to be set on all ApplySet parent objects
```

**Server-side conflicts**

It looks like because I had to create those fields manually, and I did so with server-side apply, there are now conflicts which need to be resolved. The fix is to defer management of those fields to kubectl, see here.

```
error: Apply failed with 1 conflict: conflict with "kubectl-applyset": .metadata.annotations.applyset.kubernetes.io/tooling
statefulset.apps/vault serverside-applied
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
  current managers.
* You may co-own fields by updating your manifest to match the existing
  value; in this case, you'll become the manager if the other manager(s)
  stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
```

After that was all said and done, it looks like this now works as expected! https://github.com/uhthomas/automata/actions/runs/4942497931

I really hope my feedback is helpful. Let me know if there's anything I can do to help.
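For anyone hitting the same conflict: per the SSA docs linked above, one way to hand those manually-set fields over to kubectl is a one-off apply with `--force-conflicts`, after which `kubectl-applyset` becomes the field manager. A minimal sketch, reusing the parent from the errors above and assuming manifests are read from the current directory:

```shell
# One-off: force server-side apply so kubectl takes ownership of the
# applyset.kubernetes.io/* annotations and labels from whoever set them.
KUBECTL_APPLYSET=true kubectl apply --server-side --force-conflicts \
  --applyset=applysets.starjunk.net/automata --prune -f .
```

Subsequent applies without `--force-conflicts` should then no longer report the conflict.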
Also, not sure if it's relevant, but there are lots of warnings of throttling.
This may also be worth thinking about: spotahome/redis-operator#592. In some cases, it can lead to data loss. I'm not sure if this is any worse than the original implementation of prune, to be fair.
I think the examples list namespaces as potential ApplySet parents, but at the moment the tooling doesn't allow that: it errors, saying it isn't allowed. Mainly I thought this might be a natural place for a very declarative approach, e.g. the ApplySet covers the entire namespace, and you add to the ApplySet to add more resources. Also, while I completely understand and agree with 'an ApplySet should only change one namespace', in practice this makes it a bit tricky, as common tools do seem to span multiple namespaces quite often, e.g. cert-manager/cert-manager#5471. For cert-manager I usually patch it to not affect kube-system, but it gets confusing quickly :). So from the above, I've pretty quickly hit the 'now I have to create my own CRD' point, to get the additional-namespaces capability. It also appears that a namespaced parent (e.g. a Secret) can't span multiple namespaces, so if you do need to change two namespaces, you need a cluster-scoped resource anyway. Despite some understandable alpha hiccups, it's actually pretty usable, though! I'd say the best UX at the moment is to use it heavily with Kustomize (a sketch of that workflow follows below), so you can wrangle other software into working with it.
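To make that Kustomize-based workflow concrete, here is a rough sketch; it assumes a kustomization that renders everything into a single namespace and a hypothetical Secret parent named `myapp`:

```shell
# Render the whole app with kustomize, then apply and prune against one ApplySet parent.
kustomize build . | KUBECTL_APPLYSET=true kubectl apply \
  -n myapp --server-side --applyset=myapp --prune -f -
```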
@btrepp @uhthomas I would like to transition to ApplySets, but face the namespace problem - you seem to have created custom CRDs which could be used as an ApplySet parent. Unfortunately I couldn't find the respective resources. Do you or someone else know of plug-and-play ApplySet CRDs which can be used for seamless cluster-wide pruning?
@schlichtanders I believe this comment should have everything you need? Let me know if there's more I can do to help.
@uhthomas, I found this commit by you which seems to suggest that you were able to simplify the setup by using some kubectl commands. Unfortunately I couldn't find the corresponding commands. Can you help?
@schlichtanders To be clear, there are no kubectl commands which simplify this setup. You must create a CRD and custom resource as explained in my other comment. You then must follow what I've written to create the appropriate annotations and labels for the custom resource, which can be removed later as kubectl will take over. The only command which is run for all of this is
Thank you Thomas for the clarification 🙏 I now compiled my own applyset.yaml:

```yaml
# for details on the annotations see https://kubernetes.io/docs/reference/labels-annotations-taints/
# the applyset.kubernetes.io/id depends on the group; kubectl will complain and show you the correct id to use anyway
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
  name: "applysets.jolin.io"
  labels:
    applyset.kubernetes.io/is-parent-type: "true"
spec:
  group: "jolin.io"
  names:
    kind: "ApplySet"
    plural: "applysets"
  scope: Cluster
  versions:
    - name: "v1"
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: "object"
---
apiVersion: "jolin.io/v1"
kind: "ApplySet"
metadata:
  name: "applyset"
  annotations:
    applyset.kubernetes.io/tooling: "kubectl/1.28"
    applyset.kubernetes.io/contains-group-resources: ""
  labels:
    applyset.kubernetes.io/id: "applyset-TFtfhJJK3oDKzE2aMUXgFU1UcLI0RI8PoIyJf5F_kuI-v1"
```

I need to deploy the above yaml first:

```shell
kubectl apply --server-side --force-conflicts -f applyset.yaml
```

and can then run kubectl with the applyset, similar to how you mentioned:

```shell
KUBECTL_APPLYSET=true kubectl apply --server-side --force-conflicts --applyset=applyset.jolin.io/applyset --prune -f my-k8s-deployment.yaml
```

Seems to work so far 🥳

Note: For more up-to-date information on all the annotations, see https://kubernetes.io/docs/reference/labels-annotations-taints/
Glad you were able to get it working. I also mentioned this in my original comment, but the ID is generated here and can be generated in-browser with this program I wrote. Good to know it tells you what it should be anyway, so I guess trial and error works too.
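For anyone who wants to generate the ID without the playground, here is a rough shell sketch of my understanding of the v1 format (prefix applyset-, suffix -v1, unpadded base64url of the SHA-256 of <name>.<namespace>.<kind>.<group>, as described in the labels/annotations registry linked above); treat it as an approximation and cross-check against the value kubectl prints in its error message:

```shell
# Parent identity for a cluster-scoped custom resource (namespace left empty).
name="applyset"; namespace=""; kind="ApplySet"; group="jolin.io"

# sha256 over "<name>.<namespace>.<kind>.<group>", then unpadded base64url.
hash=$(printf '%s' "${name}.${namespace}.${kind}.${group}" \
  | openssl dgst -sha256 -binary \
  | openssl base64 -A \
  | tr '+/' '-_' | tr -d '=')

echo "applyset-${hash}-v1"   # value for the applyset.kubernetes.io/id label
```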
Enhancement Description

- One-line enhancement description: Redesign of kubectl apply's pruning feature, which enables cleanup of previously applied objects that have been removed from the desired state provided to the current apply.
- KEP (k/enhancements) update PR(s):
- Code (k/k) update PR(s):
- Docs (k/website) update PR(s):

Please keep this description up to date. This will help the Enhancement Team to track the evolution of the enhancement efficiently.
/sig cli