Server-side apply bumps resourceVersion when re-applying the same object #124605
(cc @fabriziopandini @chrischdi @alvaroaleman @vincepri, might be interesting for you)
/sig api-machinery
It looks less like a bug and more like intended behavior to me.
Bypassing the writes triggered by noop applies would break storage version migration (#123344), so we're committed to supporting this behavior as a feature. |
@cici37 @jpbetz I am confused. Are you saying it is intended behavior for a NOP SSA apply to increment the resourceVersion? This doesn't seem to happen in all cases, only for certain objects/edge cases (see #121404). For example, try this:
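For example, with a plain ConfigMap (a minimal sketch; the object and field manager names here are illustrative, not from the original comment):

# Server-side apply a ConfigMap and note its resourceVersion
cat <<'EOF' > /tmp/cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ssa-demo
data:
  foo: bar
EOF
kubectl apply --server-side --field-manager=demo -f /tmp/cm.yaml
kubectl get configmap ssa-demo -o jsonpath='{.metadata.resourceVersion}{"\n"}'

# Re-apply the identical manifest; the printed resourceVersion stays the same
kubectl apply --server-side --field-manager=demo -f /tmp/cm.yaml
kubectl get configmap ssa-demo -o jsonpath='{.metadata.resourceVersion}{"\n"}'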
(No resourceVersion is incremented above.) Surprisingly, I couldn't find any clear statement that SSA doesn't increment the resourceVersion, but https://kubernetes.io/blog/2022/10/20/advanced-server-side-apply/ does say "a no-op apply is not very much more work for the API server than an extra GET", which seems impossible if the resourceVersion is incremented.
https://github.com/kubernetes/kubernetes/pull/123344/files#r1506112409 is probably only updating it because either the fieldManager changes or the apiVersion in the patch body is different (or both, perhaps)?
We need this in storage version migration to make sure we correctly handle CRD schema changes. See Joe's comments here: https://github.com/kubernetes/kubernetes/pull/123344/files#r1508023986. If it helps to understand this better, we have tests that validate this: https://github.com/kubernetes/kubernetes/blob/master/test/integration/storageversionmigrator/storageversionmigrator_test.go
I may have misspoken. When the storage version changes, a NOP SSA apply will trigger a resourceVersion bump. If the storage version is unchanged and an update would be a NOP for etcd, I believe we short-circuit the write (here, I think?). This might explain the configMap case.
For the reported issue: we've seen cases like this before where the time gets bumped, causing a NOP to become a write. This could be improved. Note that this isn't technically a bug, because the API server is allowed to bump the resourceVersion for any write (the discussion about storage version changes causing this is a good example of where this can happen in a way that the caller wouldn't expect). But I'd like to see the etcd write optimized away for cases like this.
Sounds similar to #121404, perhaps, which had some investigation. FWIW, as a controller author my expectation is that repeatedly applying does not repeatedly trigger resourceVersion increments. It's fine if it happens rarely (storage version change, etc.), but if it happens consistently, that means our own writes re-trigger reconciliation and we have an infinite loop of writes. AFAIK all the marketing around writing SSA controllers has been that you just apply what you want and don't try to do a get+diff+apply. Even if we did want to do that, it doesn't really work well currently (#115563). If we can't rely on NOP writes (usually) being NOPs, I don't see how we are supposed to use SSA to write a controller at all?
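To make the failure mode concrete, here is a sketch (resource and file names are placeholders): if every identical apply bumps the resourceVersion, a watch-driven controller is re-triggered by its own write.

# Each identical apply bumps resourceVersion on the affected object, so a
# controller watching this resource would reconcile again after its own write
for i in 1 2 3; do
  kubectl apply --server-side --field-manager=test-manager -f cr_machine.yaml
  kubectl get machine -o jsonpath='{.items[0].metadata.resourceVersion}{"\n"}'
done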
This is a really good point. I agree.
#106388 looks like it was intended to fix this sort of problem. I'm looking into why it doesn't work for this case.
The relevant code is staging/src/k8s.io/apiserver/pkg/endpoints/handlers/fieldmanager/equality.go, line 55 at a9a1cdb: the ManagedFieldsEntry handling for the slow path equality check in IgnoreManagedFieldsTimestampsTransformer.
#125263 is a draft proposal for a fix. Feedback welcome on how to make this fix cleaner. I'll test carefully before moving it out of draft. |
What happened?
When continuously re-applying the same object via SSA (from a controller or kubectl), the resourceVersion increases every time.
What did you expect to happen?
I expected the resourceVersion to stay the same if the client always sends the same object via SSA, especially if the resulting object doesn't change (apart from resourceVersion & managedFields, but these are managed by the apiserver).
How can we reproduce it (as minimally and precisely as possible)?
1. Apply this CRD: crd_machines.yaml
2. Apply this CR: cr_machine.yaml
3. Get the state of the CR after the first apply:
   k get machine -o yaml --show-managed-fields > /tmp/machine-1.yaml
4. Apply the CR again.
5. Get the state of the CR after the second apply:
   k get machine -o yaml --show-managed-fields > /tmp/machine-2.yaml
6. Diff the two files.
=> The resourceVersion and the "time" field of the "test-manager" managedFields entry both increase.
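For reference, the full sequence looks roughly like this (a sketch: the exact apply invocations aren't shown above, and the field manager name test-manager is inferred from the managedFields diff):

kubectl apply --server-side --field-manager=test-manager -f crd_machines.yaml
kubectl apply --server-side --field-manager=test-manager -f cr_machine.yaml
kubectl get machine -o yaml --show-managed-fields > /tmp/machine-1.yaml

# Re-apply the identical CR and capture the state again
kubectl apply --server-side --field-manager=test-manager -f cr_machine.yaml
kubectl get machine -o yaml --show-managed-fields > /tmp/machine-2.yaml

# Shows the resourceVersion and the test-manager managedFields time changing
diff /tmp/machine-1.yaml /tmp/machine-2.yaml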
Anything else we need to know?
The use case we have is a lot more complex than this simple scenario. The tl;dr is that we want to implement caching for our SSA calls, so that we don't have SSA calls on every reconcile of our controller if nothing actually changes. This is relatively hard to do if the resourceVersion increases after every apply.
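A sketch of the kind of caching meant here (hypothetical; a real controller would do this in code rather than shell, and the object name is a placeholder): remember the resourceVersion observed after the last apply and skip the next apply while the object still has it. This only works if a NOP apply leaves the resourceVersion alone.

# Skip the apply when the object's resourceVersion is unchanged since our last write
last=$(cat /tmp/last-rv 2>/dev/null)
current=$(kubectl get machine machine-1 -o jsonpath='{.metadata.resourceVersion}')
if [ "$current" != "$last" ]; then
  kubectl apply --server-side --field-manager=test-manager -f cr_machine.yaml
  kubectl get machine machine-1 -o jsonpath='{.metadata.resourceVersion}' > /tmp/last-rv
fi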
I took a closer look at the apiserver, and roughly the following happens: infrastructureRef has x-kubernetes-map-type: atomic, which means the namespace field will be dropped.

On a high level, I think there are many similar cases that would lead to this behavior: basically, whenever the admission chain mutates a field previously set via SSA.
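A sketch of the schema shape that triggers this (assumed; the real definition is in the crd_machines.yaml attached above): an atomic object reference whose schema omits namespace, so a client-supplied namespace is pruned on write.

kubectl apply --server-side -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: machines.example.com
spec:
  group: example.com
  names:
    kind: Machine
    plural: machines
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              infrastructureRef:
                type: object
                x-kubernetes-map-type: atomic
                # namespace is deliberately not listed here, so the apiserver
                # prunes a client-supplied namespace on every write
                properties:
                  name:
                    type: string
                  kind:
                    type: string
EOF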
Kubernetes version
I also tried this with client-go v0.30 and kube-apiserver v1.30. Same result.
OS version
Running this natively on Apple Silicon (M2), but I can't imagine this makes a difference. I also got the same results in Docker Desktop on Mac.