
Resource template cannot be reconciled again due to cache modification #3878

Closed
whitewindmills opened this issue Aug 2, 2023 · 8 comments · Fixed by #3879
Labels: kind/bug

Comments

@whitewindmills (Member) commented Aug 2, 2023

What happened:
I deleted propagation policy default/test-pp to unbind it from configmap default/test-cm, but found that configmap default/test-cm cannot be reconciled again.

That means configmap default/test-cm cannot be matched by the new propagation policy default/test-pp-1.

What you expected to happen:
Configmap default/test-cm can be matched by the new propagation policy default/test-pp-1.

How to reproduce it (as minimally and precisely as possible):

  1. Create a configmap like the following (keep its labels non-nil):

       apiVersion: v1
       data:
         a: b
       kind: ConfigMap
       metadata:
         labels:
           a: b
         name: test-cm
         namespace: default

  2. Create a propagation policy that matches the configmap.
  3. Delete the propagation policy after the configmap is propagated.
  4. Check whether the configmap can be reconciled again.

Anything else we need to know?:
The detector deletes the policy labels directly from the cached resource template:

workload, err := helper.FetchResourceTemplate(d.DynamicClient, d.InformerManager, d.RESTMapper, objRef)
if err != nil {
    // do nothing if resource template not exist, it might has been removed.
    if apierrors.IsNotFound(err) {
        return nil
    }
    klog.Errorf("Failed to fetch resource(kind=%s, %s/%s): %v", objRef.Kind, objRef.Namespace, objRef.Name, err)
    return err
}
workloadLabels := workload.GetLabels()
for _, l := range labels {
    delete(workloadLabels, l)
}
workload.SetLabels(workloadLabels)
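
For context, a minimal sketch of one way to avoid the in-place mutation, assuming the same surrounding variables (d, gvr, labels) as the snippet above; this is only an illustration, not necessarily the exact change made in #3879. The idea is to deep-copy the fetched object before stripping labels, so the cached copy is never touched.

// Sketch only: mutate a deep copy instead of the cached object.
workloadCopy := workload.DeepCopy()

workloadLabels := workloadCopy.GetLabels()
for _, l := range labels {
    delete(workloadLabels, l)
}
workloadCopy.SetLabels(workloadLabels)

// Send the copy to the API server. The informer cache still holds the labeled
// version, so the resulting update event differs from the cache and is not
// filtered out as "no change".
if _, err = d.DynamicClient.Resource(gvr).Namespace(workloadCopy.GetNamespace()).Update(context.TODO(), workloadCopy, metav1.UpdateOptions{}); err != nil {
    return err
}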

Environment:

  • Karmada version:
  • kubectl-karmada or karmadactl version (the result of kubectl-karmada version or karmadactl version):
  • Others:
whitewindmills added the kind/bug label on Aug 2, 2023
@whitewindmills (Member, Author)

/assign

@RainbowMango (Member) commented Aug 2, 2023

newWorkload, err := d.DynamicClient.Resource(gvr).Namespace(workload.GetNamespace()).Update(context.TODO(), workload, metav1.UpdateOptions{})

The detector deletes the policy labels directly from the cached resource template.

Do you mean this is where the cache actually gets updated?

@whitewindmills (Member, Author)

Do you mean this is where the cache actually gets updated?

Here, workload is from the informer cache:

workload.SetLabels(workloadLabels)
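
To make that concrete, here is a small standalone Go illustration (not Karmada code; cache.Store stands in for the store behind an informer's lister) showing that the cache hands back the stored object itself, so mutating what you fetched also mutates what every later reader sees. This is why client-go guidance treats lister-returned objects as read-only.

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "k8s.io/client-go/tools/cache"
)

func main() {
    // A thread-safe store keyed by namespace/name, like the one behind an informer's lister.
    store := cache.NewStore(cache.MetaNamespaceKeyFunc)

    cm := &unstructured.Unstructured{Object: map[string]interface{}{
        "apiVersion": "v1",
        "kind":       "ConfigMap",
        "metadata": map[string]interface{}{
            "name":      "test-cm",
            "namespace": "default",
            "labels":    map[string]interface{}{"a": "b"},
        },
    }}
    _ = store.Add(cm)

    // The store returns the stored pointer, not a copy.
    obj, _, _ := store.GetByKey("default/test-cm")
    fetched := obj.(*unstructured.Unstructured)
    fetched.SetLabels(nil) // mutating the fetched object...

    again, _, _ := store.GetByKey("default/test-cm")
    fmt.Println(again.(*unstructured.Unstructured).GetLabels()) // ...strips the cached copy too: map[]
}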

@RainbowMango (Member)

Yeah, I get it. It mistakenly updated the cache, but I still don't understand why the update event would be ignored.

@whitewindmills (Member, Author)

It's ignored by the function SpecificationChanged:

if !SpecificationChanged(unstructuredOldObj, unstructuredNewObj) {
    klog.V(4).Infof("Ignore update event of object (kind=%s, %s/%s) as specification no change", unstructuredOldObj.GetKind(), unstructuredOldObj.GetNamespace(), unstructuredOldObj.GetName())
    return
}

It removes unnecessary fields and calls reflect.DeepEqual to check whether the remaining fields actually changed:
// SpecificationChanged check if the specification of the given object change or not
func SpecificationChanged(oldObj, newObj *unstructured.Unstructured) bool {
    oldBackup := oldObj.DeepCopy()
    newBackup := newObj.DeepCopy()
    // Remove the status and some system defined mutable fields in metadata, including managedFields and resourceVersion.
    // Refer to https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta for more details.
    removedFields := [][]string{{"status"}, {"metadata", "managedFields"}, {"metadata", "resourceVersion"}}
    for _, r := range removedFields {
        unstructured.RemoveNestedField(oldBackup.Object, r...)
        unstructured.RemoveNestedField(newBackup.Object, r...)
    }
    return !reflect.DeepEqual(oldBackup, newBackup)
}
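
In other words, once the cached copy has already been stripped of its labels, the comparison has nothing left to detect. A standalone illustration of that failure mode (not Karmada code; the label key is illustrative and resourceVersion handling is simplified away):

package main

import (
    "fmt"
    "reflect"

    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func main() {
    // The cached resource template, still carrying a policy label.
    cached := &unstructured.Unstructured{Object: map[string]interface{}{
        "apiVersion": "v1",
        "kind":       "ConfigMap",
        "metadata": map[string]interface{}{
            "name":      "test-cm",
            "namespace": "default",
            "labels":    map[string]interface{}{"propagationpolicy.karmada.io/name": "test-pp"},
        },
    }}

    // The detector mutates the cached object in place instead of a deep copy.
    labels := cached.GetLabels()
    delete(labels, "propagationpolicy.karmada.io/name")
    cached.SetLabels(labels)

    // The update event's new object (after stripping mutable fields such as
    // resourceVersion) now matches the already-mutated cache, so a
    // SpecificationChanged-style check reports "no change" and the event is dropped.
    newObj := cached.DeepCopy()
    fmt.Println(reflect.DeepEqual(cached, newObj)) // true
}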

@RainbowMango (Member)

Do you mean oldObj comes from the cache and newObj is the updated object, and since the labels have already been removed from the cache, oldObj is always equal to newObj?

@whitewindmills (Member, Author)

since the labels have already been removed from the cache, oldObj is always equal to newObj?

Yes.

@RainbowMango (Member)

Thanks. That makes sense now.
