BUG: Re-applying CRD causes conflict #524

Open

rbjorklin opened this issue May 5, 2024 · 1 comment

Labels: bug Something isn't working

@rbjorklin
Problem Description

I'm not sure if this is expected behaviour or not, so I figured I would report it. If I try to re-apply the sveltosclusters.lib.projectsveltos.io CRD, I get the following:

❯ kubectl apply \
        --kubeconfig ~/.kube/configs/kind-management-cluster.yaml \
        --context kind-management-cluster \
        --server-side=true \
        -f manifests

<snip>

customresourcedefinition.apiextensions.k8s.io/sveltosclusters.lib.projectsveltos.io serverside-applied
error: Apply failed with 1 conflict: conflict with "application/apply-patch": .spec.versions
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
  current managers.
* You may co-own fields by updating your manifest to match the existing
  value; in this case, you'll become the manager if the other manager(s)
  stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
ManagedFields conflict:
❯ k get --show-managed-fields crd sveltosclusters.lib.projectsveltos.io -o yaml | yq 'del(.status) | .metadata.managedFields'
- apiVersion: apiextensions.k8s.io/v1
  fieldsType: FieldsV1
  fieldsV1:
    f:metadata:
      f:annotations:
        f:controller-gen.kubebuilder.io/version: {}
    f:spec:
      f:group: {}
      f:names:
        f:kind: {}
        f:listKind: {}
        f:plural: {}
        f:singular: {}
      f:scope: {}
      f:versions: {}
  manager: kubectl
  operation: Apply
  time: "2024-05-05T02:10:49Z"
- apiVersion: apiextensions.k8s.io/v1
  fieldsType: FieldsV1
  fieldsV1:
    f:status:
      f:acceptedNames:
        f:kind: {}
        f:listKind: {}
        f:plural: {}
        f:singular: {}
      f:conditions:
        k:{"type":"Established"}:
          .: {}
          f:lastTransitionTime: {}
          f:message: {}
          f:reason: {}
          f:status: {}
          f:type: {}
        k:{"type":"NamesAccepted"}:
          .: {}
          f:lastTransitionTime: {}
          f:message: {}
          f:reason: {}
          f:status: {}
          f:type: {}
  manager: kube-apiserver
  operation: Update
  subresource: status
  time: "2024-05-05T02:10:49Z"

System Information


CLUSTERAPI VERSION: v1.6.3
SVELTOS VERSION: 0.28.0
KUBERNETES VERSION: 1.28.8

Other

I can work around the problem by setting --force-conflicts=true and things seem to be okay...
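
Spelled out, the workaround is the same apply command with the extra flag (a sketch based on the command above):

❯ kubectl apply \
        --kubeconfig ~/.kube/configs/kind-management-cluster.yaml \
        --context kind-management-cluster \
        --server-side=true \
        --force-conflicts=true \
        -f manifests

This tells the apiserver to transfer ownership of the conflicting fields (here .spec.versions) to the applying field manager.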

@gianlucam76 (Member)
Thank you @rbjorklin

So far, when the CRDs have changed, I have tried my best to keep the changes backward compatible, but I have not implemented a proper version transition (the version has always remained v1alpha1).

I feel we are almost ready to move to v1, at which point any change will be handled properly when Sveltos is upgraded.
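
For anyone curious what such a transition looks like mechanically: a CRD can serve the old and new versions side by side, with exactly one marked as the storage version. A minimal hypothetical sketch (not the actual Sveltos CRD):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.io        # hypothetical CRD, not the Sveltos one
spec:
  group: example.io
  names:
    kind: Widget
    listKind: WidgetList
    plural: widgets
    singular: widget
  scope: Namespaced
  versions:
  - name: v1alpha1
    served: true                  # old version still accepted from clients
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v1
    served: true
    storage: true                 # exactly one version persists to etcd
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true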

Keeping this open as valid. Thank you again for reporting it.
