
[Bug]: The CustomResourceDefinition "poolers.postgresql.cnpg.io" is invalid: metadata.annotations: Too long: must have at most 262144 bytes #4377

Closed
remotejob opened this issue Apr 25, 2024 · 7 comments
Labels: triage (Pending triage)

@remotejob

Is there an existing issue already for this bug?

  • I have searched for an existing issue, and could not find anything. I believe this is a new bug.

I have read the troubleshooting guide

  • I have read the troubleshooting guide and I think this is a new bug.

I am running a supported version of CloudNativePG

  • I have read the troubleshooting guide and I think this is a new bug.

Contact Details

aleksander.mazurov@gmail.com

Version

1.23.0

What version of Kubernetes are you using?

1.29

What is your Kubernetes environment?

Self-managed: k3s

How did you install the operator?

YAML manifest

What happened?

I successfully installed cnpg-1.21.3.yaml, but every later version fails with this error:
The CustomResourceDefinition "poolers.postgresql.cnpg.io" is invalid: metadata.annotations: Too long: must have at most 262144 bytes

Command:
k apply -f https://github.com/cloudnative-pg/cloudnative-pg/releases/download/v1.22.3/cnpg-1.22.3.yaml

Cluster resource

No response

Relevant log output

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct
@remotejob remotejob added the triage Pending triage label Apr 25, 2024
@achanda
Contributor

achanda commented Apr 26, 2024

@remotejob try a server side apply

k apply -f https://github.com/cloudnative-pg/cloudnative-pg/releases/download/v1.22.3/cnpg-1.22.3.yaml --server-side
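For context (my understanding of the root cause): kubectl's client-side apply stores the full object in the kubectl.kubernetes.io/last-applied-configuration annotation, and annotations are capped at 262144 bytes; the poolers CRD has grown beyond that, so client-side apply is rejected, while server-side apply (which does not write that annotation) goes through. A rough way to check the CRD's size in the release manifest, assuming yq v4 is available:

# prints the size of the poolers CRD document in bytes; anything above 262144 breaks client-side apply
curl -sL https://github.com/cloudnative-pg/cloudnative-pg/releases/download/v1.22.3/cnpg-1.22.3.yaml | yq 'select(.metadata.name == "poolers.postgresql.cnpg.io")' | wc -c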

@remotejob
Author

Thanks, OK.

As I understand it, I can't change the default port:

postgresql:
  parameters:
    port: "5431"

The Cluster "cluster-pg" is invalid: spec.postgresql.parameters.port: Invalid value: "5431": Can't set fixed configuration parameter

Anyway, the result of the install was:

namespace/cnpg-system serverside-applied
customresourcedefinition.apiextensions.k8s.io/backups.postgresql.cnpg.io serverside-applied
customresourcedefinition.apiextensions.k8s.io/clusters.postgresql.cnpg.io serverside-applied
customresourcedefinition.apiextensions.k8s.io/poolers.postgresql.cnpg.io serverside-applied
customresourcedefinition.apiextensions.k8s.io/scheduledbackups.postgresql.cnpg.io serverside-applied
serviceaccount/cnpg-manager serverside-applied
clusterrole.rbac.authorization.k8s.io/cnpg-manager serverside-applied
clusterrolebinding.rbac.authorization.k8s.io/cnpg-manager-rolebinding serverside-applied
configmap/cnpg-default-monitoring serverside-applied
service/cnpg-webhook-service serverside-applied
deployment.apps/cnpg-controller-manager serverside-applied
Apply failed with 3 conflicts: conflicts with "kubectl-client-side-apply" using admissionregistration.k8s.io/v1:

  • .webhooks[name="mbackup.cnpg.io"].rules
  • .webhooks[name="mcluster.cnpg.io"].rules
  • .webhooks[name="mscheduledbackup.cnpg.io"].rules
    Please review the fields above--they currently have other managers. Here
    are the ways you can resolve this warning:
  • If you intend to manage all of these fields, please re-run the apply
    command with the --force-conflicts flag.
  • If you do not intend to manage all of the fields, please edit your
    manifest to remove references to the fields that should keep their
    current managers.
  • You may co-own fields by updating your manifest to match the existing
    value; in this case, you'll become the manager if the other manager(s)
    stop managing the field (remove it from their configuration).
    See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts

So my question: how do I resolve these conflicts, or can I continue without resolving them?
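(For context, the --force-conflicts route the warning mentions would look like the following; it makes server-side apply take ownership of the webhook fields currently held by kubectl-client-side-apply:)

k apply -f https://github.com/cloudnative-pg/cloudnative-pg/releases/download/v1.22.3/cnpg-1.22.3.yaml --server-side --force-conflicts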

@christiaangoossens

I can confirm this error happens, specifically through ArgoCD. Server-side apply works fine.

@graphenn

graphenn commented May 6, 2024

It also happens in my environment; going back to 1.22.1 is OK.

Version
1.23.1

What version of Kubernetes are you using?
1.26.4

What is your Kubernetes environment?
Self-managed: kubekey created

How did you install the operator?
YAML manifest

@DevBey

DevBey commented May 7, 2024

For ArgoCD @christiaangoossens, enabling the ServerSideApply=true sync option works.
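For anyone else hitting this with Argo CD, that option goes under the Application's sync options; a minimal sketch showing only the relevant fields:

spec:
  syncPolicy:
    syncOptions:
      - ServerSideApply=true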

@TecIntelli

TecIntelli commented May 26, 2024

We can confirm this issue on a MicroK8s cluster, channel 1.29 (Kubernetes v1.29), with the provided cloudnative-pg addon, starting from cloudnative-pg version 1.22.2:

The CustomResourceDefinition "poolers.postgresql.cnpg.io" is invalid: metadata.annotations: Too long: must have at most 262144 bytes

Cloudnative-pg versions 1.22.2, 1.22.3, 1.23.0 and 1.23.1 show the same behaviour.

We manually modified the MicroK8s addon enable script (/var/snap/microk8s/common/addons/community/addons/cloudnative-pg/enable) by adding --server-side to line 42:

apply_wait=$("${SNAP_DATA}"/bin/kubectl-cnpg install generate | $KUBECTL apply --server-side -f - > /dev/null)

and the addon installed without error.
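Outside the addon, the equivalent manual install using the cnpg plugin referenced in that script would presumably be:

kubectl cnpg install generate | kubectl apply --server-side -f -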

However, do we need to change to server-side apply permanently, or are there any ideas for a fix that keeps client-side apply working? (Comparison with Client-Side Apply)

Edit: Ohhh, I missed the change notes in release 1.22.2. Therefore I guess cloudnative-pg will stay on server-side apply.

@sxd
Member

sxd commented Jun 18, 2024

Hello,

This was announced in release 1.22.2 and it's in the release notes here: https://github.com/cloudnative-pg/cloudnative-pg/blob/main/docs/src/release_notes/v1.22.md#changes-2

Regards,

@sxd sxd closed this as completed Jun 18, 2024