
Synchronization for ConfigMaps over 262144 bytes does not work, when Replace=true flag is specified #7131

Open
3 tasks done
earthquakesan opened this issue Sep 1, 2021 · 4 comments · Fixed by #7137
Labels
bug (Something isn't working) · regression (Bug is a regression, should be handled with high priority)

Comments

@earthquakesan

Checklist:

  • I've searched in the docs and FAQ for my answer: https://bit.ly/argocd-faq.
  • I've included steps to reproduce the bug.
  • I've pasted the output of argocd version.

Describe the bug

Synchronization for ConfigMaps over 262144 bytes does not work, when Replace=true flag is specified.
Related issues: #5704 #820
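For context, the 262144-byte figure is the Kubernetes cap on the total size of metadata.annotations. Client-side kubectl apply hits it for large objects because it stores the whole object in the kubectl.kubernetes.io/last-applied-configuration annotation, while create/replace do not, which is why Replace=true is expected to avoid the error. A minimal sketch illustrating the difference outside Argo CD (the ConfigMap name and payload file are hypothetical; the payload is assumed to be between roughly 256 KB and the 1 MiB ConfigMap limit):

# Generate a ConfigMap manifest with a large payload
kubectl create configmap big-cm --from-file=payload=big-file.json --dry-run=client -o yaml > big-cm.yaml

# Client-side apply writes the entire object into the last-applied-configuration
# annotation, so the annotation size check fails:
kubectl apply -f big-cm.yaml     # metadata.annotations: Too long: must have at most 262144 bytes

# create/replace do not add that annotation, so the same manifest goes through:
kubectl create -f big-cm.yaml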

To Reproduce

Tested on minikube:

NAME       STATUS   ROLES                  AGE   VERSION
minikube   Ready    control-plane,master   77s   v1.21.2

Run the following steps to reproduce:

# Install Argo CD (helm chart 3.17.5 corresponds to Argo CD v2.1.1; assumes the argo helm repo has been added)
helm upgrade --install --version 3.17.5 argocd argo/argo-cd

# Install the application
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: rancher-monitoring-crd
  annotations:
    argocd.argoproj.io/sync-wave: "-10"
spec:
  destination:
    namespace: 'default'
    server: 'https://kubernetes.default.svc'
  source:
    path: rancher-monitoring-crd
    repoURL: 'https://github.com/earthquakesan/argocd-test-repo.git'
    targetRevision: main
  project: default
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
    syncOptions:
    - CreateNamespace=true
    - Replace=true
EOF

# Get password to connect to the web UI
kubectl get secret argocd-initial-admin-secret -o jsonpath='{ .data.password }' | base64 --decode

# forward ports
kubectl port-forward svc/argocd-server 8080:80

Open http://localhost:8080 in the browser and log in with "admin" and the password you got earlier. Open the application; you will see that it failed to synchronize with: ConfigMap "rancher-monitoring-crd-manifest" is invalid: metadata.annotations: Too long: must have at most 262144 bytes
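The same error can also be read from the Application status without the UI; a sketch, assuming the default names from the commands above:

kubectl get application rancher-monitoring-crd \
  -o jsonpath='{.status.operationState.message}'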

Expected behavior

Synchronization for ConfigMaps over 262144 bytes works, when Replace=true flag is specified.

Screenshots

(screenshot of the sync error omitted)

Version

Affected versions (helm chart - argocd version):

  • 3.17.5 - 2.1.1 (broken)
  • 3.13.0 - 2.1.0 (broken)

Not affected versions (helm chart - argocd version):

  • 3.12.1 - 2.0.5 (ok)
  • 3.7.1 - 2.0.4 (ok)

The regression is introduced between v2.0.5 and v2.1.0 releases.

Logs

N/A; follow the reproduction steps above to obtain the logs.
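If logs are needed, the application controller is the place to look; a sketch, assuming the standard argo-cd Helm chart labels:

# Pull the recent controller logs and filter for the annotation-size error
kubectl logs -l app.kubernetes.io/name=argocd-application-controller --tail=200 | grep -i "too long"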
@earthquakesan earthquakesan added the bug Something isn't working label Sep 1, 2021
@jessesuen jessesuen added the regression Bug is a regression, should be handled with high priority label Sep 1, 2021
@armenr

armenr commented Dec 29, 2021

This still appears to be an issue. I'm seeing it currently.

I tried to read around and google a bit, but haven't come up with much, and I'm not sure how to fix or debug this. Any help would be much appreciated.

In ArgoCD, the Error being displayed is:

Failed sync attempt to fa68c5c19c15882e88f303478b91b9cabbec7d39: one or more objects failed to apply, reason: CustomResourceDefinition.apiextensions.k8s.io "applicationsets.argoproj.io" is invalid: metadata.annotations: Too long: must have at most 262144 bytes

I've adapted my approach by following the same pattern that argocd-autopilot takes with its bootstrap method...with my own slight modifications.

This is where the code lives: https://github.com/armenr/5thK8s/tree/main/dependencies/bootstrap

After installing and configuring argo-cd, this is the only file I kubectl apply -f in order to "bootstrap" all the other ArgoCD projects and apps:

https://github.com/armenr/5thK8s/blob/main/dependencies/bootstrap/autopilot-bootstrap.yaml

@survivant

I have Argo CD v2.4.0+a67b97d.

(screenshots omitted)

I don't have that ConfigMap in my cluster:

root@lnx-kub04:/tmp# kubectl -n global get cm
NAME                                                              DATA   AGE
ingress-controller-leader                                         0      123d
ingress-nginx-global-ingressnginx-controller                      2      36m
kube-resource-report-global-nginx                                 2      36m
kube-root-ca.crt                                                  1      123d
metallb-global-config                                             1      36m
monitoring-stack-global-confluent-open-source-grafana-dashboard   1      35m
monitoring-stack-global-grafana                                   2      35m
monitoring-stack-global-grafana-config-dashboards                 1      35m
monitoring-stack-global-grafana-test                              1      35m
monitoring-stack-global-k8s-persistence-volumes                   1      35m
monitoring-stack-global-ku-alertmanager-overview                  1      35m
monitoring-stack-global-ku-apiserver                              1      35m
monitoring-stack-global-ku-cluster-total                          1      35m
monitoring-stack-global-ku-controller-manager                     1      35m
monitoring-stack-global-ku-etcd                                   1      35m
monitoring-stack-global-ku-grafana-datasource                     1      35m
monitoring-stack-global-ku-k8s-coredns                            1      35m
monitoring-stack-global-ku-k8s-resources-cluster                  1      35m
monitoring-stack-global-ku-k8s-resources-namespace                1      35m
monitoring-stack-global-ku-k8s-resources-node                     1      35m
monitoring-stack-global-ku-k8s-resources-pod                      1      35m
monitoring-stack-global-ku-k8s-resources-workload                 1      35m
monitoring-stack-global-ku-k8s-resources-workloads-namespace      1      35m
monitoring-stack-global-ku-kubelet                                1      35m
monitoring-stack-global-ku-namespace-by-pod                       1      35m
monitoring-stack-global-ku-namespace-by-workload                  1      35m
monitoring-stack-global-ku-node-cluster-rsrc-use                  1      35m
monitoring-stack-global-ku-node-rsrc-use                          1      35m
monitoring-stack-global-ku-nodes                                  1      35m
monitoring-stack-global-ku-persistentvolumesusage                 1      35m
monitoring-stack-global-ku-pod-total                              1      35m
monitoring-stack-global-ku-prometheus                             1      35m
monitoring-stack-global-ku-proxy                                  1      35m
monitoring-stack-global-ku-scheduler                              1      35m
monitoring-stack-global-ku-statefulset                            1      35m
monitoring-stack-global-ku-workload-total                         1      35m
monitoring-stack-global-node-problem-detector-custom-config       0      35m
monitoring-stack-global-op-cstor-overview                         1      35m
monitoring-stack-global-op-cstor-pool                             1      35m
monitoring-stack-global-op-cstor-volume                           1      35m
monitoring-stack-global-op-cstor-volume-replica                   1      35m
monitoring-stack-global-op-jiva-volume                            1      35m
monitoring-stack-global-op-localpv-workload                       1      35m
monitoring-stack-global-op-lvmlocalpv-pool                        1      35m
monitoring-stack-global-op-ndm                                    1      35m
monitoring-stack-global-op-npd-node-volume-problem                1      35m
monitoring-stack-global-op-zfslocalpv                             1      35m
prometheus-monitoring-stack-global-ku-prometheus-rulefiles-0      34     35m
root@lnx-kub04:/tmp#

I'm not able to delete the resource from the UI (screenshot omitted).

The desired manifest is too big to display in the UI (screenshot omitted).

@survivant

The sync didn't work, but if I use the --replace flag, it works. I'll use that as a workaround.
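Depending on the Argo CD version, the same behavior can also be requested for just the oversized resource via the per-resource sync-options annotation instead of a global flag; a sketch (resource name taken from this issue, data omitted):

apiVersion: v1
kind: ConfigMap
metadata:
  name: rancher-monitoring-crd-manifest
  annotations:
    # Tells Argo CD to replace/create this resource instead of applying it
    argocd.argoproj.io/sync-options: Replace=true
data:
  # ... large payload ...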

@crenshaw-dev crenshaw-dev changed the title Regression on v2.1.0: Synchronization for ConfigMaps over 262144 bytes does not work, when Replace=true flag is specified Synchronization for ConfigMaps over 262144 bytes does not work, when Replace=true flag is specified Jun 28, 2022
@crenshaw-dev crenshaw-dev reopened this Jun 28, 2022
@iam-veeramalla
Member

iam-veeramalla commented Aug 23, 2022

Hi @crenshaw-dev, I see that you have re-opened the issue. Any reason?

I was testing this feature for one of the users and everything is working as expected.

This is the dummy configmap which I used for testing.
https://github.com/iam-veeramalla/argocd-example-apps/tree/master/large-cm

Steps:

  1. Install OpenShift-GitOps operator v1.6.0
  2. Create the Argo CD Application as shown below. This will deploy the ConfigMap with 215.25 KB of JSON data, which is usually not allowed using kubectl apply.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: dummy-large-cm
  namespace: openshift-gitops
spec:
  destination:
    namespace: openshift-gitops
    server: 'https://kubernetes.default.svc'
  project: default
  source:
    path: large-cm
    repoURL: 'https://github.com/iam-veeramalla/argocd-example-apps'
  syncPolicy:
    automated: {}
    syncOptions:
    - Replace=true

  3. The part doing the magic is:

syncOptions:
- Replace=true
