
Error: create: failed to create: the server responded with the status code 413 #171

Closed
HamzaZo opened this issue Nov 3, 2020 · 14 comments

@HamzaZo

HamzaZo commented Nov 3, 2020

Hello Community

I'm facing an error while using the plugin to migrate the prometheus-operator chart from Helm 2 to Helm 3.

The release is deployed successfully with Helm 2:

➜  ./helm2 ls
NAME                        	REVISION	UPDATED                 	STATUS  	CHART                             	APP VERSION	NAMESPACE
prometheus-operator         	1       	Tue Nov  3 11:19:58 2020	DEPLOYED	kube-prometheus-stack-9.4.10      	0.38.1     	myns

Output of the error:

➜ helm3 2to3 convert  prometheus-operator --tiller-ns tiller 
2020/11/03 12:36:39 Release "prometheus-operator" will be converted from Helm v2 to Helm v3.
2020/11/03 12:36:39 [Helm 3] Release "prometheus-operator" will be created.
2020/11/03 12:36:41 [Helm 3] ReleaseVersion "prometheus-operator.v1" will be created.
Error: create: failed to create: the server responded with the status code 413 but did not return more information (post secrets)
Error: plugin "2to3" exited with error

I don't understand why the plugin reports that the release is too large.

Any help is appreciated.
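
For reference, one way to gauge how large the stored v2 release actually is (a rough probe, assuming Tiller's default ConfigMap storage backend and the tiller namespace used above; Helm v2 keeps each revision in a ConfigMap named <release>.v<revision>, with the payload under the release key):

$ kubectl get configmap prometheus-operator.v1 -n tiller -o jsonpath='{.data.release}' | wc -c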

cc @hickeyma

@hickeyma
Collaborator

hickeyma commented Nov 3, 2020

@HamzaZo Thanks for raising the issue. I will try to reproduce it, as I'm unsure what is causing this. Where did you install the chart from?

@hickeyma
Collaborator

hickeyma commented Nov 3, 2020

@HamzaZo It might be worth looking at this issue in Helm: helm/helm#4471, and the "How to fix error 413: Request Entity Too Large in Kubernetes and Helm" blog post, to see if something in your deployment setup could be causing this.
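
If a proxy or load balancer sits in front of the API server, that is a likely place for a body-size limit. The server address in your kubeconfig shows what kubectl and helm are actually talking to:

$ kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'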

@HamzaZo
Author

HamzaZo commented Nov 3, 2020

@hickeyma Thank you for your reply,

Where did you install the chart from? In a Kubernetes GKE cluster, using the stable prometheus chart.

@hickeyma
Collaborator

hickeyma commented Nov 3, 2020

Sorry, where did you get the chart from? I want to make sure I have the version you used.

@HamzaZo
Author

HamzaZo commented Nov 3, 2020

@HamzaZo
Author

HamzaZo commented Nov 4, 2020

Hello @hickeyma

I think the issue is related to helm/helm#8281, but unfortunately there has been no update on that issue.

@hickeyma
Collaborator

hickeyma commented Nov 4, 2020

So, I tried this out by first installing the prometheus stack using Helm 2, as follows:

$ helm2 install --name prom-stack prometheus-community/kube-prometheus-stack
NAME:   prom-stack
LAST DEPLOYED: Wed Nov  4 11:29:48 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Alertmanager
NAME                                     AGE
prom-stack-kube-prometheus-alertmanager  35s

==> v1/ClusterRole
NAME                                       AGE
prom-stack-grafana-clusterrole             35s
prom-stack-kube-prometheus-operator        35s
prom-stack-kube-prometheus-operator-psp    35s
prom-stack-kube-prometheus-prometheus      35s
prom-stack-kube-prometheus-prometheus-psp  35s
prom-stack-kube-state-metrics              35s
psp-prom-stack-kube-state-metrics          35s
psp-prom-stack-prometheus-node-exporter    35s

==> v1/ClusterRoleBinding
NAME                                       AGE
prom-stack-grafana-clusterrolebinding      35s
prom-stack-kube-prometheus-operator        35s
prom-stack-kube-prometheus-operator-psp    35s
prom-stack-kube-prometheus-prometheus      35s
prom-stack-kube-prometheus-prometheus-psp  35s
prom-stack-kube-state-metrics              35s
psp-prom-stack-kube-state-metrics          35s
psp-prom-stack-prometheus-node-exporter    35s

==> v1/ConfigMap
NAME                                                          DATA  AGE
prom-stack-grafana                                            1     35s
prom-stack-grafana-config-dashboards                          1     35s
prom-stack-grafana-test                                       1     35s
prom-stack-kube-prometheus-apiserver                          1     35s
prom-stack-kube-prometheus-cluster-total                      1     35s
prom-stack-kube-prometheus-controller-manager                 1     35s
prom-stack-kube-prometheus-etcd                               1     35s
prom-stack-kube-prometheus-grafana-datasource                 1     35s
prom-stack-kube-prometheus-k8s-coredns                        1     35s
prom-stack-kube-prometheus-k8s-resources-cluster              1     35s
prom-stack-kube-prometheus-k8s-resources-namespace            1     35s
prom-stack-kube-prometheus-k8s-resources-node                 1     35s
prom-stack-kube-prometheus-k8s-resources-pod                  1     35s
prom-stack-kube-prometheus-k8s-resources-workload             1     35s
prom-stack-kube-prometheus-k8s-resources-workloads-namespace  1     35s
prom-stack-kube-prometheus-kubelet                            1     35s
prom-stack-kube-prometheus-namespace-by-pod                   1     35s
prom-stack-kube-prometheus-namespace-by-workload              1     35s
prom-stack-kube-prometheus-node-cluster-rsrc-use              1     35s
prom-stack-kube-prometheus-node-rsrc-use                      1     35s
prom-stack-kube-prometheus-nodes                              1     35s
prom-stack-kube-prometheus-persistentvolumesusage             1     35s
prom-stack-kube-prometheus-pod-total                          1     35s
prom-stack-kube-prometheus-prometheus                         1     35s
prom-stack-kube-prometheus-proxy                              1     35s
prom-stack-kube-prometheus-scheduler                          1     35s
prom-stack-kube-prometheus-statefulset                        1     35s
prom-stack-kube-prometheus-workload-total                     1     35s

==> v1/DaemonSet
NAME                                 DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
prom-stack-prometheus-node-exporter  1        1        1      1           1          <none>         35s

==> v1/Deployment
NAME                                 READY  UP-TO-DATE  AVAILABLE  AGE
prom-stack-grafana                   1/1    1           1          35s
prom-stack-kube-prometheus-operator  1/1    1           1          35s
prom-stack-kube-state-metrics        1/1    1           1          35s

==> v1/MutatingWebhookConfiguration
NAME                                  AGE
prom-stack-kube-prometheus-admission  35s

==> v1/Pod(related)
NAME                                                  READY  STATUS   RESTARTS  AGE
prom-stack-grafana-8948448f8-7n2vb                    2/2    Running  0         35s
prom-stack-kube-prometheus-operator-59c4fd6f8f-7gscq  1/1    Running  0         35s
prom-stack-kube-state-metrics-5475f474c5-jsmgq        1/1    Running  0         35s
prom-stack-prometheus-node-exporter-cfp9r             1/1    Running  0         35s

==> v1/Prometheus
NAME                                   AGE
prom-stack-kube-prometheus-prometheus  34s

==> v1/PrometheusRule
NAME                                                             AGE
prom-stack-kube-prometheus-alertmanager.rules                    33s
prom-stack-kube-prometheus-etcd                                  33s
prom-stack-kube-prometheus-general.rules                         33s
prom-stack-kube-prometheus-k8s.rules                             33s
prom-stack-kube-prometheus-kube-apiserver-availability.rules     33s
prom-stack-kube-prometheus-kube-apiserver-slos                   33s
prom-stack-kube-prometheus-kube-apiserver.rules                  33s
prom-stack-kube-prometheus-kube-prometheus-general.rules         33s
prom-stack-kube-prometheus-kube-prometheus-node-recording.rules  33s
prom-stack-kube-prometheus-kube-scheduler.rules                  33s
prom-stack-kube-prometheus-kube-state-metrics                    33s
prom-stack-kube-prometheus-kubelet.rules                         33s
prom-stack-kube-prometheus-kubernetes-apps                       33s
prom-stack-kube-prometheus-kubernetes-resources                  33s
prom-stack-kube-prometheus-kubernetes-storage                    33s
prom-stack-kube-prometheus-kubernetes-system                     33s
prom-stack-kube-prometheus-kubernetes-system-apiserver           33s
prom-stack-kube-prometheus-kubernetes-system-controller-manager  33s
prom-stack-kube-prometheus-kubernetes-system-kubelet             33s
prom-stack-kube-prometheus-kubernetes-system-scheduler           33s
prom-stack-kube-prometheus-node-exporter                         33s
prom-stack-kube-prometheus-node-exporter.rules                   33s
prom-stack-kube-prometheus-node-network                          33s
prom-stack-kube-prometheus-node.rules                            33s
prom-stack-kube-prometheus-prometheus                            33s
prom-stack-kube-prometheus-prometheus-operator                   33s

==> v1/Role
NAME                                     AGE
prom-stack-grafana-test                  35s
prom-stack-kube-prometheus-alertmanager  35s

==> v1/RoleBinding
NAME                                     AGE
prom-stack-grafana-test                  35s
prom-stack-kube-prometheus-alertmanager  35s

==> v1/Secret
NAME                                                  TYPE    DATA  AGE
alertmanager-prom-stack-kube-prometheus-alertmanager  Opaque  1     35s
prom-stack-grafana                                    Opaque  3     35s

==> v1/Service
NAME                                                TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)    AGE
prom-stack-grafana                                  ClusterIP  10.109.241.77   <none>       80/TCP     35s
prom-stack-kube-prometheus-alertmanager             ClusterIP  10.106.139.165  <none>       9093/TCP   35s
prom-stack-kube-prometheus-coredns                  ClusterIP  None            <none>       9153/TCP   35s
prom-stack-kube-prometheus-kube-controller-manager  ClusterIP  None            <none>       10252/TCP  35s
prom-stack-kube-prometheus-kube-etcd                ClusterIP  None            <none>       2379/TCP   35s
prom-stack-kube-prometheus-kube-proxy               ClusterIP  None            <none>       10249/TCP  35s
prom-stack-kube-prometheus-kube-scheduler           ClusterIP  None            <none>       10251/TCP  35s
prom-stack-kube-prometheus-operator                 ClusterIP  10.107.210.103  <none>       443/TCP    35s
prom-stack-kube-prometheus-prometheus               ClusterIP  10.106.27.44    <none>       9090/TCP   35s
prom-stack-kube-state-metrics                       ClusterIP  10.96.48.188    <none>       8080/TCP   35s
prom-stack-prometheus-node-exporter                 ClusterIP  10.105.98.86    <none>       9100/TCP   35s

==> v1/ServiceAccount
NAME                                     SECRETS  AGE
prom-stack-grafana                       1        35s
prom-stack-grafana-test                  1        35s
prom-stack-kube-prometheus-alertmanager  1        35s
prom-stack-kube-prometheus-operator      1        35s
prom-stack-kube-prometheus-prometheus    1        35s
prom-stack-kube-state-metrics            1        35s
prom-stack-prometheus-node-exporter      1        35s

==> v1/ServiceMonitor
NAME                                                AGE
prom-stack-kube-prometheus-alertmanager             33s
prom-stack-kube-prometheus-apiserver                33s
prom-stack-kube-prometheus-coredns                  33s
prom-stack-kube-prometheus-grafana                  33s
prom-stack-kube-prometheus-kube-controller-manager  33s
prom-stack-kube-prometheus-kube-etcd                33s
prom-stack-kube-prometheus-kube-proxy               33s
prom-stack-kube-prometheus-kube-scheduler           33s
prom-stack-kube-prometheus-kube-state-metrics       33s
prom-stack-kube-prometheus-kubelet                  33s
prom-stack-kube-prometheus-node-exporter            33s
prom-stack-kube-prometheus-operator                 33s
prom-stack-kube-prometheus-prometheus               33s

==> v1/ValidatingWebhookConfiguration
NAME                                  AGE
prom-stack-kube-prometheus-admission  33s

==> v1beta1/PodSecurityPolicy
NAME                                     PRIV   CAPS      SELINUX           RUNASUSER  FSGROUP    SUPGROUP  READONLYROOTFS  VOLUMES
prom-stack-grafana                       false  RunAsAny  RunAsAny          RunAsAny   RunAsAny   false     configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
prom-stack-grafana-test                  false  RunAsAny  RunAsAny          RunAsAny   RunAsAny   false     configMap,downwardAPI,emptyDir,projected,secret
prom-stack-kube-prometheus-alertmanager  false  RunAsAny  RunAsAny          MustRunAs  MustRunAs  false     configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
prom-stack-kube-prometheus-operator      false  RunAsAny  RunAsAny          MustRunAs  MustRunAs  false     configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
prom-stack-kube-prometheus-prometheus    false  RunAsAny  RunAsAny          MustRunAs  MustRunAs  false     configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
prom-stack-kube-state-metrics            false  RunAsAny  MustRunAsNonRoot  MustRunAs  MustRunAs  false     secret
prom-stack-prometheus-node-exporter      false  RunAsAny  RunAsAny          MustRunAs  MustRunAs  false     configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim,hostPath

==> v1beta1/Role
NAME                AGE
prom-stack-grafana  35s

==> v1beta1/RoleBinding
NAME                AGE
prom-stack-grafana  35s


NOTES:
kube-prometheus-stack has been installed. Check its status by running:
  kubectl --namespace default get pods -l "release=prom-stack"

Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.

@hickeyma
Collaborator

hickeyma commented Nov 4, 2020

When I do the migration it succeeds as follows:

$ helm2 ls
NAME      	REVISION	UPDATED                 	STATUS  	CHART                       	APP VERSION	NAMESPACE
prom-stack	1       	Wed Nov  4 11:29:48 2020	DEPLOYED	kube-prometheus-stack-11.0.0	0.43.0     	default  

$ helm3 2to3 convert prom-stack
2020/11/04 11:42:05 Release "prom-stack" will be converted from Helm v2 to Helm v3.
2020/11/04 11:42:05 [Helm 3] Release "prom-stack" will be created.
2020/11/04 11:42:06 [Helm 3] ReleaseVersion "prom-stack.v1" will be created.
2020/11/04 11:42:09 [Helm 3] ReleaseVersion "prom-stack.v1" created.
2020/11/04 11:42:09 [Helm 3] Release "prom-stack" created.
2020/11/04 11:42:09 Release "prom-stack" was converted successfully from Helm v2 to Helm v3.
2020/11/04 11:42:09 Note: The v2 release information still remains and should be removed to avoid conflicts with the migrated v3 release.
2020/11/04 11:42:09 v2 release information should only be removed using `helm 2to3` cleanup and when all releases have been migrated over.

$ helm3 ls
NAME      	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART                       	APP VERSION
prom-stack	default  	1       	2020-11-04 11:29:48.625201578 +0000 UTC	deployed	kube-prometheus-stack-11.0.0	0.43.0
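
Once every release has been migrated, the leftover v2 data can then be removed with the plugin's cleanup command; a dry run first shows what would be deleted (flags as per the plugin's help):

$ helm3 2to3 cleanup --name prom-stack --dry-run
$ helm3 2to3 cleanup --name prom-stack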

@hickeyma
Collaborator

hickeyma commented Nov 4, 2020

As I am unable to reproduce this, I wonder whether it has to do with the cluster and the amount of resources already deployed in it. I am using a clean Kind cluster, and the install takes a good long time, as a prometheus stack install deploys a lot of resources.

Looking at the secrets and configmaps alone, you have:

$ kubectl get secrets --all-namespaces | grep "prom-stack"
default              alertmanager-prom-stack-kube-prometheus-alertmanager              Opaque                                1      10m
default              alertmanager-prom-stack-kube-prometheus-alertmanager-generated    Opaque                                1      10m
default              alertmanager-prom-stack-kube-prometheus-alertmanager-tls-assets   Opaque                                0      10m
default              prom-stack-grafana                                                Opaque                                3      10m
default              prom-stack-grafana-test-token-tpt2v                               kubernetes.io/service-account-token   3      10m
default              prom-stack-grafana-token-b6vgs                                    kubernetes.io/service-account-token   3      10m
default              prom-stack-kube-prometheus-admission                              Opaque                                3      27m
default              prom-stack-kube-prometheus-alertmanager-token-k8pkq               kubernetes.io/service-account-token   3      10m
default              prom-stack-kube-prometheus-operator-token-kxb9b                   kubernetes.io/service-account-token   3      10m
default              prom-stack-kube-prometheus-prometheus-token-h7489                 kubernetes.io/service-account-token   3      10m
default              prom-stack-kube-state-metrics-token-hlffh                         kubernetes.io/service-account-token   3      10m
default              prom-stack-prometheus-node-exporter-token-hqbdm                   kubernetes.io/service-account-token   3      10m
default              prometheus-prom-stack-kube-prometheus-prometheus                  Opaque                                1      10m
default              prometheus-prom-stack-kube-prometheus-prometheus-tls-assets       Opaque                                1      10m

$ kubectl get configmap --all-namespaces | grep "prom-stack"
default              prom-stack-grafana                                             1      10m
default              prom-stack-grafana-config-dashboards                           1      10m
default              prom-stack-grafana-test                                        1      10m
default              prom-stack-kube-prometheus-apiserver                           1      10m
default              prom-stack-kube-prometheus-cluster-total                       1      10m
default              prom-stack-kube-prometheus-controller-manager                  1      10m
default              prom-stack-kube-prometheus-etcd                                1      10m
default              prom-stack-kube-prometheus-grafana-datasource                  1      10m
default              prom-stack-kube-prometheus-k8s-coredns                         1      10m
default              prom-stack-kube-prometheus-k8s-resources-cluster               1      10m
default              prom-stack-kube-prometheus-k8s-resources-namespace             1      10m
default              prom-stack-kube-prometheus-k8s-resources-node                  1      10m
default              prom-stack-kube-prometheus-k8s-resources-pod                   1      10m
default              prom-stack-kube-prometheus-k8s-resources-workload              1      10m
default              prom-stack-kube-prometheus-k8s-resources-workloads-namespace   1      10m
default              prom-stack-kube-prometheus-kubelet                             1      10m
default              prom-stack-kube-prometheus-namespace-by-pod                    1      10m
default              prom-stack-kube-prometheus-namespace-by-workload               1      10m
default              prom-stack-kube-prometheus-node-cluster-rsrc-use               1      10m
default              prom-stack-kube-prometheus-node-rsrc-use                       1      10m
default              prom-stack-kube-prometheus-nodes                               1      10m
default              prom-stack-kube-prometheus-persistentvolumesusage              1      10m
default              prom-stack-kube-prometheus-pod-total                           1      10m
default              prom-stack-kube-prometheus-prometheus                          1      10m
default              prom-stack-kube-prometheus-proxy                               1      10m
default              prom-stack-kube-prometheus-scheduler                           1      10m
default              prom-stack-kube-prometheus-statefulset                         1      10m
default              prom-stack-kube-prometheus-workload-total                      1      10m
default              prometheus-prom-stack-kube-prometheus-prometheus-rulefiles-0   26     10m
kube-system          prom-stack.v1                                                  1      10m

That is why I showed the full output of the Helm 2 install of the chart in #171 (comment).

Could it be that the cluster cannot accept any more secrets once it hits a limit? In other words, the limit being hit is not on the Helm release secret itself but a cluster-wide one?
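
One way to test that: post a dummy Secret of a similar size directly and compare the error. A bare 413 with no detail usually comes from a proxy in front of the API server, whereas an API server or etcd limit normally returns a descriptive message. The ~700 KB payload here is only illustrative:

$ head -c 700000 /dev/urandom | base64 > /tmp/blob.txt
$ kubectl create secret generic size-probe --from-file=blob=/tmp/blob.txt
$ kubectl delete secret size-probe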

Note: I was able to install the prometheus stack chart using Helm v3 as well.

@prajnadnayak

I am getting the 413 error with both ConfigMaps and Secrets.

Error: the server responded with the status code 413 but did not return more information (post configmaps)
helm.go:75: [debug] the server responded with the status code 413 but did not return more information (post configmaps)

The size of the chart is less than 1 MB. I have deleted and recreated the namespace where the release was deployed, just to make sure the accumulated release history is not taking up the space. I am not sure what else to check. Any help is appreciated.
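
Note that the chart size on disk is not quite what goes over the wire: the release record also embeds the rendered manifests and values, gzipped. For a rough lower bound on the payload (./mychart standing in for the actual chart and values used):

$ helm template my-release ./mychart | gzip | wc -c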

@hickeyma
Collaborator

@HamzaZo I think this is a Helm-related issue, as raised in helm/helm#8281, and outside the scope of the plugin.
If so, can this issue be closed and tracked in helm/helm#8281 instead?

@HamzaZo
Author

HamzaZo commented Nov 11, 2020

@hickeyma Yeah, we can close it. Thanks for your reply.

HamzaZo closed this as completed Nov 11, 2020
@Belyenochi

I am getting the 413 error with both ConfigMaps and Secrets.

Error: the server responded with the status code 413 but did not return more information (post configmaps)
helm.go:75: [debug] the server responded with the status code 413 but did not return more information (post configmaps)

The size of the chart is less than 1 MB. I have deleted and recreated the namespace where the release was deployed, just to make sure the release history is not taking up the space. I am not sure what else to check. Any help is appreciated.

same

@cod-r

cod-r commented Jan 31, 2022

I had this error message when trying to install kube-prometheus-stack.

I have an nginx load balancer in front of kube-apiserver for high availability.
Note: don't confuse this with the nginx ingress controller.

By default, nginx has client_max_body_size 1m.
To solve the problem I had to increase this setting by editing /etc/nginx/nginx.conf on my Ubuntu VM:

http {
    client_max_body_size 10m;
}
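
After changing it, validate the config and reload nginx for the new limit to take effect:

$ sudo nginx -t && sudo systemctl reload nginx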

And if you're using Rancher, you need to edit the nginx-ingress-controller's ConfigMap in the local cluster where Rancher is installed and add:

data:
  proxy-body-size: 10m
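
The same can be done with kubectl; the namespace and ConfigMap name below are typical RKE defaults and may differ on your install:

$ kubectl -n ingress-nginx patch configmap nginx-configuration --type merge -p '{"data":{"proxy-body-size":"10m"}}'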
