
[kube-prometheus-stack] INSTALLATION FAILED: create: failed to create: the server responded with the status code 413 but did not return more information (post secrets) #3205

Closed
danmgs opened this issue Apr 8, 2023 · 5 comments
Labels
bug Something isn't working lifecycle/stale

Comments

danmgs commented Apr 8, 2023

Describe the bug

Hi

I am trying to set up this Helm chart at work and I am getting an error:

Error: INSTALLATION FAILED: create: failed to create: the server responded with the status code 413 but did not return more information (post secrets)

Please find below a copy/paste of commands and exact outputs from powershell console.

PowerShell 7.2.9
Copyright (c) Microsoft Corporation.

https://aka.ms/powershell
Type 'help' to get help.

   A new PowerShell stable release is available: v7.3.3
   Upgrade now, or check out the release page at:
     https://aka.ms/PowerShell-Release?tag=v7.3.3

PS C:\Users\XXX> kubectl delete crd alertmanagerconfigs.monitoring.coreos.com
customresourcedefinition.apiextensions.k8s.io "alertmanagerconfigs.monitoring.coreos.com" deleted
PS C:\Users\XXX> kubectl delete crd alertmanagers.monitoring.coreos.com
customresourcedefinition.apiextensions.k8s.io "alertmanagers.monitoring.coreos.com" deleted
PS C:\Users\XXX> kubectl delete crd podmonitors.monitoring.coreos.com
customresourcedefinition.apiextensions.k8s.io "podmonitors.monitoring.coreos.com" deleted
PS C:\Users\XXX> kubectl delete crd probes.monitoring.coreos.com
customresourcedefinition.apiextensions.k8s.io "probes.monitoring.coreos.com" deleted
PS C:\Users\XXX> kubectl delete crd prometheuses.monitoring.coreos.com
customresourcedefinition.apiextensions.k8s.io "prometheuses.monitoring.coreos.com" deleted
PS C:\Users\XXX> kubectl delete crd prometheusrules.monitoring.coreos.com
customresourcedefinition.apiextensions.k8s.io "prometheusrules.monitoring.coreos.com" deleted
PS C:\Users\XXX> kubectl delete crd servicemonitors.monitoring.coreos.com
customresourcedefinition.apiextensions.k8s.io "servicemonitors.monitoring.coreos.com" deleted
PS C:\Users\XXX> kubectl delete crd thanosrulers.monitoring.coreos.com
customresourcedefinition.apiextensions.k8s.io "thanosrulers.monitoring.coreos.com" deleted
PS C:\Users\XXX> helm install --namespace demoprometheus kube-prometheus-stack prometheus-community/kube-prometheus-stack --version  45.8.1 --debug
install.go:192: [debug] Original chart version: "45.8.1"
install.go:209: [debug] CHART PATH: C:\Users\XXX\AppData\Local\Temp\helm\repository\kube-prometheus-stack-45.8.1.tgz

client.go:128: [debug] creating 1 resource(s)
client.go:128: [debug] creating 1 resource(s)
client.go:128: [debug] creating 1 resource(s)
client.go:128: [debug] creating 1 resource(s)
client.go:128: [debug] creating 1 resource(s)
client.go:128: [debug] creating 1 resource(s)
client.go:128: [debug] creating 1 resource(s)
client.go:128: [debug] creating 1 resource(s)
install.go:165: [debug] Clearing discovery cache
wait.go:66: [debug] beginning wait for 8 resources with timeout of 1m0s
Error: INSTALLATION FAILED: create: failed to create: the server responded with the status code 413 but did not return more information (post secrets)
helm.go:84: [debug] the server responded with the status code 413 but did not return more information (post secrets)
create: failed to create
helm.sh/helm/v3/pkg/storage/driver.(*Secrets).Create
        helm.sh/helm/v3/pkg/storage/driver/secrets.go:164
helm.sh/helm/v3/pkg/storage.(*Storage).Create
        helm.sh/helm/v3/pkg/storage/storage.go:69
helm.sh/helm/v3/pkg/action.(*Install).RunWithContext
        helm.sh/helm/v3/pkg/action/install.go:340
main.runInstall
        helm.sh/helm/v3/cmd/helm/install.go:278
main.newInstallCmd.func2
        helm.sh/helm/v3/cmd/helm/install.go:139
github.com/spf13/cobra.(*Command).execute
        github.com/spf13/cobra@v1.5.0/command.go:872
github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/cobra@v1.5.0/command.go:990
github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/cobra@v1.5.0/command.go:918
main.main
        helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
        runtime/proc.go:250
runtime.goexit
        runtime/asm_amd64.s:1571
INSTALLATION FAILED
main.newInstallCmd.func2
        helm.sh/helm/v3/cmd/helm/install.go:141
github.com/spf13/cobra.(*Command).execute
        github.com/spf13/cobra@v1.5.0/command.go:872
github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/cobra@v1.5.0/command.go:990
github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/cobra@v1.5.0/command.go:918
main.main
        helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
        runtime/proc.go:250
runtime.goexit
        runtime/asm_amd64.s:1571
PS C:\Users\XXX>
  • I have also tried lower versions of the Helm chart, with the same error.
  • I make sure to delete the CRDs before running the helm install command.
  • I am running these commands to install the chart into a Kubernetes cluster managed by Rancher. I do not get this error at home with my MicroK8s dev setup.

Any help would be appreciated.

Thank you

What's your helm version?

3.10.2

What's your kubectl version?

1.25.1

Which chart?

kube-prometheus-stack

What's the chart version?

45.8.1

What happened?

Error message is:
INSTALLATION FAILED: create: failed to create: the server responded with the status code 413 but did not return more information (post secrets)

What you expected to happen?

No response

How to reproduce it?

See commands below

Enter the changed values of values.yaml?

NONE

Enter the command that you execute that is failing/misfunctioning.

kubectl delete crd alertmanagerconfigs.monitoring.coreos.com
kubectl delete crd alertmanagers.monitoring.coreos.com
kubectl delete crd podmonitors.monitoring.coreos.com
kubectl delete crd probes.monitoring.coreos.com
kubectl delete crd prometheuses.monitoring.coreos.com
kubectl delete crd prometheusrules.monitoring.coreos.com
kubectl delete crd servicemonitors.monitoring.coreos.com
kubectl delete crd thanosrulers.monitoring.coreos.com

helm install --namespace demoprometheus kube-prometheus-stack prometheus-community/kube-prometheus-stack --version 45.8.1 --debug

Anything else we need to know?

No response

@danmgs danmgs added the bug Something isn't working label Apr 8, 2023
zeritti (Member) commented Apr 8, 2023

It looks like the release secret Helm is creating is too large (HTTP 413 Content Too Large). Secrets are the default storage driver used by Helm, and the contents Helm finds for that release and stores in the release secret can exceed the 1 MiB limit.
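As a quick check, the stored release size can be inspected on a cluster where the install succeeds (for example the MicroK8s setup). Helm's secrets driver labels its release secrets with `owner=helm` and `name=<release>`; a sketch, assuming the release name and namespace from this report:

```shell
# Inspect the Helm release secret size on a cluster where the install
# succeeded. The release payload lives in the secret's "release" data key;
# the count below is of the base64-encoded payload (~4/3 of the raw size).
kubectl -n demoprometheus get secret \
  -l owner=helm,name=kube-prometheus-stack \
  -o jsonpath='{.items[0].data.release}' | wc -c
```

If the payload is near 1 MiB, any proxy in front of kube-apiserver with a smaller request-body limit will reject the secret creation with 413.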

However, the error does not appear to come from kube-apiserver, as one would then expect an error like Secret is invalid: data: Too long: must have at most 1048576 bytes. If a load balancer or proxy sits in front of your kube-apiservers, it is likely responsible for the error. If you are deploying through the Rancher server while accessing it via an ingress, that ingress controller is probably where you would need to look to increase the maximum request body size.
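If that ingress is ingress-nginx, the limit can typically be raised per-Ingress with the proxy-body-size annotation. A sketch with illustrative names (`rancher` in `cattle-system` is where Rancher's Ingress usually lives, but verify yours with `kubectl get ingress -A`):

```yaml
# Illustrative: raise the request-body limit for traffic through this Ingress.
# ingress-nginx translates this annotation into nginx's client_max_body_size.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rancher             # illustrative; check your actual Ingress name
  namespace: cattle-system
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "20m"
spec:
  rules:
    - host: rancher.example.com   # illustrative host
```

The same annotation can be applied in place with `kubectl -n cattle-system annotate ingress rancher nginx.ingress.kubernetes.io/proxy-body-size=20m --overwrite`.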

@npathadet

Having the same issue while deploying via Rancher.

@janisii

janisii commented Apr 27, 2023

My Kubernetes cluster was behind an nginx reverse proxy, so I had to update nginx.conf with:

client_max_body_size 20M;
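For context, the directive belongs in the `http`, `server`, or `location` block; nginx's default `client_max_body_size` is 1m, which is exactly what the oversized release secret trips over. A minimal sketch, with illustrative host and upstream names:

```nginx
# nginx reverse proxy in front of kube-apiserver (illustrative config).
http {
    server {
        listen 6443 ssl;
        server_name k8s.example.com;       # illustrative

        client_max_body_size 20M;          # default is 1m; nginx returns 413 when exceeded

        location / {
            proxy_pass https://kube_apiserver;   # illustrative upstream
        }
    }
}
```

Reload nginx after the change (`nginx -s reload`).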

@stale

stale bot commented Jun 10, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

@stale

stale bot commented Aug 12, 2023

This issue is being automatically closed due to inactivity.

@stale stale bot closed this as completed Aug 12, 2023