fatal error: concurrent map iteration and map write #4753
Comments
It looks like the stack trace occurs right when the Prometheus metrics are being gathered, here: Lines 209 to 210 in e7d93f2
Note: the feature that adds a Prometheus metrics endpoint was introduced back in v2.4.0 in #2171. I'm not sure what would cause this bug to trigger.
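For what it's worth, here is a minimal sketch of the general failure mode the trace suggests: grpc_prometheus.Register walks the gRPC server's service map via (*grpc.Server).GetServiceInfo, so if it runs on one goroutine while another goroutine is still registering services against the same *grpc.Server, the Go runtime can abort with exactly this fatal error. The health-service registration, port, and ordering below are illustrative assumptions, not Tiller's actual startup code; the thread does not pin down which write races with the iteration in Tiller itself.

```go
package main

import (
	"net"

	grpc_prometheus "github.com/grpc-ecosystem/go-grpc-prometheus"
	"google.golang.org/grpc"
	"google.golang.org/grpc/health"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	srv := grpc.NewServer()

	// Goroutine A: initialize Prometheus metrics. Register calls
	// srv.GetServiceInfo(), which iterates the server's service map.
	go grpc_prometheus.Register(srv)

	// Goroutine B (here, the main goroutine): register another service.
	// RegisterService writes to the same map, so the two goroutines can
	// race and crash with
	// "fatal error: concurrent map iteration and map write".
	// (The health service and port are purely for illustration.)
	healthpb.RegisterHealthServer(srv, health.NewServer())

	lis, _ := net.Listen("tcp", ":44134")
	_ = srv.Serve(lis)
}
```

The usual mitigation for this shape of race is to finish all service registration before handing the server to grpc_prometheus.Register, or to serialize the two with a mutex or channel.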
Maybe we need to update the Prometheus dependency?
We're continuing to see this issue.
Also experienced this error
This is using Tiller outside the cluster as well.
Still getting this on Tiller 2.14.0, although very rarely
Same here with Helm 2.14.1:
Same here with 2.14.2:
fatal error: concurrent map iteration and map write
goroutine 27 [running]:
goroutine 1 [select]:
We have exactly the same issue.
Closing as inactive.
I have exactly the same error message.
It's rare, but I ran into this today as well. Attaching my stacktrace since my version and line numbers are a bit different than others already posted.
This was with … It didn't happen again when I retried the failed job.
I'm not sure it was appropriate to close this issue, at least not with the reason given, as it was certainly not "inactive" at the time of closure.
I would suggest reopening this issue, as it seems to affect version 2.16.7 too:
Thanks for reopening this issue, @bacongobbler.
Not sure. Please submit bug fixes when you identify the issue causing this symptom. Gentle reminder that Helm 2 will no longer accept bug fixes after August 13th, so if you identify a bug, please submit a patch soon, before the patch window closes. See the blog post for more details: https://helm.sh/blog/covid-19-extending-helm-v2-bug-fixes/
We are no longer accepting bug fixes for Helm 2. See https://helm.sh/blog/helm-v2-deprecation-timeline/ for more details.
I do not have concrete steps to reproduce this, as it happens intermittently, but it happens when starting Tiller (we are trying out running Tiller outside the cluster [tillerless]).
Output of helm version: v2.11.0 (both client and Tiller)
Output of kubectl version: v1.10.7
Cloud Provider/Platform (AKS, GKE, Minikube etc.): AWS
Panic stack trace:
fatal error: concurrent map iteration and map write
goroutine 9 [running]:
runtime.throw(0x165df4d, 0x26)
runtime.mapiternext(0xc4208b2dd0)
k8s.io/helm/vendor/google.golang.org/grpc.(*Server).GetServiceInfo(0xc4203a7900, 0x78d7222a)
k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus.(*ServerMetrics).InitializeMetrics(0xc420238090, 0xc4203a7900)
k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus.Register(0xc4203a7900)
main.start.func2(0xc4200465a0)
runtime.goexit()
created by main.start
goroutine 1 [select]:
main.start()
main.main()
goroutine 38 [chan receive]:
k8s.io/helm/vendor/github.com/golang/glog.(*loggingT).flushDaemon(0x22429c0)
created by k8s.io/helm/vendor/github.com/golang/glog.init.0
goroutine 18 [syscall]:
os/signal.signal_recv(0x0)
os/signal.loop()
created by os/signal.init.0
goroutine 8 [runnable]:
internal/poll.(*pollDesc).waitRead(0xc420462598, 0xffffffffffffff00, 0x0, 0x0)
internal/poll.(*FD).Accept(0xc420462580, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
net.(*netFD).accept(0xc420462580, 0x1539a80, 0x71b97f725a704524, 0xc420459600)
net.(*TCPListener).accept(0xc42000c560, 0x4297b9, 0xc4201a0440, 0xc420459650)
net.(*TCPListener).Accept(0xc42000c560, 0x16d11d0, 0xc4203a7900, 0x17873c0, 0xc42000c560)
k8s.io/helm/vendor/google.golang.org/grpc.(*Server).Serve(0xc4203a7900, 0x17873c0, 0xc42000c560, 0x0, 0x0)
main.start.func1(0x17a16a0, 0xc4203cc510, 0x17873c0, 0xc42000c560, 0xc420046540)
created by main.start
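For readers less familiar with this class of crash, the sketch below (standalone, unrelated to Helm's code, with arbitrary map contents) reproduces the same runtime fault in isolation: one goroutine ranges over a map while another writes to it without synchronization.

```go
package main

// Standalone illustration of the runtime fault in the trace above:
// iterating a map in one goroutine while another goroutine writes to it.
// The Go runtime's concurrent-map checks abort the whole process with
// "fatal error: concurrent map iteration and map write"; this is not a
// panic and cannot be caught with recover(), which is why Tiller dies
// outright.
func main() {
	m := make(map[int]int)
	for i := 0; i < 1024; i++ {
		m[i] = i
	}

	// Writer goroutine: keeps mutating the map.
	go func() {
		for i := 0; ; i++ {
			m[i%1024] = i
		}
	}()

	// Reader (main goroutine): keeps iterating the map.
	for {
		sum := 0
		for _, v := range m {
			sum += v
		}
		_ = sum
	}
}
```

Guarding the shared map with a sync.RWMutex, or using sync.Map, avoids the crash in code structured like this.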