fatal error: concurrent map iteration and map write #4753

Closed
rajatjindal opened this issue Oct 9, 2018 · 17 comments
Labels
bug Categorizes issue or PR as related to a bug. unconfirmed v2.x Issues and Pull Requests related to the major version v2

Comments

@rajatjindal
Contributor

I do not have concrete steps to reproduce this since it happens intermittently, but it happens when starting Tiller (we are trying out running Tiller outside the cluster [tillerless]).

Output of helm version: v2.11.0 (both client and tiller)

Output of kubectl version: v1.10.7

Cloud Provider/Platform (AKS, GKE, Minikube etc.): aws

Panic stack trace:

fatal error: concurrent map iteration and map write

goroutine 9 [running]:

runtime.throw(0x165df4d, 0x26)

/usr/local/go/src/runtime/panic.go:616 +0x81 fp=0xc4208b2ba8 sp=0xc4208b2b88 pc=0x42aca1

runtime.mapiternext(0xc4208b2dd0)

/usr/local/go/src/runtime/hashmap.go:747 +0x55c fp=0xc4208b2c38 sp=0xc4208b2ba8 pc=0x408b0c

k8s.io/helm/vendor/google.golang.org/grpc.(*Server).GetServiceInfo(0xc4203a7900, 0x78d7222a)

/go/src/k8s.io/helm/vendor/google.golang.org/grpc/server.go:396 +0x3f5 fp=0xc4208b2e40 sp=0xc4208b2c38 pc=0x7f9475

k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus.(*ServerMetrics).InitializeMetrics(0xc420238090, 0xc4203a7900)

/go/src/k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus/server_metrics.go:109 +0x50 fp=0xc4208b2f60 sp=0xc4208b2e40 pc=0x807290

k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus.Register(0xc4203a7900)

/go/src/k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus/server.go:38 +0x37 fp=0xc4208b2f80 sp=0xc4208b2f60 pc=0x806a97

main.start.func2(0xc4200465a0)

/go/src/k8s.io/helm/cmd/tiller/tiller.go:210 +0x3f fp=0xc4208b2fd8 sp=0xc4208b2f80 pc=0x12b82df

runtime.goexit()

/usr/local/go/src/runtime/asm_amd64.s:2361 +0x1 fp=0xc4208b2fe0 sp=0xc4208b2fd8 pc=0x459d51

created by main.start

/go/src/k8s.io/helm/cmd/tiller/tiller.go:206 +0xb63

goroutine 1 [select]:

main.start()

/go/src/k8s.io/helm/cmd/tiller/tiller.go:220 +0xc5a

main.main()

/go/src/k8s.io/helm/cmd/tiller/tiller.go:115 +0x105

goroutine 38 [chan receive]:

k8s.io/helm/vendor/github.com/golang/glog.(*loggingT).flushDaemon(0x22429c0)

/go/src/k8s.io/helm/vendor/github.com/golang/glog/glog.go:879 +0x8b

created by k8s.io/helm/vendor/github.com/golang/glog.init.0

/go/src/k8s.io/helm/vendor/github.com/golang/glog/glog.go:410 +0x203

goroutine 18 [syscall]:

os/signal.signal_recv(0x0)

/usr/local/go/src/runtime/sigqueue.go:139 +0xa6

os/signal.loop()

/usr/local/go/src/os/signal/signal_unix.go:22 +0x22

created by os/signal.init.0

/usr/local/go/src/os/signal/signal_unix.go:28 +0x41

goroutine 8 [runnable]:

internal/poll.(*pollDesc).waitRead(0xc420462598, 0xffffffffffffff00, 0x0, 0x0)

/usr/local/go/src/internal/poll/fd_poll_runtime.go:89 +0x60

internal/poll.(*FD).Accept(0xc420462580, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)

/usr/local/go/src/internal/poll/fd_unix.go:372 +0x1a8

net.(*netFD).accept(0xc420462580, 0x1539a80, 0x71b97f725a704524, 0xc420459600)

/usr/local/go/src/net/fd_unix.go:238 +0x42

net.(*TCPListener).accept(0xc42000c560, 0x4297b9, 0xc4201a0440, 0xc420459650)

/usr/local/go/src/net/tcpsock_posix.go:136 +0x2e

net.(*TCPListener).Accept(0xc42000c560, 0x16d11d0, 0xc4203a7900, 0x17873c0, 0xc42000c560)

/usr/local/go/src/net/tcpsock.go:259 +0x49

k8s.io/helm/vendor/google.golang.org/grpc.(*Server).Serve(0xc4203a7900, 0x17873c0, 0xc42000c560, 0x0, 0x0)

/go/src/k8s.io/helm/vendor/google.golang.org/grpc/server.go:463 +0x179

main.start.func1(0x17a16a0, 0xc4203cc510, 0x17873c0, 0xc42000c560, 0xc420046540)

/go/src/k8s.io/helm/cmd/tiller/tiller.go:201 +0x11c

created by main.start

/go/src/k8s.io/helm/cmd/tiller/tiller.go:197 +0xb3e
@bacongobbler
Member

Looks like the stack trace occurs right when the gRPC server is being registered with Prometheus, here:

helm/cmd/tiller/tiller.go

Lines 209 to 210 in e7d93f2

// Register gRPC server to prometheus to initialized matrix
goprom.Register(rootServer)
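
For context, here is a minimal standalone sketch (not the Helm code itself, and using the upstream import paths rather than the vendored k8s.io/helm/vendor ones) of the pattern the stack traces point at: one goroutine registers a gRPC service, which writes the server's internal service map, while another goroutine calls grpc_prometheus.Register, which iterates that same map via (*grpc.Server).GetServiceInfo. The health service stands in for Tiller's release service purely for illustration.

package main

import (
	"net"

	grpc_prometheus "github.com/grpc-ecosystem/go-grpc-prometheus"
	"google.golang.org/grpc"
	"google.golang.org/grpc/health"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	rootServer := grpc.NewServer()

	lis, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}

	go func() {
		// Writes to the server's internal service map, then blocks in Serve.
		healthpb.RegisterHealthServer(rootServer, health.NewServer())
		_ = rootServer.Serve(lis)
	}()

	go func() {
		// Iterates the same service map via GetServiceInfo; if this runs
		// while the registration above is still writing, the runtime can
		// abort with "concurrent map iteration and map write".
		grpc_prometheus.Register(rootServer)
	}()

	// Block so both goroutines get a chance to race (Ctrl-C to stop).
	select {}
}

Building this sketch with the race detector (go run -race) should flag the ordering as a data race even on runs where the fatal error itself does not fire.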

@bacongobbler
Member

bacongobbler commented Oct 9, 2018

Note: the feature to add a Prometheus metrics endpoint was introduced way back in v2.4.0 in #2171. I'm not sure what would trigger this bug.

@bacongobbler bacongobbler added bug Categorizes issue or PR as related to a bug. unconfirmed labels Oct 9, 2018
@rajatjindal
Contributor Author

prometheus/prometheus#3735

Maybe we need to update the prometheus dependency?

@johngmyers

We're continuing to see this issue.

@2ZZ

2ZZ commented Apr 24, 2019

Also experienced this error

[main] 2019/04/24 12:05:17 Starting Tiller v2.13.1 (tls=false)
13:05:17  [main] 2019/04/24 12:05:17 GRPC listening on 127.0.0.1:44134
13:05:17  [main] 2019/04/24 12:05:17 Probes listening on :44135
13:05:17  [main] 2019/04/24 12:05:17 Storage driver is Secret
13:05:17  [main] 2019/04/24 12:05:17 Max history per release is 0
13:05:17  fatal error: concurrent map iteration and map write
13:05:17  
13:05:17  goroutine 10 [running]:
13:05:17  runtime.throw(0x16d4b37, 0x26)
13:05:17  	/usr/local/go/src/runtime/panic.go:608 +0x72 fp=0xc0004dcb90 sp=0xc0004dcb60 pc=0x42c2a2
13:05:17  runtime.mapiternext(0xc0004dcdd0)
13:05:17  	/usr/local/go/src/runtime/map.go:790 +0x525 fp=0xc0004dcc18 sp=0xc0004dcb90 pc=0x40eb85
13:05:17  runtime.mapiterinit(0x14bd440, 0xc0004aaea0, 0xc0004dcdd0)
13:05:17  	/usr/local/go/src/runtime/map.go:780 +0x1c4 fp=0xc0004dcc38 sp=0xc0004dcc18 pc=0x40e564
13:05:17  k8s.io/helm/vendor/google.golang.org/grpc.(*Server).GetServiceInfo(0xc0000a7080, 0x412412)
13:05:17  	/go/src/k8s.io/helm/vendor/google.golang.org/grpc/server.go:452 +0x96 fp=0xc0004dce40 sp=0xc0004dcc38 pc=0x854376
13:05:17  k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus.(*ServerMetrics).InitializeMetrics(0xc0001e0090, 0xc0000a7080)
13:05:17  	/go/src/k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus/server_metrics.go:109 +0x50 fp=0xc0004dcf60 sp=0xc0004dce40 pc=0x86d0d0
13:05:17  k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus.Register(0xc0000a7080)
13:05:17  	/go/src/k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus/server.go:38 +0x37 fp=0xc0004dcf80 sp=0xc0004dcf60 pc=0x86c887
13:05:17  main.start.func2(0xc000088a20)
13:05:17  	/go/src/k8s.io/helm/cmd/tiller/tiller.go:213 +0x3f fp=0xc0004dcfd8 sp=0xc0004dcf80 pc=0x137513f
13:05:17  runtime.goexit()
13:05:17  	/usr/local/go/src/runtime/asm_amd64.s:1333 +0x1 fp=0xc0004dcfe0 sp=0xc0004dcfd8 pc=0x45be71
13:05:17  created by main.start
13:05:17  	/go/src/k8s.io/helm/cmd/tiller/tiller.go:209 +0xaf4
13:05:17  
13:05:17  goroutine 1 [select]:
13:05:17  main.start()
13:05:17  	/go/src/k8s.io/helm/cmd/tiller/tiller.go:223 +0xbdb
13:05:17  main.main()
13:05:17  	/go/src/k8s.io/helm/cmd/tiller/tiller.go:118 +0x10b
13:05:17  
13:05:17  goroutine 24 [chan receive]:
13:05:17  k8s.io/helm/vendor/k8s.io/klog.(*loggingT).flushDaemon(0x2734be0)
13:05:17  	/go/src/k8s.io/helm/vendor/k8s.io/klog/klog.go:941 +0x8b
13:05:17  created by k8s.io/helm/vendor/k8s.io/klog.init.0
13:05:17  	/go/src/k8s.io/helm/vendor/k8s.io/klog/klog.go:403 +0x69
13:05:17  
13:05:17  goroutine 27 [syscall]:
13:05:17  os/signal.signal_recv(0x0)
13:05:17  	/usr/local/go/src/runtime/sigqueue.go:139 +0x9c
13:05:17  os/signal.loop()
13:05:17  	/usr/local/go/src/os/signal/signal_unix.go:23 +0x22
13:05:17  created by os/signal.init.0
13:05:17  	/usr/local/go/src/os/signal/signal_unix.go:29 +0x41
13:05:17  
13:05:17  goroutine 9 [runnable]:
13:05:17  internal/poll.accept(0x3, 0x174b400, 0x0, 0x0, 0x7f4e48559ee0, 0x457120, 0x7f4e4a9686c0, 0x203016)
13:05:17  	/usr/local/go/src/internal/poll/sock_cloexec.go:16 +0x310
13:05:17  internal/poll.(*FD).Accept(0xc000498900, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
13:05:17  	/usr/local/go/src/internal/poll/fd_unix.go:377 +0xd1
13:05:17  net.(*netFD).accept(0xc000498900, 0x9c00000000000000, 0xc0000a7168, 0x9cfaa8a5d3fdccab)
13:05:17  	/usr/local/go/src/net/fd_unix.go:238 +0x42
13:05:17  net.(*TCPListener).accept(0xc0000a2ac8, 0x42b002, 0xc0000a85b0, 0xc0005efe78)
13:05:17  	/usr/local/go/src/net/tcpsock_posix.go:139 +0x2e
13:05:17  net.(*TCPListener).Accept(0xc0000a2ac8, 0x174cee8, 0xc0000a7080, 0xc000530060, 0x0)
13:05:17  	/usr/local/go/src/net/tcpsock.go:260 +0x47
13:05:17  k8s.io/helm/vendor/google.golang.org/grpc.(*Server).Serve(0xc0000a7080, 0x187c940, 0xc0000a2ac8, 0x0, 0x0)
13:05:17  	/go/src/k8s.io/helm/vendor/google.golang.org/grpc/server.go:556 +0x210
13:05:17  main.start.func1(0xc00016cc60, 0x187c940, 0xc0000a2ac8, 0xc000088960)
13:05:17  	/go/src/k8s.io/helm/cmd/tiller/tiller.go:204 +0x114
13:05:17  created by main.start
13:05:17  	/go/src/k8s.io/helm/cmd/tiller/tiller.go:200 +0xacf

This is also using Tiller outside the cluster.

@2ZZ

2ZZ commented Jul 9, 2019

Still getting this on Tiller 2.14.0, although very rarely

14:27:57  [main] 2019/07/09 13:27:56 Starting Tiller v2.14.0 (tls=false)
14:27:57  [main] 2019/07/09 13:27:56 GRPC listening on 127.0.0.1:44134
14:27:57  [main] 2019/07/09 13:27:56 Probes listening on :44135
14:27:57  [main] 2019/07/09 13:27:56 Storage driver is Secret
14:27:57  [main] 2019/07/09 13:27:56 Max history per release is 0
14:27:57  fatal error: concurrent map iteration and map write
14:27:57  
14:27:57  goroutine 34 [running]:
14:27:57  runtime.throw(0x1948292, 0x26)
14:27:57  	/usr/local/go/src/runtime/panic.go:617 +0x72 fp=0xc000526bd0 sp=0xc000526ba0 pc=0x42d942
14:27:57  runtime.mapiternext(0xc000526df8)
...

@vbehar

vbehar commented Jul 19, 2019

same here with helm 2.14.1:

[main] 2019/07/19 10:15:08 Starting Tiller v2.14.1 (tls=false)
[main] 2019/07/19 10:15:08 GRPC listening on 127.0.0.1:44004
[main] 2019/07/19 10:15:08 Probes listening on 127.0.0.1:44014
[main] 2019/07/19 10:15:08 Storage driver is Secret
[main] 2019/07/19 10:15:08 Max history per release is 20

fatal error: concurrent map iteration and map write

goroutine 36 [running]:
runtime.throw(0x1949292, 0x26)
	/usr/local/go/src/runtime/panic.go:617 +0x72 fp=0xc000712bb0 sp=0xc000712b80 pc=0x42d942
runtime.mapiternext(0xc000712df8)
	/usr/local/go/src/runtime/map.go:860 +0x597 fp=0xc000712c38 sp=0xc000712bb0 pc=0x40eff7
runtime.mapiterinit(0x16f4d40, 0xc0006e66c0, 0xc000712df8)
	/usr/local/go/src/runtime/map.go:850 +0x1c9 fp=0xc000712c58 sp=0xc000712c38 pc=0x40e969
k8s.io/helm/vendor/google.golang.org/grpc.(*Server).GetServiceInfo(0xc000489500, 0xc000056500)
	/go/src/k8s.io/helm/vendor/google.golang.org/grpc/server.go:452 +0x96 fp=0xc000712e68 sp=0xc000712c58 pc=0x8a94b6
k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus.(*ServerMetrics).InitializeMetrics(0xc00019c2d0, 0xc000489500)
	/go/src/k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus/server_metrics.go:109 +0x43 fp=0xc000712f88 sp=0xc000712e68 pc=0x8c2943
k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus.Register(...)
	/go/src/k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus/server.go:38
main.start.func2(0xc00004ccc0)
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:231 +0x4c fp=0xc000712fd8 sp=0xc000712f88 pc=0x158a6ac
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1337 +0x1 fp=0xc000712fe0 sp=0xc000712fd8 pc=0x45c9e1
created by main.start
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:227 +0xb4a

goroutine 1 [select]:
main.start()
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:241 +0xc31
main.main()
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:124 +0x163

goroutine 21 [chan receive]:
k8s.io/helm/vendor/k8s.io/klog.(*loggingT).flushDaemon(0x2ce91a0)
	/go/src/k8s.io/helm/vendor/k8s.io/klog/klog.go:943 +0x8b
created by k8s.io/helm/vendor/k8s.io/klog.init.0
	/go/src/k8s.io/helm/vendor/k8s.io/klog/klog.go:403 +0x6c

goroutine 14 [syscall]:
os/signal.signal_recv(0x0)
	/usr/local/go/src/runtime/sigqueue.go:139 +0x9c
os/signal.loop()
	/usr/local/go/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.init.0
	/usr/local/go/src/os/signal/signal_unix.go:29 +0x41

goroutine 35 [runnable]:
net.(*TCPListener).Accept(0xc00012ee78, 0x19d0388, 0xc000489500, 0xc00071e000, 0x20)
	/usr/local/go/src/net/tcpsock.go:256 +0x207
k8s.io/helm/vendor/google.golang.org/grpc.(*Server).Serve(0xc000489500, 0x1bbaee0, 0xc00012ee78, 0x0, 0x0)
	/go/src/k8s.io/helm/vendor/google.golang.org/grpc/server.go:556 +0x1e9
main.start.func1(0xc0000f9e00, 0x1bbaee0, 0xc00012ee78, 0xc00004cc60)
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:222 +0x121
created by main.start
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:218 +0xb25

@ddrechse

same here with 2.14.2

fatal error: concurrent map iteration and map write

goroutine 27 [running]:
runtime.throw(0x19424ce, 0x26)
/usr/lib/golang/src/runtime/panic.go:617 +0x72 fp=0xc00001abd0 sp=0xc00001aba0 pc=0x42d502
runtime.mapiternext(0xc00001adf8)
/usr/lib/golang/src/runtime/map.go:860 +0x520 fp=0xc00001ac58 sp=0xc00001abd0 pc=0x40ee90
k8s.io/helm/vendor/google.golang.org/grpc.(*Server).GetServiceInfo(0xc000475680, 0xc00004e500)
/go/src/k8s.io/helm/vendor/google.golang.org/grpc/server.go:452 +0x3eb fp=0xc00001ae68 sp=0xc00001ac58 pc=0x8a55eb
k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus.(*ServerMetrics).InitializeMetrics(0xc0002022d0, 0xc000475680)
/go/src/k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus/server_metrics.go:109 +0x43 fp=0xc00001af88 sp=0xc00001ae68 pc=0x8be613
k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus.Register(...)
/go/src/k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus/server.go:38
main.start.func2(0xc000217ce0)
/go/src/k8s.io/helm/cmd/tiller/tiller.go:231 +0x4c fp=0xc00001afd8 sp=0xc00001af88 pc=0x1583f9c
runtime.goexit()
/usr/lib/golang/src/runtime/asm_amd64.s:1337 +0x1 fp=0xc00001afe0 sp=0xc00001afd8 pc=0x45c2d1
created by main.start
/go/src/k8s.io/helm/cmd/tiller/tiller.go:227 +0xb4a

goroutine 1 [select]:
main.start()
/go/src/k8s.io/helm/cmd/tiller/tiller.go:241 +0xc31
main.main()
/go/src/k8s.io/helm/cmd/tiller/tiller.go:124 +0x163

@ipleten

ipleten commented Mar 16, 2020

We have exactly the same issue.
We are using the 'tiller-less' plugin for Helm 2 (https://github.com/rimusz/helm-tiller).
To reproduce, we run in a loop:
helm tiller run ping-services -- bash -c 'echo running helm; helm list'
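
Since the reproduction above shows this is a startup-time race, one possible mitigation is sketched below, under the assumption that the race is between service registration in one goroutine and grpc_prometheus.Register in another; this is not the actual Helm patch, just an illustration of doing both registrations sequentially before handing the server to Serve. The health service and the listen address are stand-ins for illustration.

package main

import (
	"net"

	grpc_prometheus "github.com/grpc-ecosystem/go-grpc-prometheus"
	"google.golang.org/grpc"
	"google.golang.org/grpc/health"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	rootServer := grpc.NewServer()

	// Register every service first, then the Prometheus metrics, all in one
	// goroutine, so nothing iterates the service map while it is being written.
	healthpb.RegisterHealthServer(rootServer, health.NewServer())
	grpc_prometheus.Register(rootServer)

	lis, err := net.Listen("tcp", "127.0.0.1:44134")
	if err != nil {
		panic(err)
	}
	if err := rootServer.Serve(lis); err != nil {
		panic(err)
	}
}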

@bacongobbler
Member

closing as inactive.

@tkbrex

tkbrex commented Jun 9, 2020

I have exactly the same error message.


[main] 2020/06/09 16:19:38 Starting Tiller v2.14.3 (tls=false)
[main] 2020/06/09 16:19:38 GRPC listening on :44134
[main] 2020/06/09 16:19:38 Probes listening on :44135
[main] 2020/06/09 16:19:38 Storage driver is Memory
[main] 2020/06/09 16:19:38 Max history per release is 0
fatal error: concurrent map iteration and map write

goroutine 15 [running]:
runtime.throw(0x194952e, 0x26)
	/usr/local/go/src/runtime/panic.go:617 +0x72 fp=0xc000294bb0 sp=0xc000294b80 pc=0x42d942
runtime.mapiternext(0xc000294df8)
	/usr/local/go/src/runtime/map.go:860 +0x597 fp=0xc000294c38 sp=0xc000294bb0 pc=0x40eff7
runtime.mapiterinit(0x16f4dc0, 0xc0002e4330, 0xc000294df8)
	/usr/local/go/src/runtime/map.go:850 +0x1c9 fp=0xc000294c58 sp=0xc000294c38 pc=0x40e969
k8s.io/helm/vendor/google.golang.org/grpc.(*Server).GetServiceInfo(0xc000472900, 0xc00004c000)
	/go/src/k8s.io/helm/vendor/google.golang.org/grpc/server.go:452 +0x96 fp=0xc000294e68 sp=0xc000294c58 pc=0x8a94b6
k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus.(*ServerMetrics).InitializeMetrics(0xc0001ae2d0, 0xc000472900)
	/go/src/k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus/server_metrics.go:109 +0x43 fp=0xc000294f88 sp=0xc000294e68 pc=0x8c2943
k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus.Register(...)
	/go/src/k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus/server.go:38
main.start.func2(0xc0000ac7e0)
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:231 +0x4c fp=0xc000294fd8 sp=0xc000294f88 pc=0x158b1cc
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1337 +0x1 fp=0xc000294fe0 sp=0xc000294fd8 pc=0x45c9e1
created by main.start
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:227 +0xb4a

goroutine 1 [select]:
main.start()
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:241 +0xc31
main.main()
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:124 +0x163

goroutine 20 [chan receive]:
k8s.io/helm/vendor/k8s.io/klog.(*loggingT).flushDaemon(0x2ce91a0)
	/go/src/k8s.io/helm/vendor/k8s.io/klog/klog.go:943 +0x8b
created by k8s.io/helm/vendor/k8s.io/klog.init.0
	/go/src/k8s.io/helm/vendor/k8s.io/klog/klog.go:403 +0x6c

goroutine 12 [syscall]:
os/signal.signal_recv(0x0)
	/usr/local/go/src/runtime/sigqueue.go:139 +0x9c
os/signal.loop()
	/usr/local/go/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.init.0
	/usr/local/go/src/os/signal/signal_unix.go:29 +0x41

goroutine 14 [runnable]:
internal/poll.(*pollDesc).wait(0xc0002aed98, 0x72, 0x0, 0x0, 0x191da00)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x16b
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Accept(0xc0002aed80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_unix.go:384 +0x1ba
net.(*netFD).accept(0xc0002aed80, 0xc00045e6f8, 0x458f90, 0xb800000000000010)
	/usr/local/go/src/net/fd_unix.go:238 +0x42
net.(*TCPListener).accept(0xc00014c758, 0x42c6bf, 0xc0002de0b0, 0xc00045e678)
	/usr/local/go/src/net/tcpsock_posix.go:139 +0x32
net.(*TCPListener).Accept(0xc00014c758, 0x19d06b8, 0xc000472900, 0xc0003cc040, 0x20)
	/usr/local/go/src/net/tcpsock.go:260 +0x48
k8s.io/helm/vendor/google.golang.org/grpc.(*Server).Serve(0xc000472900, 0x1bbb360, 0xc00014c758, 0x0, 0x0)
	/go/src/k8s.io/helm/vendor/google.golang.org/grpc/server.go:556 +0x1e9
main.start.func1(0xc0000f8000, 0x1bbb360, 0xc00014c758, 0xc0000ac6c0)
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:222 +0x121
created by main.start
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:218 +0xb25


@keeganwitt

It's rare, but I ran into this today as well. Attaching my stack trace since my version and line numbers are a bit different from those already posted.

fatal error: concurrent map iteration and map write
goroutine 26 [running]:
runtime.throw(0x19d14ee, 0x26)
	/usr/local/go/src/runtime/panic.go:774 +0x72 fp=0xc00060cbd0 sp=0xc00060cba0 pc=0x42f522
runtime.mapiternext(0xc00060cdf8)
	/usr/local/go/src/runtime/map.go:858 +0x579 fp=0xc00060cc58 sp=0xc00060cbd0 pc=0x40f4d9
k8s.io/helm/vendor/google.golang.org/grpc.(*Server).GetServiceInfo(0xc0001d8d80, 0xc0000676b0)
	/go/src/k8s.io/helm/vendor/google.golang.org/grpc/server.go:452 +0x3eb fp=0xc00060ce68 sp=0xc00060cc58 pc=0x8ca9bb
k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus.(*ServerMetrics).InitializeMetrics(0xc000176000, 0xc0001d8d80)
	/go/src/k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus/server_metrics.go:133 +0x43 fp=0xc00060cf88 sp=0xc00060ce68 pc=0x8e5cf3
k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus.Register(...)
	/go/src/k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus/server.go:38
main.start.func2(0xc00004c300)
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:238 +0x5b fp=0xc00060cfd8 sp=0xc00060cf88 pc=0x15eb7cb
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc00060cfe0 sp=0xc00060cfd8 pc=0x45eeb1
created by main.start
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:230 +0xa9e
goroutine 1 [select]:
main.start()
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:248 +0xb85
main.main()
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:125 +0x161
goroutine 33 [chan receive]:
k8s.io/helm/vendor/k8s.io/klog.(*loggingT).flushDaemon(0x2b470a0)
	/go/src/k8s.io/helm/vendor/k8s.io/klog/klog.go:1018 +0x8b
created by k8s.io/helm/vendor/k8s.io/klog.init.0
	/go/src/k8s.io/helm/vendor/k8s.io/klog/klog.go:404 +0x6c
goroutine 15 [syscall]:
os/signal.signal_recv(0x0)
	/usr/local/go/src/runtime/sigqueue.go:147 +0x9c
os/signal.loop()
	/usr/local/go/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.init.0
	/usr/local/go/src/os/signal/signal_unix.go:29 +0x41
goroutine 25 [runnable]:
internal/poll.(*pollDesc).wait(0xc0003b1018, 0x72, 0x0, 0x0, 0x19a0d93)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x16b
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Accept(0xc0003b1000, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_unix.go:384 +0x1f8
net.(*netFD).accept(0xc0003b1000, 0xc0004566f8, 0xc000424000, 0x1800000000000000)
	/usr/local/go/src/net/fd_unix.go:238 +0x42
net.(*TCPListener).accept(0xc0001140c0, 0xc000424000, 0xc000424088, 0x42e2ca)
	/usr/local/go/src/net/tcpsock_posix.go:139 +0x32
net.(*TCPListener).Accept(0xc0001140c0, 0xc0005756b0, 0xc0004566f8, 0xc000424088, 0xc0001d8e68)
	/usr/local/go/src/net/tcpsock.go:261 +0x47
k8s.io/helm/vendor/google.golang.org/grpc.(*Server).Serve(0xc0001d8d80, 0x1c2eca0, 0xc0001140c0, 0x0, 0x0)
	/go/src/k8s.io/helm/vendor/google.golang.org/grpc/server.go:556 +0x22e
main.start.func1(0xc0000e0000, 0x1c2eca0, 0xc0001140c0, 0xc00004c240)
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:225 +0x120
created by main.start
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:221 +0xa79

This was with Helm v2.16.3, kubectl v1.15.5, Kubernetes v1.14.9-eks-f459c0, using Tiller outside the cluster.

It didn't happen again when I retried the failed job.

@keeganwitt

I'm not sure it was appropriate to close this issue. At least not with the reason given, as it was certainly not "inactive" at the time of closure.

@P1ng-W1n

I would suggest reopening this issue, as it seems to affect version 2.16.7 too:

[main] 2020/07/31 07:29:04 Starting Tiller v2.16.7 (tls=false)
[main] 2020/07/31 07:29:04 GRPC listening on :44134
[main] 2020/07/31 07:29:04 Probes listening on :44135
[main] 2020/07/31 07:29:04 Storage driver is Secret
[main] 2020/07/31 07:29:04 Max history per release is 0
fatal error: concurrent map iteration and map write

goroutine 161 [running]:
runtime.throw(0x19d3933, 0x26)
	/usr/local/go/src/runtime/panic.go:774 +0x72 fp=0xc0006ccbd0 sp=0xc0006ccba0 pc=0x42f522
runtime.mapiternext(0xc0006ccdf8)
	/usr/local/go/src/runtime/map.go:858 +0x579 fp=0xc0006ccc58 sp=0xc0006ccbd0 pc=0x40f4d9
k8s.io/helm/vendor/google.golang.org/grpc.(*Server).GetServiceInfo(0xc0004a3080, 0xc00045d6b0)
	/go/src/k8s.io/helm/vendor/google.golang.org/grpc/server.go:452 +0x3eb fp=0xc0006cce68 sp=0xc0006ccc58 pc=0x8ca9bb
k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus.(*ServerMetrics).InitializeMetrics(0xc000272000, 0xc0004a3080)
	/go/src/k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus/server_metrics.go:133 +0x43 fp=0xc0006ccf88 sp=0xc0006cce68 pc=0x8e5cf3
k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus.Register(...)
	/go/src/k8s.io/helm/vendor/github.com/grpc-ecosystem/go-grpc-prometheus/server.go:38
main.start.func2(0xc000142ba0)
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:238 +0x5b fp=0xc0006ccfd8 sp=0xc0006ccf88 pc=0x15edbeb
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0006ccfe0 sp=0xc0006ccfd8 pc=0x45eeb1
created by main.start
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:230 +0xa9e

goroutine 1 [select]:
main.start()
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:248 +0xb85
main.main()
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:125 +0x161

goroutine 11 [chan receive]:
k8s.io/helm/vendor/k8s.io/klog.(*loggingT).flushDaemon(0x2b4b0a0)
	/go/src/k8s.io/helm/vendor/k8s.io/klog/klog.go:1018 +0x8b
created by k8s.io/helm/vendor/k8s.io/klog.init.0
	/go/src/k8s.io/helm/vendor/k8s.io/klog/klog.go:404 +0x6c

goroutine 14 [syscall]:
os/signal.signal_recv(0x0)
	/usr/local/go/src/runtime/sigqueue.go:147 +0x9c
os/signal.loop()
	/usr/local/go/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.init.0
	/usr/local/go/src/os/signal/signal_unix.go:29 +0x41

goroutine 16 [runnable]:
syscall.Accept4(0x3, 0x80800, 0x7f427e225fff, 0x400, 0x7f427e051100, 0x20300000000000, 0x7f427e225fff)
	/usr/local/go/src/syscall/syscall_linux.go:538 +0x1af
internal/poll.accept(0x3, 0x72, 0x0, 0x0, 0x0, 0x7f427de263c0, 0x0, 0x7f427e051100)
	/usr/local/go/src/internal/poll/sock_cloexec.go:17 +0x3f
internal/poll.(*FD).Accept(0xc000411e00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_unix.go:377 +0x113
net.(*netFD).accept(0xc000411e00, 0xc00045cef8, 0xc00022e0a0, 0x8500000000000000)
	/usr/local/go/src/net/fd_unix.go:238 +0x42
net.(*TCPListener).accept(0xc0005137e0, 0xc00022e0a0, 0xc00022e128, 0x42e2ca)
	/usr/local/go/src/net/tcpsock_posix.go:139 +0x32
net.(*TCPListener).Accept(0xc0005137e0, 0xc00050c390, 0xc00045cef8, 0xc00022e128, 0xc0004a3168)
	/usr/local/go/src/net/tcpsock.go:261 +0x47
k8s.io/helm/vendor/google.golang.org/grpc.(*Server).Serve(0xc0004a3080, 0x1c31360, 0xc0005137e0, 0x0, 0x0)
	/go/src/k8s.io/helm/vendor/google.golang.org/grpc/server.go:556 +0x22e
main.start.func1(0xc000240640, 0x1c31360, 0xc0005137e0, 0xc000142b40)
	/go/src/k8s.io/helm/cmd/tiller/tiller.go:225 +0x120
created by main.start

@bacongobbler bacongobbler reopened this Aug 6, 2020
@P1ng-W1n

P1ng-W1n commented Aug 6, 2020

Thanks for reopening this issue @bacongobbler.
As this was previously closed due to inactivity, I would like to avoid the same happening this time. Is there anything we could contribute to the investigation besides the logs we have already provided?

@bacongobbler bacongobbler added the v2.x Issues and Pull Requests related to the major version v2 label Aug 6, 2020
@bacongobbler
Member

Not sure. Please submit a bug fix once you identify the cause of this symptom.

Gentle reminder that Helm 2 will no longer accept bug fixes after August 13th, so if you identify a bug, please submit a patch soon, before the patch window closes. See the blog post for more details: https://helm.sh/blog/covid-19-extending-helm-v2-bug-fixes/

@bacongobbler
Member

We are no longer accepting bug fixes for Helm 2. See https://helm.sh/blog/helm-v2-deprecation-timeline/ for more details.
