What happened: The Workflow Controller restarted with: `fatal error: concurrent map read and map write`
What you expected to happen: The controller to stay running.
How to reproduce it (as minimally and precisely as possible): Not sure how to reproduce; it happened during an upgrade from 2.9.0-rc4 to 2.9.1.
Argo version: 2.9.1
Kubernetes version: 1.17.8
```
fatal error: concurrent map read and map write

goroutine 132 [running]:
runtime.throw(0x197342d, 0x21)
    /usr/local/go/src/runtime/panic.go:774 +0x72 fp=0xc000815bc8 sp=0xc000815b98 pc=0x42e5d2
runtime.mapaccess1_faststr(0x170e600, 0xc000582300, 0xc0033c05f0, 0x4e, 0x194d89b)
    /usr/local/go/src/runtime/map_faststr.go:21 +0x44f fp=0xc000815c38 sp=0xc000815bc8 pc=0x41285f
github.com/argoproj/argo/workflow/metrics.(*Metrics).WorkflowAdded(0xc00047d380, 0xc0033c05f0, 0x4e, 0xc001badc75, 0x9)
    /go/src/github.com/argoproj/argo/workflow/metrics/metrics.go:92 +0x56 fp=0xc000815c78 sp=0xc000815c38 pc=0x1495f56
github.com/argoproj/argo/workflow/controller.(*WorkflowController).addWorkflowInformerHandlers.func5(0x193f940, 0xc0005f7870)
    /go/src/github.com/argoproj/argo/workflow/controller/controller.go:592 +0xbd fp=0xc000815cd0 sp=0xc000815c78 pc=0x15b4e6d
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
    /go/pkg/mod/k8s.io/client-go@v0.0.0-20191225075139-73fd2ddc9180/tools/cache/controller.go:195
k8s.io/client-go/tools/cache.(*ResourceEventHandlerFuncs).OnAdd(0xc000634880, 0x193f940, 0xc0005f7870)
    <autogenerated>:1 +0x5a fp=0xc000815cf0 sp=0xc000815cd0 pc=0x130b5da
k8s.io/client-go/tools/cache.(*processorListener).run.func1.1(0xc00057bdc8, 0x4174be, 0x7f5376d7a5a0)
    /go/pkg/mod/k8s.io/client-go@v0.0.0-20191225075139-73fd2ddc9180/tools/cache/shared_informer.go:607 +0x218 fp=0xc000815d68 sp=0xc000815cf0 pc=0x13093d8
k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff(0x989680, 0x3ff0000000000000, 0x3fb999999999999a, 0x5, 0x0, 0xc000815dd8, 0x0, 0xc00057bde8)
    /go/pkg/mod/k8s.io/apimachinery@v0.16.7-beta.0/pkg/util/wait/wait.go:292 +0x51 fp=0xc000815d90 sp=0xc000815d68 pc=0x12c84b1
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
    /go/pkg/mod/k8s.io/client-go@v0.0.0-20191225075139-73fd2ddc9180/tools/cache/shared_informer.go:601 +0x79 fp=0xc000815df8 sp=0xc000815d90 pc=0x1309499
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00057bf40)
    /go/pkg/mod/k8s.io/apimachinery@v0.16.7-beta.0/pkg/util/wait/wait.go:152 +0x5e fp=0xc000815e68 sp=0xc000815df8 pc=0x12c8b6e
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000815f40, 0xdf8475800, 0x0, 0x1, 0xc0004ec120)
    /go/pkg/mod/k8s.io/apimachinery@v0.16.7-beta.0/pkg/util/wait/wait.go:153 +0xf8 fp=0xc000815f18 sp=0xc000815e68 pc=0x12c8108
k8s.io/apimachinery/pkg/util/wait.Until(...)
    /go/pkg/mod/k8s.io/apimachinery@v0.16.7-beta.0/pkg/util/wait/wait.go:88
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0004f6e00)
    /go/pkg/mod/k8s.io/client-go@v0.0.0-20191225075139-73fd2ddc9180/tools/cache/shared_informer.go:599 +0x9b fp=0xc000815f68 sp=0xc000815f18 pc=0x13035db
k8s.io/client-go/tools/cache.(*processorListener).run-fm()
    /go/pkg/mod/k8s.io/client-go@v0.0.0-20191225075139-73fd2ddc9180/tools/cache/shared_informer.go:593 +0x2a fp=0xc000815f80 sp=0xc000815f68 pc=0x130b2da
```
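For context on the trace: this is not a recoverable panic; Go's runtime deliberately aborts the whole process when it detects unsynchronized map access, which is why the controller restarts. A minimal sketch of the same class of bug (illustrative names only, not the actual Argo code):

```go
// Minimal sketch of the failure mode in the trace above: one goroutine
// reads a map while another writes to it without synchronization.
// All names here are illustrative, not the actual Argo code.
package main

// workflows stands in for the shared map that Metrics.WorkflowAdded
// reads at metrics.go:92 while informer callbacks mutate it.
var workflows = map[string]string{}

func main() {
	go func() {
		for {
			workflows["wf"] = "Running" // unsynchronized writer
		}
	}()
	for {
		// Unsynchronized reader; the runtime aborts the process with
		// "fatal error: concurrent map read and map write".
		_ = workflows["wf"]
	}
}
```

Building with `-race` (e.g. `go run -race`) flags the same access as a data race before the runtime's fatal check trips, which may help verify a fix.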
Message from the maintainers:
If you are impacted by this bug please add a 👍 reaction to this issue! We often sort issues this way to know what to prioritize.
Looks like this PR caused it: #3350
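For anyone hit by this before a patched release lands: the usual fix for this class of crash is to serialize access to the shared map. A hedged sketch of that pattern (the field and method shapes are assumptions, not necessarily how the actual patch is written):

```go
package metrics

import "sync"

// Metrics is an illustrative stand-in, not the real
// workflow/metrics.Metrics type.
type Metrics struct {
	mu        sync.RWMutex    // serializes access to workflows
	workflows map[string]bool // shared across informer goroutines
}

// WorkflowAdded takes the write lock, so concurrent informer
// callbacks can no longer race on the map.
func (m *Metrics) WorkflowAdded(key, phase string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.workflows[key] {
		return
	}
	m.workflows[key] = true
}
```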