
Bug 2095756: client: register types during init, not later #1483

Merged
merged 2 commits into openshift:master on Jun 13, 2022

Conversation

dcbw
Contributor

@dcbw dcbw commented Jun 10, 2022

Register types during init, and especially not later, when leader election or other goroutines may already be reading registered types from the global Scheme object; that concurrent map access can panic, as the log below shows. (A minimal sketch of the safe pattern follows the log.)

    W0610 12:18:46.059490       1 cmd.go:213] Using insecure, self-signed certificates
    I0610 12:18:46.770039       1 observer_polling.go:159] Starting file observer
    W0610 12:18:46.793003       1 builder.go:230] unable to get owner reference (falling back to namespace): pods "cluster-network-operator-6b644ff8b9-zvxnq" not found
    I0610 12:18:46.793128       1 builder.go:262] network-operator version v0.0.0-unknown-337cf37c
    W0610 12:18:47.181014       1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
    W0610 12:18:47.181132       1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
    I0610 12:18:47.185138       1 leaderelection.go:248] attempting to acquire leader lease openshift-network-operator/network-operator-lock...
    I0610 12:18:47.189027       1 secure_serving.go:210] Serving securely on [::]:9104
    I0610 12:18:47.189164       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
    I0610 12:18:47.189224       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
    I0610 12:18:47.189291       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-4192405082/tls.crt::/tmp/serving-cert-4192405082/tls.key"
    I0610 12:18:47.192612       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
    I0610 12:18:47.193241       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
    I0610 12:18:47.193303       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
    I0610 12:18:47.193348       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
    I0610 12:18:47.193391       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
    I0610 12:18:47.214640       1 leaderelection.go:258] successfully acquired lease openshift-network-operator/network-operator-lock
    I0610 12:18:47.215335       1 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-network-operator", Name:"network-operator-lock", UID:"7ea5cf6e-f663-44c0-9923-1d202e524413", APIVersion:"v1", ResourceVersion:"13861", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' cluster-network-operator-6b644ff8b9-zvxnq_a5e8aea0-63e2-44a4-bd8c-09e67ad883f3 became leader
    I0610 12:18:47.215369       1 event.go:285] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-network-operator", Name:"network-operator-lock", UID:"e0717cb3-86d5-4f73-83d6-085ea641bbe3", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"13862", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' cluster-network-operator-6b644ff8b9-zvxnq_a5e8aea0-63e2-44a4-bd8c-09e67ad883f3 became leader
    fatal error: concurrent map read and map write
    
    goroutine 1 [running]:
    runtime.throw({0x257d4f7?, 0x411a9d?})
    	runtime/panic.go:992 +0x71 fp=0xc000b667a8 sp=0xc000b66778 pc=0x43ed11
    runtime.mapaccess2(0x22d2700?, 0xc000e1ced0?, 0xc000e1ced0?)
    	runtime/map.go:476 +0x205 fp=0xc000b667e8 sp=0xc000b667a8 pc=0x4152e5
    k8s.io/apimachinery/pkg/runtime.(*Scheme).ObjectKinds(0xc000230700, {0x292fec0?, 0xc000e1ced0})
    	k8s.io/apimachinery@v0.24.0/pkg/runtime/scheme.go:263 +0xc5 fp=0xc000b668d8 sp=0xc000b667e8 pc=0x8ecc85
    k8s.io/apimachinery/pkg/runtime.(*parameterCodec).EncodeParameters(0xc000159280, {0x292fec0, 0xc000e1ced0}, {{0x255dd0a?, 0x1?}, {0x2541e58?, 0x18e9b26e00?}})
    	k8s.io/apimachinery@v0.24.0/pkg/runtime/codec.go:191 +0x72 fp=0xc000b669b0 sp=0xc000b668d8 pc=0x8d9572
    k8s.io/client-go/rest.(*Request).SpecificallyVersionedParams(0xc000986a00, {0x292fec0?, 0xc000e1ced0?}, {0x2930230?, 0xc000159280?}, {{0x255dd0a?, 0x413ba5?}, {0x2541e58?, 0x20?}})
    	k8s.io/client-go@v0.24.0/rest/request.go:372 +0x83 fp=0xc000b66a98 sp=0xc000b669b0 pc=0x11af743
    k8s.io/client-go/rest.(*Request).VersionedParams(...)
    	k8s.io/client-go@v0.24.0/rest/request.go:365
    k8s.io/client-go/kubernetes/typed/coordination/v1.(*leases).Get(0xc000bef360, {0x2949e08, 0xc000469b00}, {0xc00005a498, 0x15}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, ...}})
    	k8s.io/client-go@v0.24.0/kubernetes/typed/coordination/v1/lease.go:77 +0x145 fp=0xc000b66ba0 sp=0xc000b66a98 pc=0x143b305
    k8s.io/client-go/tools/leaderelection/resourcelock.(*LeaseLock).Get(0xc00063f040, {0x2949e08, 0xc000469b00})
    	k8s.io/client-go@v0.24.0/tools/leaderelection/resourcelock/leaselock.go:43 +0x91 fp=0xc000b66c18 sp=0xc000b66ba0 pc=0x16a30b1
    k8s.io/client-go/tools/leaderelection/resourcelock.(*MultiLock).Update(0xc00057c420, {0x2949e08, 0xc000469b00}, {{0xc000054e60, 0x4e}, 0x89, {{0x0, 0xeda352da7, 0x3ce9b20}}, {{0xc0a0eb89cccba7da, ...}}, ...})
    	k8s.io/client-go@v0.24.0/tools/leaderelection/resourcelock/multilock.go:78 +0x8f fp=0xc000b66c90 sp=0xc000b66c18 pc=0x16a40ef
    k8s.io/client-go/tools/leaderelection.(*LeaderElector).tryAcquireOrRenew(0xc00072e000, {0x2949e08, 0xc000469b00})
    	k8s.io/client-go@v0.24.0/tools/leaderelection/leaderelection.go:366 +0x40e fp=0xc000b66dd0 sp=0xc000b66c90 pc=0x16a612e
    k8s.io/client-go/tools/leaderelection.(*LeaderElector).renew.func1.1()
    	k8s.io/client-go@v0.24.0/tools/leaderelection/leaderelection.go:272 +0x25 fp=0xc000b66df8 sp=0xc000b66dd0 pc=0x16a5ae5
    k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x18, 0xc000100000})
    	k8s.io/apimachinery@v0.24.0/pkg/util/wait/wait.go:220 +0x1b fp=0xc000b66e08 sp=0xc000b66df8 pc=0x117b1db
    k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x2949d98?, 0xc000066640?}, 0xc000a8ee78?)
    	k8s.io/apimachinery@v0.24.0/pkg/util/wait/wait.go:233 +0x57 fp=0xc000b66e48 sp=0xc000b66e08 pc=0x117b2b7
    k8s.io/apimachinery/pkg/util/wait.poll({0x2949d98, 0xc000066640}, 0xb8?, 0x117b165?, 0xc0001dfe90?)
    	k8s.io/apimachinery@v0.24.0/pkg/util/wait/wait.go:580 +0x38 fp=0xc000b66e88 sp=0xc000b66e48 pc=0x117c258
    k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x2949d98, 0xc000066640}, 0x20?, 0xc000100000?)
    	k8s.io/apimachinery@v0.24.0/pkg/util/wait/wait.go:545 +0x49 fp=0xc000b66ec8 sp=0xc000b66e88 pc=0x117c0a9
    k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000469b00?, 0xc000066600?, 0x8ecd7e2ef?)
    	k8s.io/apimachinery@v0.24.0/pkg/util/wait/wait.go:536 +0x7c fp=0xc000b66f38 sp=0xc000b66ec8 pc=0x117bfdc
    k8s.io/client-go/tools/leaderelection.(*LeaderElector).renew.func1()
    	k8s.io/client-go@v0.24.0/tools/leaderelection/leaderelection.go:271 +0x10d fp=0xc000b67010 sp=0xc000b66f38 pc=0x16a580d
    k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x3d1ae34?)
    	k8s.io/apimachinery@v0.24.0/pkg/util/wait/wait.go:155 +0x3e fp=0xc000b67030 sp=0xc000b67010 pc=0x117afbe
    k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005faf00?, {0x2923f00, 0xc0007b6b10}, 0x1, 0xc0005faf00)
    	k8s.io/apimachinery@v0.24.0/pkg/util/wait/wait.go:156 +0xb6 fp=0xc000b670b0 sp=0xc000b67030 pc=0x117ae56
    k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000066600?, 0x60db88400, 0x0, 0x20?, 0x7fe5d16ce5b8?)
    	k8s.io/apimachinery@v0.24.0/pkg/util/wait/wait.go:133 +0x89 fp=0xc000b67100 sp=0xc000b670b0 pc=0x117ad49
    k8s.io/apimachinery/pkg/util/wait.Until(...)
    	k8s.io/apimachinery@v0.24.0/pkg/util/wait/wait.go:90
    k8s.io/client-go/tools/leaderelection.(*LeaderElector).renew(0xc00072e000, {0x2949d98?, 0xc0000665c0?})
    	k8s.io/client-go@v0.24.0/tools/leaderelection/leaderelection.go:268 +0xd0 fp=0xc000b67188 sp=0xc000b67100 pc=0x16a5690
    k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run(0xc00072e000, {0x2949d98, 0xc000332040})
    	k8s.io/client-go@v0.24.0/tools/leaderelection/leaderelection.go:212 +0x12f fp=0xc000b671f8 sp=0xc000b67188 pc=0x16a4def
    k8s.io/client-go/tools/leaderelection.RunOrDie({0x2949d98, 0xc000332040}, {{0x294da30, 0xc00057c420}, 0x1fe5d61a00, 0x18e9b26e00, 0x60db88400, {0xc00057c440, 0x26e5170, 0x0}, ...})
    	k8s.io/client-go@v0.24.0/tools/leaderelection/leaderelection.go:226 +0x94 fp=0xc000b67270 sp=0xc000b671f8 pc=0x16a4fb4
    github.com/openshift/library-go/pkg/controller/controllercmd.(*ControllerBuilder).Run(0xc00063b0e0, {0x2949d98?, 0xc000332040}, 0x0)
    	github.com/openshift/library-go@v0.0.0-20220525173854-9b950a41acdc/pkg/controller/controllercmd/builder.go:342 +0x1568 fp=0xc000b67830 sp=0xc000b67270 pc=0x1eb3788
    github.com/openshift/library-go/pkg/controller/controllercmd.(*ControllerCommandConfig).StartController(0xc0000f46c0, {0x2949d98?, 0xc00040b600})
    	github.com/openshift/library-go@v0.0.0-20220525173854-9b950a41acdc/pkg/controller/controllercmd/cmd.go:294 +0x625 fp=0xc000b67af0 sp=0xc000b67830 pc=0x1eb69c5
    github.com/openshift/library-go/pkg/controller/controllercmd.(*ControllerCommandConfig).NewCommandWithContext.func1(0xc0004c8280?, {0x2542dc8?, 0x4?, 0x4?})
    	github.com/openshift/library-go@v0.0.0-20220525173854-9b950a41acdc/pkg/controller/controllercmd/cmd.go:137 +0x3e6 fp=0xc000b67d48 sp=0xc000b67af0 pc=0x1eb4d26
    github.com/spf13/cobra.(*Command).execute(0xc0004c8280, {0xc0005200c0, 0x4, 0x4})
    	github.com/spf13/cobra@v1.4.0/command.go:860 +0x663 fp=0xc000b67e20 sp=0xc000b67d48 pc=0x131e963
    github.com/spf13/cobra.(*Command).ExecuteC(0xc0004c8000)
    	github.com/spf13/cobra@v1.4.0/command.go:974 +0x3b4 fp=0xc000b67ed8 sp=0xc000b67e20 pc=0x131eff4
    github.com/spf13/cobra.(*Command).Execute(...)
    	github.com/spf13/cobra@v1.4.0/command.go:902
    main.main()
    	github.com/openshift/cluster-network-operator/cmd/cluster-network-operator/main.go:65 +0x2b2 fp=0xc000b67f80 sp=0xc000b67ed8 pc=0x1ed9872
    runtime.main()
    	runtime/proc.go:250 +0x212 fp=0xc000b67fe0 sp=0xc000b67f80 pc=0x4414f2
    runtime.goexit()
    	runtime/asm_amd64.s:1571 +0x1 fp=0xc000b67fe8 sp=0xc000b67fe0 pc=0x471d01

Leader election is all handled by openshift/library-go/pkg/controller these days.
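
For context, here is a minimal sketch of the pattern this change moves to (the package name and the operv1 import are assumptions for illustration, not the operator's actual code): do all scheme registration in init(), so the global Scheme's maps are fully populated before leader election or anything else starts reading them.

    // Illustration only; the package name and the operv1 import are assumptions,
    // not the operator's actual code.
    package client

    import (
        operv1 "github.com/openshift/api/operator/v1"
        utilruntime "k8s.io/apimachinery/pkg/util/runtime"
        "k8s.io/client-go/kubernetes/scheme"
    )

    func init() {
        // scheme.Scheme is the shared, package-level Scheme used by typed
        // client-go clients, including the coordination/v1 Lease client that
        // leader election calls in the trace above. runtime.Scheme keeps its
        // registrations in plain Go maps with no locking, so every AddToScheme
        // call must finish before any goroutine resolves ObjectKinds against it.
        utilruntime.Must(operv1.AddToScheme(scheme.Scheme))
    }

Registering from a constructor that only runs after leaderelection.RunOrDie has started races with the Scheme reads shown in the trace and can hit exactly this "concurrent map read and map write" fatal error.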
@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jun 10, 2022
Especially not later when leaderelection or other things might be
accessing the global Scheme object that reads from registered types,
which can cause concurrent map access and panic.

https://bugzilla.redhat.com/show_bug.cgi?id=2095756

@dcbw dcbw changed the title from "Fix scheme map access" to "Bug 2095756: client: register types during init, not later" on Jun 10, 2022
@openshift-ci openshift-ci bot added bugzilla/severity-unspecified Referenced Bugzilla bug's severity is unspecified for the PR. bugzilla/invalid-bug Indicates that a referenced Bugzilla bug is invalid for the branch this PR is targeting. labels Jun 10, 2022
@openshift-ci
Contributor

openshift-ci bot commented Jun 10, 2022

@dcbw: This pull request references Bugzilla bug 2095756, which is invalid:

  • expected the bug to target the "4.11.0" release, but it targets "---" instead

Comment /bugzilla refresh to re-evaluate validity if changes to the Bugzilla bug are made, or edit the title of this pull request to link to a different bug.

In response to this:

Bug 2095756: client: register types during init, not later

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@dcbw
Contributor Author

dcbw commented Jun 10, 2022

/bugzilla refresh

@openshift-ci openshift-ci bot added bugzilla/severity-medium Referenced Bugzilla bug's severity is medium for the branch this PR is targeting. bugzilla/valid-bug Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting. and removed bugzilla/severity-unspecified Referenced Bugzilla bug's severity is unspecified for the PR. bugzilla/invalid-bug Indicates that a referenced Bugzilla bug is invalid for the branch this PR is targeting. labels Jun 10, 2022
@openshift-ci
Contributor

openshift-ci bot commented Jun 10, 2022

@dcbw: This pull request references Bugzilla bug 2095756, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.11.0) matches configured target release for branch (4.11.0)
  • bug is in the state NEW, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST, POST)

Requesting review from QA contact:
/cc @anuragthehatter

In response to this:

/bugzilla refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@kyrtapz
Contributor

kyrtapz commented Jun 13, 2022

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Jun 13, 2022
@openshift-ci
Contributor

openshift-ci bot commented Jun 13, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dcbw, kyrtapz

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot
Contributor

/retest-required

Remaining retests: 2 against base HEAD 337cf37 and 8 for PR HEAD 846041c in total

@openshift-ci-robot
Contributor

/retest-required

Remaining retests: 1 against base HEAD 337cf37 and 7 for PR HEAD 846041c in total

@openshift-ci-robot
Contributor

/retest-required

Remaining retests: 0 against base HEAD 337cf37 and 6 for PR HEAD 846041c in total

@openshift-ci
Contributor

openshift-ci bot commented Jun 13, 2022

@dcbw: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/e2e-aws-ovn-windows 846041c link false /test e2e-aws-ovn-windows
ci/prow/e2e-aws-upgrade 846041c link false /test e2e-aws-upgrade
ci/prow/e2e-metal-ipi-ovn-ipv6-ipsec 846041c link false /test e2e-metal-ipi-ovn-ipv6-ipsec
ci/prow/e2e-azure-ovn 846041c link false /test e2e-azure-ovn

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@dcbw
Contributor Author

dcbw commented Jun 13, 2022

/override ci/prow/e2e-metal-ipi-ovn-ipv6
metal is busted due to https://bugzilla.redhat.com/show_bug.cgi?id=2096226 and some other issues

@openshift-ci
Contributor

openshift-ci bot commented Jun 13, 2022

@dcbw: Overrode contexts on behalf of dcbw: ci/prow/e2e-metal-ipi-ovn-ipv6

In response to this:

/override ci/prow/e2e-metal-ipi-ovn-ipv6
metal is busted due to https://bugzilla.redhat.com/show_bug.cgi?id=2096226 and some other issues

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci openshift-ci bot merged commit f1a70e4 into openshift:master Jun 13, 2022
@openshift-ci
Contributor

openshift-ci bot commented Jun 13, 2022

@dcbw: All pull requests linked via external trackers have merged:

Bugzilla bug 2095756 has been moved to the MODIFIED state.

In response to this:

Bug 2095756: client: register types during init, not later

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
