Tigera Operator Chart 3.26 BGPFilters is forbidden #7715

Closed
nick-oconnor opened this issue May 29, 2023 · 5 comments

nick-oconnor commented May 29, 2023

Context

After upgrading the tigera-operator helm chart to 3.26.0, pod cleanup fails. kube-controller-manager and calico-apiserver log:

reflector.go:424] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: connection is unauthorized: bgpfilters.crd.projectcalico.org is forbidden: User "system:serviceaccount:calico-apiserver:calico-apiserver" cannot list resource "bgpfilters" in API group "crd.projectcalico.org" at the cluster scope

and

status.go:71] apiserver received an error that is not an metav1.Status: errors.ErrorConnectionUnauthorized{Err:(*errors.StatusError)(0xc000dc4e60)}: connection is unauthorized: bgpfilters.crd.projectcalico.org is forbidden: User "system:serviceaccount:calico-apiserver:calico-apiserver" cannot list resource "bgpfilters" in API group "crd.projectcalico.org" at the cluster scope

respectively.
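One way to confirm the missing permission is to impersonate the service account from the error above (a diagnostic sketch; the resource and service-account names are taken from the log lines, nothing else is assumed):

# Should print "no" while the permission is missing from the cluster role.
kubectl auth can-i list bgpfilters.crd.projectcalico.org \
  --as=system:serviceaccount:calico-apiserver:calico-apiserver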

It seems like the calico-crds cluster role may be missing the bgpfilters resource. Unfortunately I didn't get a chance to look before I rolled back. I'm unsure if this is related to #7598.

Rolling the chart back to 3.25.1 resolves the issue.
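For illustration, a minimal sketch of the kind of rule the calico-crds cluster role would need to carry; the role name comes from the comment above, but the shape of the rule is an assumption and not copied from the chart's actual manifests:

# Hypothetical fragment only; the real ClusterRole lists many more
# crd.projectcalico.org resources alongside bgpfilters.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: calico-crds
rules:
  - apiGroups: ["crd.projectcalico.org"]
    resources: ["bgpfilters"]
    verbs: ["get", "list", "watch"]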

UPDATE: This appears to have been fixed in tigera/operator@792df2f; however, the commit is not part of the 1.30 tag. I've created 2675.

Your Environment

  • Tigera Operator version: 3.26.0
  • Orchestrator version (e.g. kubernetes, mesos, rkt): 1.26.5
  • Operating System and version: Ubuntu 22.04.2
@caseydavenport (Member)

Going to close this in favor of the other issue, thanks for raising!

@caseydavenport (Member)

Oh whoops, looks like the other is just a link to this so actually going to leave this open!

sridhartigera self-assigned this May 30, 2023

eimarfandino commented Jun 2, 2023

We recently updated from .25 to .26 and we are also encountering this issue. Weirdly enough, a restart of the Calico operator seems to solve it temporarily.
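For reference, assuming the default install namespace and deployment name, the restart we use is roughly:

# Restart the operator deployment (names assume a default chart install).
kubectl rollout restart deployment/tigera-operator -n tigera-operator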


SISheogorath commented Jun 3, 2023

This can actually cause cluster-wide issues with garbage collection. I just had to investigate why garbage collection of e.g. CronJobs and Pods was broken on my cluster. Turns out, missing permissions are a bit of an issue for the garbage collector:

16:16:13.338340       1 graph_builder.go:281] garbage controller monitor not yet synced: projectcalico.org/v3, Resource=bgpfilters
…
E0603 16:27:03.048255       1 garbagecollector.go:250] timed out waiting for dependency graph builder sync during GC sync (attempt 43)

(Don't be thrown off by the timestamps; I added the second log line afterwards. They appear in sync.)

With this, the garbage collector runs into a timeout when trying to refresh its cached objects and therefore never runs.
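To check whether the garbage collector is stuck like this, the kube-controller-manager logs can be inspected; the label selector below assumes a kubeadm-style static pod and may differ on other distributions:

# Look for repeated "timed out waiting for dependency graph builder sync" lines.
kubectl -n kube-system logs -l component=kube-controller-manager --tail=200 | grep -i garbagecollector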

Edit 2023-06-08: I went and downgraded again, as the issue seems to persist even with the newest version of the tigera operator (v1.30.2). I downgraded and removed the bgpfilter CRD to resolve the issue for good.
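For anyone in the same spot, the removal step is just deleting the CRD (this also deletes any BGPFilter objects; the CRD name is the one from the error messages above):

kubectl delete crd bgpfilters.crd.projectcalico.org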

@RohanArya4894

Hi @caseydavenport, can we expect a fix for this to be added in v3.27.0 of the Helm chart? Thanks.
