fix: Sync CRD informer caches for CRS #2672
base: main
Conversation
/lgtm, would prefer if we had another pair of eyes on this.
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: mrueg, rexagod

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
/cc @dgrisonnet
internal/discovery/discovery.go
Outdated
go informer.Run(stopper)

// Wait for the cache to sync.
if !cache.WaitForCacheSync(ctx.Done(), informer.HasSynced) {
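For context, a minimal, self-contained sketch of the start-and-sync pattern the hunk above adds, assuming a dynamic shared informer factory watching CRDs; the GVR, resync period, and wiring here are illustrative assumptions, not the actual KSM code:

```go
package main

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := dynamic.NewForConfigOrDie(cfg)

	// Informer for CustomResourceDefinitions themselves (illustrative GVR).
	gvr := schema.GroupVersionResource{
		Group:    "apiextensions.k8s.io",
		Version:  "v1",
		Resource: "customresourcedefinitions",
	}
	factory := dynamicinformer.NewDynamicSharedInformerFactory(client, 10*time.Minute)
	informer := factory.ForResource(gvr).Informer()

	stopper := ctx.Done()
	go informer.Run(stopper)

	// Block until the initial list has been loaded into the informer's cache,
	// mirroring the cache.WaitForCacheSync call in the hunk above.
	if !cache.WaitForCacheSync(stopper, informer.HasSynced) {
		panic("timed out waiting for the CRD informer cache to sync")
	}
}
```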
I don't see how having an up-to-date informer cache would help here.
The scenario where a CRD is deleted seconds after the discoverer started creating a reflector for it will still happen even if the CRD cache is synced, because the race seems to arise from two different, asynchronous operations:
- Discoverer creating a reflector for a CRD
- User deleting a CRD
I wonder if this is really an issue in real-world scenarios, as I wouldn't expect CRDs to be short-lived. Maybe it could impact some tests, but I'd expect the test to wait until it has seen the metrics before deleting the CRD.
The operation need not be transient.
Essentially, the sync isn't respected in any of the three events, but it only becomes visible in the delete case: the reflector, which is still running off the store implementation for that CRD, cannot list the CRD anymore because it is never made aware that the CRD was deleted (an operation that is frequently transient). This causes it to log that error as well as increment the list_total{result="error"} signal.
I'll add a test to demonstrate the before and after cases deterministically, to contribute to the overall confidence in this patch.
/triage accepted
New changes are detected. LGTM label has been removed.
Force-pushed bcf4dd9 to d6dd6c0 (compare)
Force-pushed 11af1e7 to 2d79f82 (compare)
Seconding @dgrisonnet, I realized syncing caches at the first run isn't going to help us do that throughout the lifecycle. Resyncs weren't an option either, since the event could always occur before that timeout, causing the same error. I dropped the event handlers and instead relied on indexers (latest patch, albeit a bit rough; pushed for visibility), but upon digging into client-go, it seems the issue doesn't stem from there; kubernetes/kubernetes#79610 is still open and tracks this particular flaw in client-go. For now, I'll work on the alternative client-based (non-dynamic) informer solution.
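For illustration only, a rough sketch of what the "client-based (non-dynamic) informer" direction could look like: a typed apiextensions clientset and its shared informer factory for CRDs. This is an assumption about the approach mentioned above, not code from this PR:

```go
package main

import (
	"time"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	apiextensionsinformers "k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := apiextensionsclient.NewForConfigOrDie(cfg)

	// Typed (non-dynamic) informer for CustomResourceDefinitions.
	factory := apiextensionsinformers.NewSharedInformerFactory(client, 10*time.Minute)
	crdInformer := factory.Apiextensions().V1().CustomResourceDefinitions().Informer()

	stopCh := make(chan struct{})
	defer close(stopCh)

	factory.Start(stopCh)
	if !cache.WaitForCacheSync(stopCh, crdInformer.HasSynced) {
		panic("CRD informer cache failed to sync")
	}

	// Typed add/update/delete handlers for CRDs would be registered here.
}
```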
Signed-off-by: Pranshu Srivastava <rexagod@gmail.com>
With this patch, CRD deletions don't make CRS throw LW errors from the respective CRD's reflector anymore.
I noticed this way back but never got around to actually patching it, until a certain request downstream made me ponder it again; we currently have a workaround for it in the parent operator, which we would be better off without. Also, this workaround is more or less what anyone would have to do to prevent CRS from false-alerting wherever there's a cache race.
This reduces our urgent need for 1 to go in, as it addresses the "failed to list/watch resource" errors from client-go's tools/cache/reflector.go, owing to which, in addition to error logs, we also saw kube_state_metrics_{list,watch}_total{result="error"} increment, leading to false alerts in the metrics backend. Note that we still need control over client-go's error machinery, so I'll keep that issue open. See 2 for more details about this change.
To reproduce this, run KSM with the CRS configuration pointing to a CRD, deploy the CRD and a CR, then delete them. Usually (not 100% of the time, as this is a race, but pretty frequently) this will log a "failed to watch" error, and watch_total{result="error"} will be incremented as well.

Update: The patch now consists of a single commit that cleans up reflectors for CRs whose CRDs have been deleted, thus avoiding a memory leak. This also fixes the aforementioned problem statement.
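As a hedged illustration of the cleanup described in the update above, here is one way such teardown could be structured: a delete handler on a CRD informer cancels the context driving that CRD's reflector, so it stops list/watching a resource that no longer exists. The reflectorRegistry type, the handler wiring, and all names are hypothetical sketches, not the actual patch:

```go
package discovery

import (
	"context"
	"sync"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	"k8s.io/client-go/tools/cache"
)

// reflectorRegistry tracks the cancel function for each running per-CRD reflector.
type reflectorRegistry struct {
	mu      sync.Mutex
	cancels map[string]context.CancelFunc // keyed by CRD name
}

func newReflectorRegistry() *reflectorRegistry {
	return &reflectorRegistry{cancels: map[string]context.CancelFunc{}}
}

// track remembers the cancel function for the reflector serving crdName.
func (r *reflectorRegistry) track(crdName string, cancel context.CancelFunc) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.cancels[crdName] = cancel
}

// stop cancels and forgets the reflector for crdName, if one is running.
func (r *reflectorRegistry) stop(crdName string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if cancel, ok := r.cancels[crdName]; ok {
		cancel()
		delete(r.cancels, crdName)
	}
}

// registerCleanup wires a delete handler onto a CRD informer so the reflector
// for a deleted CRD is torn down instead of erroring forever. Tombstone
// (cache.DeletedFinalStateUnknown) handling is omitted for brevity.
func registerCleanup(crdInformer cache.SharedIndexInformer, reg *reflectorRegistry) {
	crdInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		DeleteFunc: func(obj interface{}) {
			if crd, ok := obj.(*apiextensionsv1.CustomResourceDefinition); ok {
				reg.stop(crd.Name)
			}
		},
	})
}
```

In this sketch, the discoverer would call track with the per-CRD cancel function it creates when starting a reflector, and registerCleanup once on its CRD informer.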