OCPBUGS-29479: Add retries to async cache initialization #13610
Conversation
Auth initialization can fail if the API server is not ready yet. This is especially common during cluster install.
Skipping CI for Draft Pull Request.
@TheRealJon: This pull request references Jira Issue OCPBUGS-29479, which is valid. The bug has been moved to the POST state. 3 validation(s) were run on this bug
Requesting review from QA contact: The bug has been updated to refer to the pull request using the external bug tracker. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
pkg/auth/asynccache.go
Outdated
@@ -10,6 +10,11 @@ import (
 	"k8s.io/klog"
 )
 
+const (
+	initializationRetries    = 10
+	initializationRetryDelay = 30 * time.Second
You want to be under the 150s liveness probe delay, which I think means you want this delay to be 11 seconds.
See openshift/console-operator#869 which changes the probes for the console container.
pkg/auth/asynccache.go
Outdated
		return c, nil
	}
	klog.V(4).Infof("retrying async cache setup - attempt %v of %v", retries, initializationRetries)
	time.Sleep(initializationRetryDelay)
	time.Sleep(initializationRetryDelay)
	select {
	case <-ctx.Done():
		return nil, ctx.Err()
	case <-time.After(initializationRetryDelay):
	}
perhaps this, so that cancellation is honored?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
replace the logic with k8s.io/apimachinery/pkg/util/wait.UntilWithContext()
…op to retry auth async cache setup
pkg/auth/asynccache.go
Outdated
@@ -27,13 +31,21 @@ func NewAsyncCache[T any](ctx context.Context, reloadPeriod time.Duration, cachi
 		cachingFunc: cachingFunc,
 	}
 
-	item, err := cachingFunc(ctx)
+	var err error
+	ctx, cancel := context.WithTimeout(ctx, 5*time.Minute)
I would expect something in terms of seconds, not minutes 👀
Remember that this blocks proper startup.
This is intentionally long in order to give the API server time to become available. This context gets canceled on the first successful cachingFunc
call, so we only wait the full 5 minutes when there is something wrong.
pkg/auth/asynccache.go
Outdated
		}
		c.cachedItem = item
		cancel()
	}, initializationRetryDelay)
the delay const is used in place of a check interval here
/label tide/merge-method-squash
/retest
pkg/auth/asynccache.go
Outdated
@@ -10,6 +10,12 @@ import (
 	"k8s.io/klog"
 )
 
+const (
+	initializationRetryInterval = 30 * time.Second
5s
What about 10s to match the historical behavior?
Line 182 in 057e732:
backoff = time.Second * 10
@@ -10,6 +10,12 @@ import (
 	"k8s.io/klog"
 )
 
+const (
+	initializationRetryInterval = 30 * time.Second
+	initializationTimeout       = 5 * time.Minute
@deads2k is this ok?
- Log error on each retry
- Do not return error from PollUntilContextTimeout condition func
- Reduce retry interval to 10 seconds to match historical behavior
/retest
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: jhadvig, TheRealJon. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment
/retest
@TheRealJon: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@TheRealJon: Jira Issue OCPBUGS-29479: All pull requests linked via external trackers have merged: Jira Issue OCPBUGS-29479 has been moved to the MODIFIED state. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
[ART PR BUILD NOTIFIER] This PR has been included in build openshift-enterprise-console-container-v4.16.0-202402282339.p0.g0a4e4b1.assembly.stream.el8 for distgit openshift-enterprise-console.
/cherry-pick release-4.15
@TheRealJon: new pull request created: #13645 In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Fix included in accepted release 4.16.0-0.nightly-2024-03-30-033956
Auth initialization can fail if the API server is not ready. This is especially common during cluster installation.
Thanks to @deads2k for doing the sleuth work on this.