Bug 2090685: Cache object storer for subsequent uploads #40
Conversation
Currently, for each s3 profile that the PVs need to be uploaded to, we connect to the s3 store once per volume protected by the current instance of the VRG. When the s3 store is unreachable, which can happen during failover, each connection attempt costs about 2 minutes; for a workload with n PVCs the failover therefore takes n*120 seconds, which is far too long. This is fixed by connecting to each s3 profile once per VRG reconciliation and using the cached connection/storer for subsequent uploads, or erroring out immediately. An alternative was to upload the PVs after creating the VRs for the same volumes, but that would still hold the current VRG reconcile for n*120 seconds in the second loop, so it was dropped. Signed-off-by: Shyamsundar Ranganathan <srangana@redhat.com>
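A minimal sketch of the caching scheme described above. The field `objectStorers` and type `CachedObjectStorer` appear in the diff quoted later in this review; `ObjectStorer`, `getObjectStorer`, and `connectToS3Profile` are illustrative names and may not match the PR exactly:

```go
// Assumed shape of the per-reconcile cache; names are illustrative.
type CachedObjectStorer struct {
	storer ObjectStorer // s3 client interface (assumed name)
	err    error        // connection error, cached alongside the storer
}

// getObjectStorer connects to an s3 profile at most once per VRGInstance.
// Later calls for the same profile return the cached storer or error
// immediately instead of re-attempting a 2-minute connection.
func (v *VRGInstance) getObjectStorer(s3ProfileName string) (ObjectStorer, error) {
	if cached, ok := v.objectStorers[s3ProfileName]; ok {
		return cached.storer, cached.err
	}

	objectStore, err := connectToS3Profile(s3ProfileName) // assumed helper

	// cache the objectStore (or the error)
	v.objectStorers[s3ProfileName] = CachedObjectStorer{
		storer: objectStore,
		err:    err,
	}

	return objectStore, err
}
```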
@ShyamsundarR: This pull request references Bugzilla bug 2089797, which is invalid:
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
// cache the objectStore (or the error)
v.objectStorers[s3ProfileName] = CachedObjectStorer{
	storer: objectStore,
If this objectstore encounters an error then it is doomed forever, right? Once it is added with an error, we will always return it in line 1510 with the same error. No retry. I guess all that's needed to fix it is to return the err in line 1521.
That is true for this instance of the VRG; the next reconcile will get a new VRGInstance, where we will try to connect once more.
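A short sketch of that lifetime, reusing the hypothetical getObjectStorer helper from the sketch above (the profile name is made up):

```go
// Within one VRGInstance, a failed connection is cached, so every
// later upload for that profile fails fast instead of waiting again:
_, err1 := v.getObjectStorer("s3profile-east") // connects, fails after ~120s, caches err
_, err2 := v.getObjectStorer("s3profile-east") // returns cached err immediately
// err1 == err2: no retry within this reconcile.

// The next reconcile builds a new VRGInstance with an empty
// objectStorers map, so the connection is attempted afresh.
```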
Which also makes me wonder why we do not close the sessions we open with the object store? (different issue, but was checking based on this comment if we were closing the connection which would cause other failures)
@raghavendrabhat: changing LGTM is restricted to collaborators. In response to this: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@BenamarMk: changing LGTM is restricted to collaborators. In response to this: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: BenamarMk, raghavendrabhat, ShyamsundarR The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@ShyamsundarR: This pull request references Bugzilla bug 2090685, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. 3 validation(s) were run on this bug
Requesting review from QA contact: In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@openshift-ci[bot]: GitHub didn't allow me to request PR reviews from the following users: keesturam. Note that only red-hat-storage members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/bugzilla refresh
@ShyamsundarR: This pull request references Bugzilla bug 2090685, which is valid. 3 validation(s) were run on this bug
Requesting review from QA contact: In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@ShyamsundarR: All pull requests linked via external trackers have merged: Bugzilla bug 2090685 has been moved to the MODIFIED state. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.