[BUG] Add Pagination support to the S3 client (currently we only list the first 1000 results) #1904
Comments
Maybe the cause of #1667.
Is this also the cause of this error in the backup UI?
@timmy59100 Yes, it's possible. This command needs to iterate through each level of the backup directory, so if there are lots of volumes, it's possible the response didn't arrive in time. Another potential issue is that the S3 vendor might respond late. Which S3 vendor are you using?
@timmy59100 Not currently. Can you file another issue regarding the timeout? Also, how many volumes do you have? We want to see what we can do to address it in the v1.1.0 release.
Pre-merged Checklist
Verified with the Longhorn master - Validation - Pass. Validated with the S3 backupstore: more than 1000 backups are listed. Also verified with the NFS backupstore: there is no impact. The timeout issue during backup is being tracked and will be reproduced and validated as part of #2218.
We need to add pagination support to the S3 client; currently we only list the first 1000 results.
We use the list calls for checking backups/volumes/blocks. At the moment we get lucky because we use partitions for volumes and blocks, but even so it is still possible to have more than 1000 backup.cfg files, as well as blocks/volumes, in the same partition.
Any entries beyond the first 1000 would not show up in the longhorn-ui or in any logic that relies on this content.
To Reproduce
Use the snapshotCreate and snapshotBackup APIs. To capture the snapshotBackup API call, right click it and select Copy -> Copy as cURL.