fix(storage): rate limit bucket deletion in cleanup #5612
Conversation
Code Review
This pull request modifies the `cleanup_stale_buckets` function in the storage examples to serialize bucket deletions. The implementation replaces concurrent task spawning with a sequential loop that introduces a 2-second delay between deletions to comply with GCP rate limits. Feedback was provided to replace `println!` calls with the `tracing` crate to align with the repository's structured logging standards.
Force-pushed from 9c33688 to 1294c4a
/gcbrun
Codecov Report

✅ All modified and coverable lines are covered by tests.

```
@@ Coverage Diff @@
##             main    #5612   +/-   ##
=======================================
  Coverage   97.92%   97.92%
=======================================
  Files         221      221
  Lines       52759    52759
=======================================
+ Hits        51662    51663       +1
+ Misses       1097     1096       -1
```
ping @coryan
/gcbrun
Formatting is failing with: Remember to always use …
Good catch: I ran the formatter from the repo root with `cargo fmt -p storage-samples` so the workspace rustfmt config applies; it collapsed the `google_cloud_gax` import list to match the diff you pasted.
/gcbrun |
Serialize bucket deletion to respect the GCP API rate limit (~1 request per 2 seconds). Uses structured logging with the `tracing` crate for consistency with repository standards.
…backoff

Replace fixed inter-delete sleeps with exponential backoff on retryable DeleteBucket errors (initial delay >= 2s). Empty stale buckets in parallel, then delete buckets sequentially. Add comments explaining multi-worker rate limits.
- Replace manual delete retry loop with `with_backoff_policy` + idempotency
- Restore GC label comment in `empty_bucket_contents`
- Group `cleanup_bucket` with other cleanup helpers; ASCII doc comment
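The doubling schedule described in the commit above can be sketched roughly as follows. This is a hand-rolled illustration, not the actual change (which uses the client library's `with_backoff_policy` instead); the `backoff_delay` helper name and the 60-second cap are assumptions for the example.

```rust
use std::time::Duration;

// Delay before retry number `attempt` (0-based): start at `initial`
// (>= 2s, per the bucket-deletion rate limit) and double on each
// retryable failure, capping at `max` so delays stay bounded.
fn backoff_delay(attempt: u32, initial: Duration, max: Duration) -> Duration {
    initial
        .saturating_mul(2u32.saturating_pow(attempt))
        .min(max)
}

fn main() {
    let (init, max) = (Duration::from_secs(2), Duration::from_secs(60));
    assert_eq!(backoff_delay(0, init, max), Duration::from_secs(2));
    assert_eq!(backoff_delay(3, init, max), Duration::from_secs(16));
    // Large attempt counts saturate at the cap.
    assert_eq!(backoff_delay(10, init, max), Duration::from_secs(60));
    println!("ok");
}
```

In the real code the backoff policy also has to decide which `DeleteBucket` errors are retryable; a policy object handles both concerns together.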
Force-pushed from 21dc199 to 15dadf5
/gcbrun |
This is looking good; I need a review from somebody on the @googleapis/gcs-team.
Fixes #5219
Problem
When there are many stale buckets in integration tests, the cleanup process
deletes them in parallel, exceeding GCP's Storage API rate limit
(approximately one request every two seconds).
Solution
Serialize bucket deletion by removing the parallel spawning (`tokio::spawn` and
`join_all`) and instead deleting buckets sequentially, with a 2-second delay
between each deletion, to respect the API rate limit.
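The sequential approach can be sketched like this. It is a simplified synchronous, std-only sketch, not the PR's actual async code: `delete_bucket`, `cleanup_sequential`, and the bucket names are hypothetical stand-ins for the real client calls.

```rust
use std::time::Duration;

// Hypothetical stand-in for the real DeleteBucket request.
fn delete_bucket(name: &str) -> Result<(), String> {
    println!("deleted {name}");
    Ok(())
}

// Delete buckets one at a time, pausing between requests so we stay
// under the ~1 request per 2 seconds deletion rate limit. Returns the
// names that were successfully deleted.
fn cleanup_sequential(buckets: &[&str], delay: Duration) -> Vec<String> {
    let mut deleted = Vec::new();
    for (i, bucket) in buckets.iter().enumerate() {
        if i > 0 {
            // No sleep before the first request, only between requests.
            std::thread::sleep(delay);
        }
        if delete_bucket(bucket).is_ok() {
            deleted.push(bucket.to_string());
        }
    }
    deleted
}

fn main() {
    let deleted = cleanup_sequential(&["stale-1", "stale-2"], Duration::from_millis(10));
    assert_eq!(deleted, vec!["stale-1", "stale-2"]);
}
```

The key difference from the old code is that there is only ever one in-flight deletion, so the inter-request delay directly bounds the request rate.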
Changes
- `cleanup_stale_buckets()` to delete buckets sequentially

Testing
This change should prevent rate limit errors during stale bucket cleanup
in integration tests.