[FEATURE] TupleGCSStoreBackend::get_all #9703
Conversation
✅ Deploy Preview for niobium-lead-7998 canceled.
Codecov Report — Attention: Patch coverage is

@@           Coverage Diff            @@
##           develop    #9703   +/-  ##
===========================================
- Coverage    82.56%   82.54%   -0.03%
===========================================
  Files          511      511
  Lines        46450    46458       +8
===========================================
- Hits         38353    38347       -6
- Misses        8097     8111      +14

View full report in Codecov by Sentry.
Given the current pattern, this looks good 👍 I would prefer to see the Google client injected as a dependency into the store backend so we could just pass in a mock and assert against it, but that's definitely out of scope for this PR.
@joshua-stauffer Yeah, totally agree. Unfortunately, I think we're going to be in the same place with Azure blob stores.
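The injection the review suggests could be sketched like this. The class name, constructor signature, and `get_all` body here are hypothetical stand-ins, not the actual great_expectations API; the point is only that accepting the client as a constructor argument lets a test pass a mock and assert against it.

```python
from unittest import mock

class GCSStoreBackend:
    """Hypothetical backend sketch: the storage client is injected,
    so tests can substitute a mock instead of a real GCS client."""

    def __init__(self, bucket: str, prefix: str, client) -> None:
        self._bucket = bucket
        self._prefix = prefix
        self._client = client  # injected dependency

    def get_all(self):
        # List the blobs under our prefix and download each one.
        blobs = self._client.list_blobs(self._bucket, prefix=self._prefix)
        return [blob.download_as_bytes() for blob in blobs]


def test_get_all_lists_with_prefix():
    # Mock the client and a single blob it "returns".
    client = mock.Mock()
    blob = mock.Mock()
    blob.download_as_bytes.return_value = b"payload"
    client.list_blobs.return_value = [blob]

    backend = GCSStoreBackend(bucket="my-bucket", prefix="expectations/", client=client)

    assert backend.get_all() == [b"payload"]
    # Assert against the injected mock, as the review describes.
    client.list_blobs.assert_called_once_with("my-bucket", prefix="expectations/")


test_get_all_lists_with_prefix()
```

With this shape, no patching of module-level globals is needed; the test owns the client it hands in.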
Implements `TupleGCSStoreBackend::get_all`. Note that we could improve performance if/when we bump google-cloud-storage to 2.12, using the `transfer_manager`. The test coverage here uses fakes to mimic what GCS does, inferred from the preexisting code, but I also did a bit of manual testing. The steps if you want to repro:
`invoke lint` (uses `ruff format` + `ruff check`)
For more information about contributing, see Contribute.
After you submit your PR, keep the page open and monitor the statuses of the various checks made by our continuous integration process at the bottom of the page. Please fix any issues that come up and reach out on Slack if you need help. Thanks for contributing!