storage: adding maximum_chunk_size_gb storage option (PROJQUAY-2679) #2186
Conversation
Codecov Report

@@            Coverage Diff             @@
##           master    #2186      +/-   ##
==========================================
- Coverage   70.23%   70.23%   -0.01%
==========================================
  Files         432      432
  Lines       39305    39304       -1
  Branches     5088     5088
==========================================
- Hits        27606    27605       -1
- Misses      10112    10114       +2
+ Partials     1587     1585       -2
LGTM. We have to expose somewhere in the docs that `maximum_chunk_size_gb` is something that can be passed to the S3 storage provider.

Yup, that's right. I'll bring in @stevsmit.
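For the docs discussion above, here is a minimal sketch of where the option would sit once Quay's storage configuration is loaded into Python, assuming the usual `DISTRIBUTED_STORAGE_CONFIG` layout. Only `maximum_chunk_size_gb` comes from this PR; every other key and value is an illustrative placeholder:

```python
# Sketch only: keys other than maximum_chunk_size_gb are placeholders,
# not taken from this PR.
DISTRIBUTED_STORAGE_CONFIG = {
    "default": [
        "S3Storage",
        {
            "s3_bucket": "my-quay-bucket",            # placeholder
            "storage_path": "/datastorage/registry",  # placeholder
            "s3_access_key": "<access-key>",          # placeholder
            "s3_secret_key": "<secret-key>",          # placeholder
            "host": "s3.example.com",                 # placeholder
            # Cap each multipart copy range at 1 GB instead of the 5 GB default:
            "maximum_chunk_size_gb": 1,
        },
    ]
}
```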
/cherry-pick redhat-3.9
@bcaton85: new pull request created: #2191, in response to the `/cherry-pick redhat-3.9` command above.
storage: adding maximum_chunk_size_gb storage option (PROJQUAY-2679) (quay#2186): Adds the `maximum_chunk_size_gb` option to S3 storage to reduce chunk size and increase performance. Also removes a redundant storage copy call.
Users are currently using the `S3` storage option to use the IBM storage solution. The final copy of an image push from the `uploads` to the `sha256` storage directory currently fails for large image layers (10 GB). This is because the 10 GB layer is chunked into 5 GB ranges, which causes delays in the IBM copy operation. This PR improves performance by removing an unnecessary copy and adds the `maximum_chunk_size_gb` option to reduce the chunk size of the copy.
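To make the chunking concrete, here is a minimal sketch of a server-side S3 multipart copy whose ranges are capped by the configured chunk size. It is written against plain boto3 rather than Quay's actual storage driver, and the function name and parameters are illustrative, not from this PR:

```python
import boto3

def multipart_copy(client, src_bucket, src_key, dst_bucket, dst_key,
                   total_size, maximum_chunk_size_gb):
    """Server-side copy of one object, split into ranges no larger than
    maximum_chunk_size_gb. Illustrative sketch; Quay's driver differs."""
    max_chunk_bytes = maximum_chunk_size_gb * 1024 ** 3
    upload = client.create_multipart_upload(Bucket=dst_bucket, Key=dst_key)
    parts = []
    part_number = 1
    offset = 0
    while offset < total_size:
        # Byte ranges are inclusive on both ends.
        end = min(offset + max_chunk_bytes, total_size) - 1
        resp = client.upload_part_copy(
            Bucket=dst_bucket,
            Key=dst_key,
            PartNumber=part_number,
            UploadId=upload["UploadId"],
            CopySource={"Bucket": src_bucket, "Key": src_key},
            CopySourceRange=f"bytes={offset}-{end}",
        )
        parts.append({"PartNumber": part_number,
                      "ETag": resp["CopyPartResult"]["ETag"]})
        part_number += 1
        offset = end + 1
    client.complete_multipart_upload(
        Bucket=dst_bucket, Key=dst_key, UploadId=upload["UploadId"],
        MultipartUpload={"Parts": parts},
    )
```

With the default 5 GB cap, a 10 GB layer copies in two large ranges; setting `maximum_chunk_size_gb: 1` would instead issue ten smaller `upload_part_copy` calls, each of which the IBM backend can complete without long delays.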