
storage: Fix big layer uploads for Ceph/RADOS driver (PROJQUAY-6586) #2601

Merged
merged 5 commits into quay:master on Jan 16, 2024

Conversation

ibazulic (Member)

Uploads of large images currently fail on Ceph/RADOS-compatible implementations (including Noobaa) because the final layer assembly copies everything at once. For large layers this takes too long and Boto times out. With this patch, the chunk size is capped at 32 MB, so the final copy is performed in parts of at most 32 MB each. The cap can be overridden with the `maximum_chunk_size_mb` parameter in the driver settings, for example:

~~~
DISTRIBUTED_STORAGE_CONFIG:
    default:
        - RadosGWStorage
        - ...
          maximum_chunk_size_mb: 100
~~~
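
As a rough illustration of the approach (not the actual storage/cloud.py code), here is a minimal boto3 sketch that copies a source object into a destination in byte ranges of at most 32 MB using the S3 multipart copy API; the bucket and key names are hypothetical:

~~~python
import boto3

# 32 MB default; the PR makes this overridable via maximum_chunk_size_mb.
MAXIMUM_CHUNK_SIZE = 32 * 1024 * 1024


def chunked_copy(client, bucket, source_key, dest_key):
    """Copy source_key to dest_key in parts, never asking the backend to
    copy more than MAXIMUM_CHUNK_SIZE bytes in a single call."""
    size = client.head_object(Bucket=bucket, Key=source_key)["ContentLength"]
    upload = client.create_multipart_upload(Bucket=bucket, Key=dest_key)
    parts = []
    for part_number, offset in enumerate(range(0, size, MAXIMUM_CHUNK_SIZE), start=1):
        end = min(offset + MAXIMUM_CHUNK_SIZE, size) - 1
        # Each server-side range copy is small enough that Ceph/RADOS (or
        # Noobaa) answers before Boto's timeout fires.
        result = client.upload_part_copy(
            Bucket=bucket,
            Key=dest_key,
            UploadId=upload["UploadId"],
            PartNumber=part_number,
            CopySource={"Bucket": bucket, "Key": source_key},
            CopySourceRange="bytes=%d-%d" % (offset, end),
        )
        parts.append({"ETag": result["CopyPartResult"]["ETag"], "PartNumber": part_number})
    client.complete_multipart_upload(
        Bucket=bucket,
        Key=dest_key,
        UploadId=upload["UploadId"],
        MultipartUpload={"Parts": parts},
    )


# Hypothetical usage; Quay builds its own client from the driver settings.
client = boto3.client("s3")
chunked_copy(client, "quay", "uploads/tmp-layer", "blobs/layer")
~~~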

codecov bot commented Jan 13, 2024

Codecov Report

Attention: 12 lines in your changes are missing coverage. Please review.

Comparison is base (94735bc) 70.59% compared to head (acaee22) 70.67%.
Report is 8 commits behind head on master.

| Files            | Patch % | Lines         |
|------------------|---------|---------------|
| storage/cloud.py | 0.00%   | 12 Missing ⚠️ |
Additional details and impacted files
~~~
@@            Coverage Diff             @@
##           master    #2601      +/-   ##
==========================================
+ Coverage   70.59%   70.67%   +0.08%
==========================================
  Files         434      435       +1
  Lines       39856    40091     +235
  Branches     5166     5212      +46
==========================================
+ Hits        28135    28336     +201
- Misses      10102    10105       +3
- Partials     1619     1650      +31
~~~
| Flag | Coverage Δ                 |
|------|----------------------------|
| unit | 70.67% <0.00%> (+0.08%) ⬆️ |

Flags with carried forward coverage won't be shown.


ibazulic (Member, Author) commented on Jan 16, 2024:

For backward compatibility, a new parameter has been added to the RadosGW config:

a) If `server_side_assembly = True`, the final layer is pushed to the blob tree in chunks of at most `maximum_chunk_size_mb` (or at most 32 MB if the parameter is missing).
b) If `server_side_assembly = False`, the pushed layer is assembled client side. To support pushes of large layers, the Boto timeout is also raised from 60 to 600 seconds (see the sketch after the example configuration below).

Example configuration:

~~~
DISTRIBUTED_STORAGE_CONFIG:
    rados:
        - RadosGWStorage
        - access_key: ACCESS_KEY
          bucket_name: quay
          hostname: HOSTNAME
          is_secure: false
          port: "9000"
          secret_key: SECRET_KEY
          storage_path: /datastorage/registry
          maximum_chunk_size_mb: 100
          server_side_assembly: true
~~~

If `server_side_assembly` is missing, it defaults to true.
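
A minimal sketch, assuming boto3/botocore, of how the two settings might be read and the timeout applied; the defaults follow the description above, but the helper itself is illustrative, not the actual Quay code:

~~~python
import boto3
from botocore.config import Config


def make_rados_client(driver_settings):
    # Defaults match the PR: server-side assembly on, 32 MB chunk cap.
    server_side = driver_settings.get("server_side_assembly", True)
    chunk_bytes = driver_settings.get("maximum_chunk_size_mb", 32) * 1024 * 1024
    # Client-side assembly re-copies the whole layer in one request, so the
    # Boto timeout is raised from the 60-second default to 600 seconds.
    timeout = 60 if server_side else 600
    config = Config(connect_timeout=timeout, read_timeout=timeout)
    return boto3.client("s3", config=config), server_side, chunk_bytes
~~~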

bcaton85 (Contributor) left a comment:

LGTM

ibazulic merged commit e243d23 into quay:master on Jan 16, 2024
14 of 15 checks passed