S3: Metadata too large #120

Closed
LeonPatmore opened this issue Aug 10, 2020 · 3 comments · Fixed by #124

@LeonPatmore

We are having an issue while pushing a chart:

upload chart to s3: upload object to s3: MetadataTooLarge: Your metadata headers exceed the maximum allowed metadata size

Our chart is reasonably large, with around 20 dependencies. Is there a fix or workaround we could use? Thanks!
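
For reference, the failure happens on push; roughly like this (chart name, version, and repo name below are placeholders):

    helm package ./mychart
    helm s3 push ./mychart-1.2.3.tgz my-repo

The push then fails with the MetadataTooLarge error quoted above.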

@zandernelson

Yes, this is a big problem. We also have a lot of dependencies. S3 caps user-defined object metadata at 2 KB, and this plugin stores a lot of chart information in that metadata.

@LeonPatmore one workaround is to upload your packaged charts to S3 directly, using AWS CLI commands or boto3, and then run helm s3 reindex for the repo. This seems to work.
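
For example, something along these lines (bucket, chart, and repo names are placeholders):

    # package the chart locally
    helm package ./mychart

    # upload the packaged chart straight to the bucket, bypassing
    # helm s3 push and the object-metadata headers it sets
    aws s3 cp mychart-1.2.3.tgz s3://my-charts-bucket/charts/mychart-1.2.3.tgz

    # rebuild the repository index so the chart appears in it
    helm s3 reindex my-repo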

This plugin won't scale for large Helm deployments unless this issue is fixed.

How can we get around this?

@hypnoglow
Owner

Chart metadata is stored in S3 object metadata so that operations like reindexing do not need to download every chart (GET requests); they only look at each chart's metadata (HEAD requests), resulting in less data being downloaded.

I see that this can be a problem for large charts. We could add an option to disable S3 object metadata, so that these operations fall back to GET requests and download the charts to read their metadata.
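
To illustrate the difference with the AWS CLI (placeholder bucket and key):

    # HEAD request: returns only the response headers, including the
    # user-defined metadata, without transferring the chart archive
    aws s3api head-object --bucket my-charts-bucket --key charts/mychart-1.2.3.tgz

    # GET request: downloads the whole .tgz, which would then have to be
    # unpacked to read its Chart.yaml if object metadata is disabled
    aws s3api get-object --bucket my-charts-bucket --key charts/mychart-1.2.3.tgz mychart-1.2.3.tgz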

@hypnoglow
Owner

Plugin version 0.10.0 has been released.
