S3 Bucket Deployments with content encoding based on file extension. #7090
Comments
Hi @hleb-albau This is definitely an interesting use-case. The problem is that bucket deployments run:
The command does not allow specifying different content-types for different files. Splitting into different source directories won't work either, because of the (necessary) …. Are you having issues with anything other than …? One possible solution would be to have the bucket deployment lambda add a specific entry for …. This seems like the most pragmatic solution for now. WDYT?
Thanks for the response! Besides the content-encoding header, which determines the content compression (br, gz, etc.), we also have the content-type header. Example for file …. Right now our deployment process runs as follows:
So I wonder whether the CLI can detect both headers properly.
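For what it's worth, detecting both headers from the file name alone is feasible. Here is a minimal Python sketch (my own illustration, not part of the CLI or CDK) using the stdlib `mimetypes` module; the `binary/octet-stream` fallback is an assumption mirroring the script later in this thread:

```python
import mimetypes

# Brotli is missing from some Python versions' encoding maps, so register it.
mimetypes.encodings_map.setdefault(".br", "br")

def guess_headers(filename: str):
    """Guess (Content-Type, Content-Encoding) from the file extension(s)."""
    content_type, content_encoding = mimetypes.guess_type(filename)
    # Fallback for unknown types (an assumption, matching the script below).
    return content_type or "binary/octet-stream", content_encoding

print(guess_headers("index.html"))    # ('text/html', None)
print(guess_headers("bundle.js.br"))  # Content-Encoding is 'br'; the JS
                                      # content-type varies by Python version
```

`guess_type` peels the `.br` suffix off first (yielding the Content-Encoding) and then guesses the Content-Type from the remaining extension, which is exactly the two-step detection being asked about.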
Hi @hleb-albau - Yeah, it seems like there's no way around this. We can probably support this use-case by doing what you did with …. Stay tuned 👍 Thanks!
Relates also to #4687.
Just to add to this conversation, here is the script I am using to achieve this right now:

```sh
# Clear out / upload everything first
echo "[Phase 1] Sync everything"
aws s3 sync . "s3://${s3_bucket_name}" --acl 'public-read' --delete

# Brotli-compressed files
# - general (upload everything brotli-compressed as "binary/octet-stream" by default)
echo "[Phase 2] Brotli-compressed files"
aws s3 cp . "s3://${s3_bucket_name}" \
  --exclude="*" --include="*.br" \
  --acl 'public-read' \
  --content-encoding br \
  --content-type="binary/octet-stream" \
  --metadata-directive REPLACE --recursive

# - javascript (ensure javascript has correct content-type)
echo "[Phase 3] Brotli-compressed JavaScript"
aws s3 cp . "s3://${s3_bucket_name}" \
  --exclude="*" --include="*.js.br" \
  --acl 'public-read' \
  --content-encoding br \
  --content-type="application/javascript" \
  --metadata-directive REPLACE --recursive
```
It would also be good if the upload detected the file encoding and added a ….
I solved it with a Custom Resource. This one changes the ….
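The comment above is cut off, but one common shape for such a custom resource is an in-place S3 copy that replaces the object's metadata. Below is a hedged sketch of just the request-building step (the function name and structure are my own; the resulting dict matches the parameters of boto3's `s3.copy_object`). Note that `MetadataDirective="REPLACE"` also replaces the stored Content-Type, so a real implementation would re-specify it:

```python
# Hypothetical helper: build the arguments for an in-place copy that rewrites
# the Content-Encoding of an existing S3 object. The dict is what you would
# pass to boto3's s3_client.copy_object(**args).
def content_encoding_copy_args(bucket: str, key: str, encoding: str,
                               content_type: str = "binary/octet-stream"):
    return {
        "Bucket": bucket,
        "Key": key,
        "CopySource": {"Bucket": bucket, "Key": key},  # copy onto itself
        "MetadataDirective": "REPLACE",  # rewrite metadata, not just copy it
        "ContentEncoding": encoding,
        "ContentType": content_type,     # REPLACE discards the old Content-Type
    }

# Example (bucket and key names are made up):
args = content_encoding_copy_args("my-site-bucket", "index.html.br", "br",
                                  content_type="text/html")
```

This is the same trick the `aws s3 cp ... --metadata-directive REPLACE` phases in the script above rely on, just expressed as an SDK call a custom resource lambda could make.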
Currently, there is a `contentEncoding?: string` option on the `BucketDeployment` construct (system-defined content-encoding metadata to be set on all objects in the deployment). It would be nice to have the possibility to specify `contentEncoding` according to a mapping by file extension. For example: for files with the extension `.br`, specify `Content-Encoding: br`; for `.gzip` files, `Content-Encoding: gzip`; and so on.
Use Case
We use an S3 + CloudFront pair to serve a static website. To provide better performance, `.br` files are served based on the `Accept-Encoding` header, so our files have two copies (e.g. `index.html` and `index.html.br`). Currently, we have to use the AWS CLI to deploy the differently encoded files with the right headers. If the `BucketDeployment` construct supported a content-encoding-by-file-extension option, it would be a much easier out-of-the-box static hosting option.
This is a 🚀 Feature Request