
s3 sync cannot set custom header (static website) #818

Closed
martintreurnicht opened this issue Jun 18, 2014 · 10 comments
Labels
feature-request A feature should be added or improved. guidance Question that needs advice or information. s3 wontfix We have determined that we will not resolve the issue.

Comments

@martintreurnicht

Need to be able to set custom headers like with s3cmd's --add-header

use case

We currently need the ability to set HSTS headers for our html files

@konklone

Really, the AWS CLI can't set an HSTS header in S3 right now? That's sort of a dealbreaker.

@michaeltandy

There is some discussion on the AWS forums. Unfortunately, S3 itself only supports a limited set of headers; other headers have to be prefixed with x-amz-meta. HSTS isn't on the approved list of headers.

Presumably they're worried that visiting https://s3.amazonaws.com/some-bucket/ could set an IncludeSubdomains HSTS header, and then a visit to http://other.bucket.s3.amazonaws.com/ would be redirected to https://other.bucket.s3.amazonaws.com/, which would fail because it isn't covered by the wildcard certificate.
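For reference, the closest the AWS CLI gets is user-defined metadata, which S3 serves back under the x-amz-meta- prefix rather than as a real response header, so browsers ignore it. A sketch with a placeholder bucket name:

```shell
# Upload with user-defined metadata. S3 will only expose this as an
# "x-amz-meta-strict-transport-security" response header, which browsers
# do not treat as HSTS.
aws s3 cp index.html s3://example-bucket/index.html \
    --metadata strict-transport-security="max-age=31536000"

# Verify: the value comes back under "Metadata", not as a real header.
aws s3api head-object --bucket example-bucket --key index.html
```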

If you're using CloudFlare to add HTTPS to a static website hosted on S3, they've mentioned plans to add the HSTS header themselves, so that might be an option if S3 doesn't get around to sorting this out :)

@jamesls
Member

jamesls commented Jan 14, 2015

If I'm understanding the issue correctly, this would require a change to the S3 API to allow this header to be specified. That means there's nothing we can do on the AWS CLI side until S3 itself supports this. I'm going to close this issue out for now; once S3 supports this we can revisit it.
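For anyone landing here: the CLI does cover the small fixed set of headers that S3 itself stores as system metadata, just not arbitrary ones like Strict-Transport-Security. A sketch with a placeholder bucket name:

```shell
# These flags map to the system headers S3 will actually serve back
# (Cache-Control, Content-Encoding, Content-Language, etc.).
aws s3 sync ./site s3://example-bucket \
    --cache-control "max-age=86400" \
    --content-encoding gzip \
    --content-language en
```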

@richp10

richp10 commented Apr 20, 2017

The S3 API DOES support adding custom headers - per the original report s3cmd does this using the s3 api....

My use case is S3 behind CloudFront, and it is easiest to set all my headers in S3, including Public-Key-Pins, X-Frame-Options, etc.

eg.

s3cmd put --recursive \
--add-header="Cache-Control:max-age=86400" \
--add-header="Vary:Accept-Encoding"  \
--add-header="X-Content-Type-Options:nosniff" \
--add-header="X-Permitted-Cross-Domain-Policies:master-only" \
--add-header="X-XSS-Protection: 1; mode=block" \
etc

I am adding this as a +1 for letting the CLI set any header you want; it is a real need. I can do this with s3cmd but not the AWS CLI, so I need to use both tools.

@tiagomrp

tiagomrp commented May 2, 2017

@richp10 I've tried to set an X-Frame-Options header on my S3 bucket, but I still can't see it when retrieving the response headers, and my clickjacking PoC still works: I am still able to show my S3 content (e.g. a static index.html) in an iframe. This is what I've executed so far:

s3cmd --acl-public s3://xxxx-bucket/index.html --add-header="X-Frame-Options:sameorigin"
s3cmd put index.html --recursive --add-header="X-Frame-Options:sameorigin; deny" s3://xxxx-bucket
s3cmd --recursive modify --add-header="X-Frame-Options:sameorigin; deny" s3://xxxxxx-bucket

This is what I get from curl -s -v:

  • Connected to s3-eu-west-1.amazonaws.com (x-x-x-x-x) port 443 (#0)
  • TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  • Server certificate: *.s3-eu-west-1.amazonaws.com
  • Server certificate: DigiCert Baltimore CA-2 G2
  • Server certificate: Baltimore CyberTrust Root

> GET /xxxxxx-bucket/index.html HTTP/1.1
> Host: s3-eu-west-1.amazonaws.com
> User-Agent: curl/7.43.0
> Accept: */*

< HTTP/1.1 200 OK
< x-amz-id-2: /oYmgEJZPL5g2J07hZEGwHWdTWHBpitrz0LepFLmq9FUN/yLWTlCs6PMF0GR7LyKu9giP1tbKxU=
< x-amz-request-id: DD4F649DFD4BE6D4
< Date: Tue, 02 May 2017 11:25:53 GMT
< Last-Modified: Tue, 02 May 2017 11:14:28 GMT
< ETag: "b6ab50be0773c9c123f6ce9a69f2e0dd"
< x-amz-meta-s3cmd-attrs: uid:503/gname:staff/uname:xxxxx/gid:20/mode:33188/mtime:1493217894/atime:1493723542/md5:b6ab50be0773c9c123f6ce9a69f2e0dd/ctime:1493217894
< Accept-Ranges: bytes
< Content-Type: text/html
< Content-Length: 892
< Server: AmazonS3

Any ideas?

Thank you

@richp10

richp10 commented May 2, 2017

Strewth, I'm really sorry: my previous post was in error.

s3cmd does not show any error when you set headers other than Cache-Control, but the headers are not actually changed on S3. This functionality is not supported by S3.

I forgot to come back and comment on my post, but I have moved on to looking at how I can use CloudFront to add the correct headers. My own use case needs lots of flexibility, so I am planning on using the new Lambda@Edge gizmo to add precisely the headers I need.

There is some native support for CORS headers using CloudFront and S3; e.g. see http://blog.celingest.com/en/2014/10/02/tutorial-using-cors-with-cloudfront-and-s3/

Unless you front S3 with CloudFront, I don't think there is any way of achieving what you need.

Sorry again that my previous post misled you.

@tiagomrp

tiagomrp commented May 2, 2017

No worries, thank you for the heads up!

@tiagomrp

tiagomrp commented May 2, 2017

@richp10 Do you have a way to prevent clickjacking using AWS WAF and/or CloudFront?

Would really appreciate any insight on this if possible.

Thank You
Tiago

@richp10

richp10 commented May 3, 2017

I am personally hoping to use Lambda@Edge to have complete control over security headers, but there might be a way of doing what you need without that complexity.

Make sure you understand how CORS works, then read this: http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html, which explains how to get S3 to return CORS headers. I think you need to set the 'allowed origin' to the domain (or wildcarded domain) that you want to be able to load js from this bucket.

Then, in CloudFront, you must whitelist the following request headers so S3 knows what to do:
Access-Control-Request-Headers
Access-Control-Request-Method
Origin

I have not tested this, so don't take my word that this is possible or will work, but I think it will and it is certainly worth exploring (this is what I plan to explore if I can't get the Lambda approach working).
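The S3 side of that setup can be configured from the CLI. A sketch, with a placeholder bucket name and allowed origin:

```shell
# Allow cross-origin GETs from one origin. Once CloudFront forwards the
# three request headers listed above, S3 can answer preflight requests
# with the matching Access-Control-Allow-* response headers.
aws s3api put-bucket-cors --bucket example-bucket --cors-configuration '{
  "CORSRules": [{
    "AllowedOrigins": ["https://www.example.com"],
    "AllowedMethods": ["GET"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }]
}'
```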

@tiagomrp

tiagomrp commented May 3, 2017

I have tried changing the distribution behaviour to whitelist those 3 headers. Still, it didn't work. I spoke with someone from their Help Center, and an engineer from S3 said that the only viable option to ensure HSTS would be to use a function in Lambda@Edge, as we discussed.
At the moment, I'm just waiting to be accepted onto the whitelist so that I can start using Lambda@Edge in preview mode.

Thank you for the help anyway.

@diehlaws diehlaws added guidance Question that needs advice or information. wontfix We have determined that we will not resolve the issue. and removed wontfix labels Jan 4, 2019

7 participants