
S3 presigned urls do not sign headers x-amz-server-side-encryption-aws-kms-key-id, x-amz-server-side-encryption #874

Closed
sbull opened this issue Jul 15, 2015 · 26 comments
Labels
closing-soon This issue will automatically close in 4 days unless further comments are made. feature-request A feature should be added or improved. guidance Question that needs advice or information.

Comments

@sbull

sbull commented Jul 15, 2015

There's a regression in presigned URLs that use KMS encryption. I've upgraded from 2.0.39 to 2.1.7, and 2.1.7 is broken.

In 2.0.39, when I call

Aws::S3::Object#presigned_url(:put, { server_side_encryption: 'aws:kms', ssekms_key_id: MY_KEY_ID })

I get a url with
...&X-Amz-SignedHeaders=host%3Bx-amz-server-side-encryption%3Bx-amz-server-side-encryption-aws-kms-key-id&...

But when I use 2.1.7, those headers are missing:
...&X-Amz-SignedHeaders=host&...

and when I apply the headers to my PUT request on that url (as required: http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html#put-object-sse-specific-request-headers), I get:

<Error>
  <Code>AccessDenied</Code>
  <Message>There were headers present in the request which were not signed</Message>
  <HeadersNotSigned>x-amz-server-side-encryption-aws-kms-key-id, x-amz-server-side-encryption</HeadersNotSigned>
  ...
</Error>

Rolling back to 2.0.39 fixes my problem.

@sbull
Author

sbull commented Jul 15, 2015

I believe this regression happened here: 62766c9 (apparently to fix #816). But there appears to be some back-and-forth with the headers in general (4ee9c95, 72535aa, 2eae71f, #543).

@trevorrowe
Member

Just a heads up: I'm looking into this. I believe the solution is going to involve tracking down all of the known x-amz-* headers and determining which ones can, must, and must not be hoisted to the URL, and which must be signed and sent as headers.

@sbull
Author

sbull commented Jul 23, 2015

Curious... is there a reference implementation for the AWS SDKs? Perhaps the Java one? Looks like it's handled here (including SSE headers, ...): https://github.com/aws/aws-sdk-java/blob/948bc1a418d357abf8fd46cc19123ced2e014ce0/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/AmazonS3Client.java#L2563

@trevorrowe
Member

So this change was made so that x-amz-* headers would be sent via the query string, eliminating the need for users to provide these values twice. Before the change you had to specify these values to the presigner and then again to your HTTP client to send them as headers. This was necessary because signature version 4 requires all headers whose names start with x-amz- to be signed, including their expected values.

When the change was made, it was intended as a bug-fix so that a user could provide the values once to the presigner and have the URL query string carry them. This means the presigned URL no longer signs them as headers, since they are part of the request URI.

Here is an example showing how to create a presigned PUT URL with KMS server-side encryption and then upload something with Net::HTTP:

require 'aws-sdk'
require 'net/http'
require 'uri'
require 'logger'

# bucket: 'aws-sdk', key: 'secret'
obj = Aws::S3::Object.new('aws-sdk', 'secret')

uri = URI(obj.presigned_url(:put, {
  server_side_encryption: 'aws:kms',
  ssekms_key_id: "cb467d40-59be-44d7-813f-d0f281092da8",
}))

http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
http.set_debug_output(Logger.new($stdout))

req = Net::HTTP::Put.new(uri.request_uri)
req.body = "secret-body-#{Time.now.to_i}"
res = http.request(req)

obj.ssekms_key_id # show that it is encrypted server-side

This means you should be able to simply remove those headers from your HTTP request and everything should work. Clearly this was an unintentional side effect of the bug-fix, and a breaking change. The tricky part is that if this is reverted, newer users would be broken (those counting on the values being part of the URI and not currently sending them as headers). This is a sort of catch-22, and I'm not sure what the best path forward is.

Thoughts?

@sbull
Author

sbull commented Jul 23, 2015

When I originally tried just removing the headers, I got a generic authorization error message. I have a bucket policy set up to require that files are encrypted:

        {
            "Sid": "DenyUnEncryptedObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::.../*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "aws:kms"
                }
            }
        }

I'm guessing that when your object is PUT in your example above, it's not actually being encrypted. But I'll have to try again.

Policy reference: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html

@trevorrowe
Member

The final line of my example was there to demonstrate the object was encrypted:

obj.ssekms_key_id # show that it is encrypted server-side

That method returns nil for objects that are not KMS server-side encrypted. After uploading the object via the presigned PUT, it returns the ARN of the KMS key I used.

@sbull
Author

sbull commented Jul 24, 2015

Here's what I get using mostly your example code (different bucket, object key, credentials):

opening connection to <bucket>.s3.amazonaws.com:443...
opened
starting SSL for <bucket>.s3.amazonaws.com:443...
SSL established
<- "PUT /123/456/secret?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=...&X-Amz-Date=20150724T094043Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&x-amz-server-side-encryption=aws%3Akms&x-amz-server-side-encryption-aws-kms-key-id=...&X-Amz-Signature=c146b9e31433793f7ce1c498e4cf34f7c9e99b361e19bb2637af8e0307a76c78 HTTP/1.1\r\nAccept-Encoding: gzip;q=1.0,deflate;q=0.6,identity;q=0.3\r\nAccept: */*\r\nUser-Agent: Ruby\r\nConnection: close\r\nHost: <bucket>.s3.amazonaws.com\r\nContent-Length: 22\r\nContent-Type: application/x-www-form-urlencoded\r\n\r\n"
<- "secret-body-1437731099"
-> "HTTP/1.1 403 Forbidden\r\n"
-> "x-amz-request-id: E6C878ADCC13BDBF\r\n"
-> "x-amz-id-2: 3aXhZKqZnNRdh+3nE/DPIlscAn5VDcx7Li1CIvnbL5R6yyyKYYwkbXCSyTzauec3\r\n"
-> "Content-Type: application/xml\r\n"
-> "Transfer-Encoding: chunked\r\n"
-> "Date: Fri, 24 Jul 2015 09:44:54 GMT\r\n"
-> "Connection: close\r\n"
-> "Server: AmazonS3\r\n"
-> "\r\n"
-> "e7\r\n"
reading 231 bytes...
-> "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>E6C878ADCC13BDBF</RequestId><HostId>3aXhZKqZnNRdh+3nE/DPIlscAn5VDcx7Li1CIvnbL5R6yyyKYYwkbXCSyTzauec3</HostId></Error>"
read 231 bytes
reading 2 bytes...
-> "\r\n"
read 2 bytes
-> "0\r\n"
-> "\r\n"
Conn close
=> #<Net::HTTPForbidden 403 Forbidden readbody=true>

If I remove the bucket policy restriction described above, it works:

-> "HTTP/1.1 200 OK\r\n"
-> "x-amz-id-2: y8q3rybqs5+GmhKtOXB2CMLZ3m3AHONdjCEHETXfUr5sJ0qdxYNVopmRCHwo5bFe\r\n"
-> "x-amz-request-id: 1D8720EAF0710E9F\r\n"
-> "Date: Fri, 24 Jul 2015 09:55:57 GMT\r\n"
-> "x-amz-server-side-encryption: aws:kms\r\n"
-> "x-amz-server-side-encryption-aws-kms-key-id: ...\r\n"
-> "ETag: \"359201a47ab0c889dc9126b7ebc282b1\"\r\n"
-> "Content-Length: 0\r\n"
-> "Server: AmazonS3\r\n"
-> "Connection: close\r\n"
-> "\r\n"
reading 0 bytes...
-> ""
read 0 bytes
Conn close
=> #<Net::HTTPOK 200 OK readbody=true>

I need a bucket policy that restricts uploads to aws:kms-encrypted files. So, is there a way to write a bucket policy that checks the URL parameters? If not, I need to use the headers.

@trevorrowe
Member

Hmm… Given that Amazon S3 accepts the upload with the server-side encryption value in the query string, it seems like that should also be checked when matching the policy. I'll follow up on this and see what the expected behavior is.

@trevorrowe
Member

I spoke with an engineer on the S3 team. They are going to update their API reference documentation to reflect this limitation. As a work-around, I suppose we will want to add a #presigned_request method that generates the presigned URL without hoisting any of the request headers. It could return an HTTP URI and a headers hash. This would allow you to make the request and know what the expected headers are.
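
A rough sketch of how that might be used; everything here is hypothetical at this point, and the method name and the [url, headers] return shape simply follow the description above:

# Hypothetical usage of the proposed API (illustrative only)
url, headers = obj.presigned_request(:put,
  server_side_encryption: 'aws:kms',
  ssekms_key_id: MY_KEY_ID)
# headers (hypothetical) => {
#   'x-amz-server-side-encryption' => 'aws:kms',
#   'x-amz-server-side-encryption-aws-kms-key-id' => MY_KEY_ID
# }
# These would be sent verbatim with the PUT to `url`, since they stay signed as headers.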

Would this resolve the issue?

@sbull
Author

sbull commented Sep 1, 2015

It sounds like that would work, if I understand correctly. I'd get a URL that looks like it did previously (...&X-Amz-SignedHeaders=host%3Bx-amz-server-side-encryption%3Bx-amz-server-side-encryption-aws-kms-key-id&...), and a hash containing { 'x-amz-server-side-encryption' => 'aws:kms', 'x-amz-server-side-encryption-aws-kms-key-id' => ... }, correct? If so, then that seems like a fine workaround.

It would be great if this could be mentioned in the CHANGELOG too.

awood45 added a commit that referenced this issue Oct 22, 2015
@tyrauber

This issue appears to have resurfaced in 2.3.4.

s3 = Aws::S3::Resource.new(region: ENV['S3_REGION'])
obj = s3.bucket(ENV['S3_BUCKET_NAME']).object("#{Time.now.strftime("%Y/%m/%d/%H")}#{SecureRandom.hex}")
url = URI.parse(obj.presigned_url(:put, expires_in: 36000, acl: 'public-read'))

Results in the following request URL:

https://bucketname.s3-us-west-2.amazonaws.com/2016/05/13/110d88197423e3bd5d868528cdf89e2ac5?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AWSKEY/20160513/us-west-2/s3/aws4_request&X-Amz-Date=20160513T173731Z&X-Amz-Expires=36000&X-Amz-SignedHeaders=host&x-amz-acl=public-read&X-Amz-Signature=cb47754a8d22d35c9f696a552772e70793dfd3a0f44c0643cd3d530b2a8f4e1e

Which results in the error:

<Error>
<Code>AccessDenied</Code>
<Message>There were headers present in the request which were not signed</Message>
<HeadersNotSigned>x-amz-acl</HeadersNotSigned>
<RequestId>F81F5AE6AE7F66C8</RequestId>
<HostId>eR+lfCsQFb/vGUtc2A6bQI263Q2L9Jip5VU9FmhLWEg73+l0kvrxLfroPa0MfmUUvUI54O5XddE=</HostId>
</Error>

Downgrading to 2.0.39 solves the issue.

I noticed that in 2.0.39 the URL has a semicolon after the X-Amz-SignedHeaders value. Is the lack of the semicolon what is causing the error in 2.3.4?

&X-Amz-SignedHeaders=host;x-amz-acl&X-Amz-Signature=00f4b33d45d314f63ed7e3a58c4a07bcd847fdd474d2f91f073a200bcd047c93

@cjyclaire
Contributor

Soft ping here. PR #1477 just opened addressing this feature request : )
Feel free to chime in and add comments : )

@mcfiredrill

I had the same issue as @tyrauber.

I'm trying to PUT to a bucket with a presigned URL and set the ACL to public-read.

    client = Aws::S3::Client.new(
      :region => 'us-east-1',
      :access_key_id => ENV['S3_KEY'],
      :secret_access_key => ENV['S3_SECRET'],
    )
    signer = Aws::S3::Presigner.new(client: client)
    url = signer.presigned_url(:put_object, bucket: ENV['S3_BUCKET'],
                                            key: "mykey",
                                            expires_in: 10.hours.to_i,
                                            acl: "public-read")

I'm sending these headers on the client:

    ajaxSettings: {
      headers: {
        'Content-Type': 'audio/mpeg',
        'x-amz-acl': 'public-read'
      }
    }

The file starts to upload, but I get a 403 halfway through:

<Error><Code>AccessDenied</Code><Message>There were headers present in the request which were not signed</Message><HeadersNotSigned>x-amz-acl

Yet it looks like the header is present in the url:

Request URL:https://streampusherdev.s3.amazonaws.com/datafruits/02Hiccups.mp3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJSOIWA7BITEZINJA%2F20170504%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20170504T032947Z&X-Amz-Expires=36000&X-Amz-SignedHeaders=host&x-amz-acl=public-read&X-Amz-Signature=7127b81a6b9fa7449205fb1b62d43439b1ccf3779054eaed9aa76734be8a681a
Request Method:PUT
Status Code:403 Forbidden

I tried removing the headers from my http request on the client side, but it doesn't seem to make a difference.

I downgraded to 2.0.39 and things were working well (I have to specify the headers on the client-side HTTP request). The request URL looks a little different:

Request URL:https://streampusherdev.s3.amazonaws.com/datafruits/02Hiccups.mp3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJSOIWA7BITEZINJA%2F20170504%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20170504T034428Z&X-Amz-Expires=36000&X-Amz-SignedHeaders=host%3Bx-amz-acl&X-Amz-Signature=00e0f0f4f7236b361dc2ae927327b108f16c5040dbdbf86c1420194616cd966a

x-amz-acl is still present in the X-Amz-SignedHeaders, but the value for it is not.

@cjyclaire
Contributor

cjyclaire commented May 4, 2017

@mcfiredrill Sorry to hear that you have to work around the problem by downgrading. This is a known limitation on the S3 side; we are actively working on a feature that allows flexible, customizable presigning in the long run. Meanwhile, the aws-sigv4 gem is available for customized signing and presigned requests.

Here is a work-around with aws-sigv4:

require 'aws-sdk' # provides Aws::S3::Client (aws-sdk-s3 in SDK v3)
require 'aws-sigv4'
require 'net/http'

# an S3 client at region 'us-west-2'
client = Aws::S3::Client.new(region: 'us-west-2')

signer = Aws::Sigv4::Signer.new(
  service: 's3',
  region: 'us-west-2',
  credentials_provider: client.config.credentials,
  uri_escape_path: false,
)

# create a presigned URL for an object with bucket 'a-fancy-bucket' and key 'hello_world'
url = signer.presign_url(
  http_method: 'PUT',
  url: 'https://a-fancy-bucket.s3-us-west-2.amazonaws.com/hello_world',
  headers: {
    "Content-Type" => "audio/mpeg",
    "x-amz-acl" => "public-read"
  },
  body_digest: 'UNSIGNED-PAYLOAD',
)

# making the request
body = ...
Net::HTTP.start(url.host, url.port, use_ssl: true) do |http|
  http.send_request("PUT", url.request_uri, body, {
    "content-type" => "audio/mpeg",
    "x-amz-acl" => "public-read",
  })
end
# => #<Net::HTTPOK 200 OK readbody=true>

@mcfiredrill

@cjyclaire Thanks, I may give that gem a shot 👍

@ajw725

ajw725 commented Jun 19, 2019

I'm still having problems with this in aws-sdk-core 2.10.47 / aws-sigv4 1.0.2.

I have a policy on my bucket that requires the x-amz-server-side-encryption parameter (I understand I can also just turn on encryption in the bucket settings, but that's beside the point). I'm trying to create a presigned URL for a PUT request (uploading a PDF file) like:

my_bucket.object(new_key).presigned_url(:put, {expires_in: 60, content_type: 'application/pdf', server_side_encryption: 'AES256'})

and then upload with an ajax request like this, using a file object from a browser form:

$.ajax({url: presignedUrl, type: 'PUT', contentType: file.type, processData: false, data: file})

But I get a 403 when I try to execute the request. I see this in Aws::Signers::V4#presigned_url:

request.headers.keys.each do |key|
  if key.match(/^x-amz/i)
    params.set(key, request.headers.delete(key))
  end
end

So any param/header matching "x-amz-*" gets moved from the actual headers to the request params, and thus 1) isn't present in the actual headers, and 2) isn't included in the signed headers. This works fine if I remove that bucket policy, but with the policy in place, my understanding is that the server-side encryption header must be included as an actual header and in the signed-headers param. If I monkey-patch the gem so it leaves the encryption header alone (i.e. comment out the snippet I pasted above) and add the header to my ajax request like this:

$.ajax({url: presignedUrl, type: 'PUT', contentType: file.type, processData: false, data: file, beforeSend: function(xhr) {xhr.setRequestHeader('x-amz-server-side-encryption', 'AES256')}})

then it uploads successfully. So it seems like the code moving the headers to params is a bit overzealous and needs to have some exceptions for certain parameters (see the sketch below).
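
For illustration only, the sort of exception list I have in mind might look like this; it's a sketch of the idea, not the SDK's actual code, and the constant name and its contents are made up:

# Sketch: skip hoisting for headers a bucket policy needs to see as signed headers
HEADERS_TO_KEEP = %w[
  x-amz-server-side-encryption
  x-amz-server-side-encryption-aws-kms-key-id
  x-amz-acl
]

request.headers.keys.each do |key|
  # leave the exceptions alone so they stay signed and sent as real headers
  next if HEADERS_TO_KEEP.include?(key.downcase)
  if key.match(/^x-amz/i)
    params.set(key, request.headers.delete(key))
  end
end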

Edit: I see this issue and the comments about aws-sigv4, but I thought that had just been absorbed into the main aws-sdk-ruby gem now, so I wasn't sure if this was still expected.

Edit 2: Using Aws::Sigv4::Signer directly as @cjyclaire described above does seem to work. This whole thing was pretty frustrating/confusing to figure out, though.

@mullermp
Contributor

Reopening - deprecating usage of Feature Requests backlog markdown file.

mullermp reopened this Oct 21, 2019
mullermp added the feature-request and needs-major-version labels Oct 21, 2019
@chase439

chase439 commented Dec 10, 2019

This issue has been here for so long; when can we get a real fix?

Also, can someone offer a solution on how to fix the usage of presigned_url with server_side_encryption in the header for this: https://github.com/rails/rails/blob/master/activestorage/lib/active_storage/service/s3_service.rb#L82 (url_for_direct_upload and/or headers_for_direct_upload methods)

I'm not sure how to use aws-sigv4 in this file.

@mullermp
Contributor

mullermp commented Dec 10, 2019

As far as I know, a real fix requires a major version bump as the fix would be breaking. Happy to take any pull requests if that's not the case.

@chase439

chase439 commented Dec 10, 2019

How about a solution where it doesn't delete the x-amz-* headers?

It could still hoist them to the query string but not call request.headers.delete, as seen in Aws::Signers::V4#presigned_url. Then it would work for all cases?

I finally got ActiveStorage to work with this solution.

I monkeypatched Aws::Signers::V4#presigned_url from
value = Aws::Sigv4::Signer.uri_escape(http_req.headers.delete(key))
to
value = Aws::Sigv4::Signer.uri_escape(http_req.headers[key])

To summarize: even though the query string works for the upload itself, headers are still needed for the bucket policy.
Here's the official list of available condition keys that can be used in a bucket policy:
https://docs.aws.amazon.com/AmazonS3/latest/dev/amazon-s3-policy-keys.html#AvailableKeys-iamV2 Query string parameters, as far as I know, are not supported in bucket policies! x-amz-server-side-encryption, x-amz-server-side-encryption-aws-kms-key-id, x-amz-acl, and many others still need to be sent as headers in order to apply any policy to them.


For those wanting to know, here's the custom working S3 service I wrote for ActiveStorage:

    def url_for_direct_upload(key, expires_in:, content_type:, content_length:, checksum:)
      instrument :url, key: key do |payload|
        generated_url = object_for(key).presigned_url :put, expires_in: expires_in.to_i,
          content_type: content_type, content_length: content_length, content_md5: checksum,
          server_side_encryption: 'aws:kms',
          ssekms_key_id: @options[:kms_key_id]

        payload[:url] = generated_url

        generated_url
      end
    end


    def headers_for_direct_upload(key, content_type:, checksum:, filename: nil, disposition: nil, **)
      content_disposition = content_disposition_with(type: disposition, filename: filename) if filename

      {
        "Content-Type" => content_type,
        "Content-MD5" => checksum,
        "Content-Disposition" => content_disposition,
        "x-amz-server-side-encryption" => 'aws:kms',
        "x-amz-server-side-encryption-aws-kms-key-id" => @options[:kms_key_id]
      }
    end

You'll also have to add those headers to the S3 bucket's CORS configuration:

<AllowedHeader>x-amz-server-side-encryption-aws-kms-key-id</AllowedHeader>
<AllowedHeader>x-amz-server-side-encryption</AllowedHeader>
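
If you prefer setting this from Ruby, a sketch with the SDK's put_bucket_cors might look like the following (bucket name, allowed origin, and the extra allowed headers are placeholders to adapt):

require 'aws-sdk-s3'

# Sketch only: apply a CORS rule that allows the upload headers above.
Aws::S3::Client.new.put_bucket_cors(
  bucket: 'my-bucket',                        # placeholder
  cors_configuration: {
    cors_rules: [{
      allowed_origins: ['https://example.com'], # placeholder
      allowed_methods: ['PUT'],
      allowed_headers: [
        'Content-Type', 'Content-MD5', 'Content-Disposition',
        'x-amz-server-side-encryption',
        'x-amz-server-side-encryption-aws-kms-key-id'
      ]
    }]
  }
)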

@mullermp
Contributor

Does the presigned_request use case work for you? I added it somewhat recently:

https://github.com/aws/aws-sdk-ruby/blob/master/gems/aws-sdk-s3/lib/aws-sdk-s3/presigner.rb#L124

It does not hoist headers: https://github.com/aws/aws-sdk-ruby/blob/master/gems/aws-sdk-s3/lib/aws-sdk-s3/presigner.rb#L210

And the method returns both the url and headers to send alongside it.

This is in line with Trevor's recommendation earlier in this thread (#874 (comment)), but it was never implemented until recently.
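
A minimal sketch of that flow, assuming presigned_request returns the URL plus a headers hash as described above (bucket, key, and KMS key id are placeholders):

require 'aws-sdk-s3'
require 'net/http'
require 'uri'

signer = Aws::S3::Presigner.new
url, headers = signer.presigned_request(
  :put_object,
  bucket: 'my-bucket',           # placeholder
  key: 'my-key',                 # placeholder
  server_side_encryption: 'aws:kms',
  ssekms_key_id: 'my-kms-key-id' # placeholder
)

uri = URI(url)
Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  # send the returned headers verbatim so they match what was signed
  req = Net::HTTP::Put.new(uri.request_uri, headers)
  req.body = 'hello world'
  http.request(req)
end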

mullermp added the closing-soon label and removed the needs-major-version label May 27, 2020
@chase439

Maybe; it looks promising. I just can't test it for a while.

@mullermp
Contributor

I'm pretty sure it solves this issue. If not, let me know and I'll re-open.

@chase439

I was able to test it today. The new presigned_url method, which uses _presigned_request, seems to work well! I no longer need the monkey patch to hoist the headers.

@mullermp
Contributor

Excellent, @chase439! And when you said "a while," you really meant it. Thanks for getting back to me.

@chase439

Just to follow up: I had to use presigned_request instead of presigned_url, as it sets hoist to false, which is needed to get those x-amz attributes into the headers. Look at the definitions of presigned_request vs. presigned_url for those who are curious.
