Support AWS Signature Version 4 for S3 uploads #1336

Closed
bustbr opened this Issue Dec 15, 2014 · 99 comments

@bustbr
bustbr commented Dec 15, 2014

I created a new S3 bucket and tried uploading a file (using Fineuploader 5.0.8) when I was confronted with this error message: "The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256."

I found out that there are currently two ways of creating a signature for an AWS request, v2 and v4.
Fine Uploader seems to support only the old v2 signatures, which are not accepted by all AWS regions (e.g. eu-central-1/"EU (Frankfurt)").
See http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html for more details.

As far as I can tell, this goes beyond changing the server-side code that signs the request: the request itself needs to include an Authorization header that names the signing algorithm ("AWS4-HMAC-SHA256") along with other information, and the string to be signed must also include the algorithm and other details, not just the JSON-encoded policy.
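For reference, a v4-signed REST request carries an Authorization header roughly of this shape (values are illustrative, following the pattern in the linked docs):

Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20141215/eu-central-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=<hex-encoded signature>

and the corresponding string to sign is built from a hash of the entire "canonical request" rather than from the policy alone:

AWS4-HMAC-SHA256
20141215T000000Z
20141215/eu-central-1/s3/aws4_request
<hex-encoded SHA-256 of the canonical request>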

@rnicholus
Member

Interesting. It's not clear how this affects multipart encoded POST uploads that bypass the REST API. This workflow is critical to supporting IE9 and older. In that case, there is no Authorization header (since there is no way to specify headers when uploading files in IE9 and older).

Some things for us to do and investigate:

  • Ensure that v2 signatures are supported by default (for backwards compatibility) OR introduce a breaking change and require all signatures returned from the signature server are v4. The latter is obviously a potential annoyance to integrators and must wait for a major version release of Fine Uploader. If we go the non-breaking-change route, we'll need to add a version property to the signature option. For v4 signatures, we'll need more information from the server, such as the bucket region.
  • How do v4 signatures work for multipart encoded POST uploads that bypass the REST API? Is the format of the signature field value for signed MPE POST uploads identical to the format of the Authorization header for v4 signed REST uploads?

This change will need to include code adjustments, documentation updates, and server-side example updates.

We'll look into this for a future release.

@rnicholus rnicholus changed the title from Feature Request: Support AWS Signature Version 4 for S3 uploads to Support AWS Signature Version 4 for S3 uploads Dec 15, 2014
@rnicholus
Member

The more I think about this, the more I'm fairly sure that we may never be able to remove the ability to accept v2 signatures. Fine Uploader S3 is used against some S3-like endpoints. Also, if we are to support uploads to Google Cloud Storage via the S3 compatibility layer #1158, I assume v2 signatures must be used.

@rnicholus rnicholus added 2 - Do and removed 0 - Discuss labels Feb 4, 2015
@JaapKooiker

I'm wondering where we are on this issue. I had everything working, but now I'm using Frankfurt and it's broken. Is there a way to make a separate 'hotfix' just for Frankfurt (or whatever v4 region)? I really need this... (I have a license.)

@rnicholus
Member

We will support v4 signatures in a future release, most likely 5.3.

@jetheredge

I would also like to chime in and say that fine uploader cannot be used (in any region) to upload to S3 using keys from KMS until signature V4 is supported.

@rnicholus
Member

For those who don't know (including myself) please describe this "KMS".

@jetheredge

Sorry, the Amazon Key Management Service. It's basically a cloud-hosted Hardware Security Module. It allows customers to do server-side encryption with S3 without having to use a shared key. Excellent for regulatory compliance. http://aws.amazon.com/kms/

@jetheredge

Here are some more details: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html

I got as far as adding the correct headers into fine uploader but stopped once I ran up against the v4 authentication headers.
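For reference, the SSE-KMS headers in question (per the linked doc) look like the following; the key id header can be omitted to fall back to the account's default KMS key, and the ARN below is a placeholder:

x-amz-server-side-encryption: aws:kms
x-amz-server-side-encryption-aws-kms-key-id: arn:aws:kms:us-east-1:111122223333:key/<your-key-id>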

@rnicholus
Member

Supporting v4 signatures will be investigated as part of the 5.3 version of Fine Uploader. 5.2 is currently in development.

@jetheredge

Great, I saw that above, I just wanted to make you aware that there were other concerns than just usage within certain regions. Thanks!

@rnicholus
Member

Yes, thanks for the info. Please see my list of TODO items related to this case.

I'd like to introduce this as a non-breaking change. In fact, I think it is imperative that both v2 and v4 signatures are supported. As reflected in my TODO list, I'm not sure how v4 signatures work for non-REST uploads (MPE upload POST requests used in older browsers). If anyone happens to have more information about this, that will save me some research time.

@jetheredge

Amazon has a pretty good example here: http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-post-example.html
The signature goes into the form as before, and an additional X-Amz-Algorithm parameter with the value 'AWS4-HMAC-SHA256' is added. Does that help?
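For a v4 POST, the signature-related form fields end up looking roughly like this (values are illustrative, following that example page):

x-amz-algorithm: AWS4-HMAC-SHA256
x-amz-credential: AKIAIOSFODNN7EXAMPLE/20150313/us-east-1/s3/aws4_request
x-amz-date: 20150313T000000Z
policy: <base64-encoded policy document>
x-amz-signature: <hex-encoded signature of the base64 policy>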

@rnicholus
Member

Yes, it does, thanks. I wasn't able to locate that document after a brief search earlier.

@rnicholus rnicholus added this to the 5.3.0 milestone Mar 18, 2015
@rnicholus rnicholus added 3 - Doing and removed 2 - Do labels Apr 21, 2015
@rnicholus
Member

Starting work on this as part of 5.3 now.

TODO:

  • Ensure that v2 signatures are supported by default. Will need to add a version property to the signature option.
  • Support v4 signatures for REST uploads.
  • Support v4 signatures for non-REST uploads.
  • Support v4 signature for client-side signing workflow.
  • Update all server-side examples to include support for V4 signatures.
  • Update docs with all V4-related signature info.
  • Start working on #1225. This will involve moving any code-specific stuff out of the blog post and into the docs site on an S3 feature page. Any server-specific elements should exist on the S3 server docs page.

Unfortunately, we won't be able to send the expected signature version to the server in the signature request. This would require a breaking change for non-REST uploads, since the signature request message body, in that case, consists solely of the policy JSON.

@rnicholus rnicholus added a commit that referenced this issue Apr 22, 2015
@rnicholus rnicholus feat(s3): start of S3 v4 signature support
Add version config option.

#1336
a7ee94f
@rnicholus
Member

Looks like the hardest part of this, as expected, will be attempting to understand Amazon's convoluted new signing policy. It is, unfortunately, vastly different than the process required to generate V2 signatures. I'm currently attempting to generate a simple V4 signature based on the (disorganized) signature docs on the AWS site. No luck just yet. The example code in their docs is riddled with syntax errors, so that's of no help.

@rnicholus
Member

Phew, finally made some headway on generating a V4 signature in PHP. JS may be a bit more complicated, due to my lack of knowledge of CryptoJS and AWS's invalid CryptoJS example code.
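For the curious, the key-derivation step itself is small once the string to sign is in hand; here's a minimal sketch using Node's built-in crypto module (function names are illustrative, and this is not the actual Fine Uploader or server-example code):

const crypto = require("crypto");

// HMAC-SHA256 helper that returns a raw Buffer (keys are chained as bytes)
function hmac(key, data) {
    return crypto.createHmac("sha256", key).update(data, "utf8").digest();
}

// Derive the v4 signing key: secret -> date -> region -> service -> "aws4_request"
function getV4SigningKey(secretKey, dateStamp, region, service) {
    const kDate = hmac("AWS4" + secretKey, dateStamp); // dateStamp is "YYYYMMDD"
    const kRegion = hmac(kDate, region);               // e.g. "eu-central-1"
    const kService = hmac(kRegion, service);           // "s3"
    return hmac(kService, "aws4_request");
}

// Sign a string-to-sign and return the lowercase hex signature v4 expects
function signV4(secretKey, dateStamp, region, service, stringToSign) {
    const signingKey = getV4SigningKey(secretKey, dateStamp, region, service);
    return crypto.createHmac("sha256", signingKey).update(stringToSign, "utf8").digest("hex");
}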

@rnicholus rnicholus added a commit that referenced this issue Apr 28, 2015
@rnicholus rnicholus feat(v4-signatures): V4 signature support for POST uploads
Includes changes to development S3 PHP signature server. These changes will need to be rolled into the examples in the server-examples repo as well before release.

In addition to writing a v4 test, I also cleaned up some tests in the simple-uploads spec by removing the flaky assertion-counting logic.

Doc updates to S3 options are included.

#1336
960e6e3
@rnicholus
Member

Wow, the new V4 REST request signing algorithm is substantially more complicated than the POST upload request method. The devs at Amazon really went out of their way to make this difficult to implement, and the complexity of the algorithm seems to be mostly unnecessary. It almost seems as if they created an unnecessarily complex signing process as an additional layer of security, which is of course a bad idea, but I can't fathom how the design of V4 signatures came about any other way. It's going to take some time to get this right, not to mention all of the server-side examples that will need to change.

@jetheredge

I might be misinterpreting the piece you are working on, but did you see the examples on this page? http://docs.aws.amazon.com/general/latest/gr/signature-v4-examples.html

function getSignatureKey(key, dateStamp, regionName, serviceName) {
    var kDate = Crypto.HMAC(Crypto.SHA256, dateStamp, "AWS4" + key, { asBytes: true });
    var kRegion = Crypto.HMAC(Crypto.SHA256, regionName, kDate, { asBytes: true });
    var kService = Crypto.HMAC(Crypto.SHA256, serviceName, kRegion, { asBytes: true });
    var kSigning = Crypto.HMAC(Crypto.SHA256, "aws4_request", kService, { asBytes: true });

    return kSigning;
}

They use the crypto.js library as part of the signature creation: https://code.google.com/p/crypto-js/#HMAC

The library is BSD licensed, so maybe you can use parts of it?

@rnicholus
Member

@jetheredge That is part of the signing process of POST uploads. The process for REST (multipart PUT) uploads is much more complex. One concerning requirement of the signing process for uploads to the REST API involves hashing the entire request body. This will require reading the contents of each file into memory browser-side and generating a hash. The performance implications of this are substantial. I'm going to open up an issue in the S3 forums to ask them if they have any plans to allow payload hashing to be bypassed for upload PUT requests, and will report back when I know more.
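To make the performance concern concrete, hashing a single chunk browser-side amounts to something like the following sketch (using FileReader plus the standard Web Crypto API; Fine Uploader itself may end up using a different crypto implementation, and the slice size below is just an example):

// Read an entire chunk (a Blob slice) into memory and SHA-256 it.
function hashChunk(blob) {
    return new Promise(function (resolve, reject) {
        var reader = new FileReader();
        reader.onerror = reject;
        reader.onload = function () {
            // digest() returns a Promise resolving to an ArrayBuffer
            crypto.subtle.digest("SHA-256", reader.result).then(function (digest) {
                // Convert to the lowercase hex string expected by x-amz-content-sha256
                var hex = Array.prototype.map.call(new Uint8Array(digest), function (b) {
                    return ("0" + b.toString(16)).slice(-2);
                }).join("");
                resolve(hex);
            }, reject);
        };
        reader.readAsArrayBuffer(blob);
    });
}

// e.g. hashChunk(file.slice(0, 5 * 1024 * 1024)).then(function (sha256) { /* sign the part request */ });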

@rnicholus
Member

Issue/question posted in the S3 forums at https://forums.aws.amazon.com/thread.jspa?threadID=179762.

@rnicholus
Member

Received a confusing response from AWS, claiming that the documentation does not indicate the payload needs to be hashed (even though the opening paragraph of the v4 signing docs explicitly mentions that this is required).

The V4 signature docs are a bit of a mess, and there appears to be contradictory information in various areas. On top of this, it seems as if they are not consistent with their terminology throughout the docs. The rep from AWS mentioned that the payload does not have to be hashed for multipart upload PUT requests, but this is not reflected in the documentation, and the rep did not link to any documentation that backs up this assertion.

I followed up with a request for a specific link that points out that multipart upload PUT requests do not need to take into account the message body when generating the string to sign. The closest thing I can find is this signature v4 signing document, which mentions PUT requests, but also refers to "streaming" PUT requests, which is a confusing term in this context.

I'm hoping someone will shed some light on this. Until then, work on this case is blocked. Feel free to prod AWS on Twitter or via the forums. That may result in a better response than the one received so far.

@jetheredge

I've been meaning to follow up on this for a few days. I think you are definitely correct that at least the first piece of the file upload needs to be hashed, but subsequent pieces look like they can be hashed using the hash returned for the previously uploaded part, via the 'X-Amz-Content-Sha256' header. Here is a link to the line in the v4 signer in the Amazon JavaScript SDK that shows where they do this:

https://github.com/aws/aws-sdk-js/blob/master/lib/signers/v4.js#L175

The license for this library is Apache, so you should be safe in looking at the implementation.

@rnicholus
Member

The requirement you are describing is not documented anywhere, and I am definitely hoping that file bytes do not need to be hashed, due to the related complexity and performance concerns. According to this "chunked upload" document at http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html, it sounds like you can get away with only signing headers, provided one of the headers is "x-amz-content-sha256" with a value of "STREAMING-AWS4-HMAC-SHA256-PAYLOAD", even though that document refers to these uploads as "streaming" (which is confusing and probably not the correct term). I'm likely going to attempt to implement v4 signatures with the model described in the document I just linked to in mind.

@jetheredge

Yeah, I thought for sure I saw that behavior documented somewhere, but I can't find it now. Anyway, the document you linked to seems very clear that the approach you're taking is correct.

@rnicholus
Member

I did receive (somewhat of) an update from AWS today:

Hello rnicholus,

I do apologize if it appears that we have left this thread to languish--I can assure you, we are still working on getting clarification for you. I have pressed on our team for some expedience in this matter and I do apologize for the troubles you are seeing here.

Best regards,
Justin G.

@rnicholus
Member

I wonder if I can follow the model in http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html to send all requests with payloads in the multipart upload workflow (upload part and complete multipart) without hashing any payloads.

@rnicholus rnicholus added the 3 - Doing label May 15, 2015
@rnicholus
Member

After really poring over the "streaming" upload document referenced above, I'm almost convinced that this is not the signature formula to follow when using the multipart upload REST API. In fact, a "streaming" upload appears to be quite different from a multipart upload; after reading through the entire document, this becomes clear. I'm afraid I'll just have to wait until I hear back from Amazon regarding the documentation specific to multipart uploads.

@rnicholus
Member

Still waiting for a response from AWS, even though they have assured me they are working on it. It seems no one at Amazon understands how to send multipart upload requests with v4 signatures.

@rnicholus
Member

Another award-winning response from the crack staff at AWS, this time informing me that each file chunk must be hashed.

Let's go over a quick summary of the thread I opened in the AWS forums:

  • May 4: Me - "Hashing each file chunk will (substantially) negatively impact performance when uploading to S3, especially directly from the browser".
  • May 6: AWS - "You don't have to hash the file chunks. Nowhere does it say you have to do this".
  • May 6: Me - Points out multiple places in the docs where it says message body hashing is mandatory. Also asks about a streaming doc, which suggests there may be an alternative. Unsure how this relates to the multipart upload API.
  • May 14: AWS - "No one here seems to understand our own API or documentation. We're looking for someone who does, please wait".
  • June 1: AWS - "Yes, you must hash all file chunks."
@rnicholus rnicholus modified the milestone: 5.3.0, 5.4.0 Jul 6, 2015
@rnicholus
Member

Latest response from AWS circles back and claims that you do not have to take the file chunk bytes into account when signing each chunked request. At this point, I'm going to attempt to follow the file streaming doc when determining how to sign the PUT requests when using the multipart upload API. All other requests will be signed according to the v4 authenticating requests document.

I'm going to move this out of the blocked column (assuming I can actually proceed as I have just described), and schedule this for 5.4.0.

@rnicholus
Member

Work on this feature has resumed. I've successfully implemented the non-streaming v4 signing algorithm. This will be used for all non-PUT/upload requests when communicating with S3's multipart upload API. The streaming v4 signing process is a bit different, and will be the next to tackle.

Without a doubt, implementing v4 signing has been the most challenging task to date.

@rnicholus
Member

...pretty sure I will need to make a few changes to my existing code, such as sending the canonical request and the string to sign to the signature server. This way, the signature server can verify that the canonical request contains expected values. The server would also want to hash the canonical request to make sure it matches the last line in the string to sign.
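A minimal sketch of that check, assuming the signature server receives both the canonical request and the string to sign (names are illustrative):

const crypto = require("crypto");

// The last line of a v4 string-to-sign is the hex SHA-256 of the canonical request,
// so the server can recompute it and refuse to sign on a mismatch.
function canonicalRequestMatches(canonicalRequest, stringToSign) {
    const hashed = crypto.createHash("sha256").update(canonicalRequest, "utf8").digest("hex");
    const lines = stringToSign.split("\n");
    return lines[lines.length - 1] === hashed;
}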

@rnicholus rnicholus added a commit that referenced this issue Sep 4, 2015
@rnicholus rnicholus chore(merge): merge in develop 9f90f8a
@rnicholus
Member

May have to revisit the format of the content to be signed sent to the signature server to properly account for #1406 at a later date.

@rnicholus
Member

I'm currently in the process of implementing the logic required to sign multipart upload PUT/chunk requests. The only way to avoid hashing the entire chunk payload is to make use of their "streaming" v4 signature process. Unfortunately, it looks like this will prevent us from sending multiple chunks of the same file simultaneously, which means we can't support the concurrent chunking feature if the v4 signature process is used.

From the streaming doc:

...the chunk signatures are chained together; that is, signature of chunk n is a function F(chunk n, signature(chunk n-1)). The chaining ensures you send the chunks in correct order

The order of the chunks should not matter - I'm not sure why Amazon insists the chunks be sent sequentially, as this was never a requirement pre-v4 (and is the reason why Fine Uploader's unique concurrent chunking feature is possible w/ the S3 module).
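Concretely, the chaining described in that doc looks roughly like this (a sketch based on the sigv4-streaming document; helper and parameter names are illustrative):

const crypto = require("crypto");

const sha256Hex = (data) => crypto.createHash("sha256").update(data).digest("hex");
const hmacHex = (key, data) => crypto.createHmac("sha256", key).update(data).digest("hex");

// The signature of chunk n is a function of chunk n's data AND chunk n-1's
// signature, which is why the chunks cannot be signed (or sent) out of order.
function signChunk(signingKey, timestamp, scope, previousSignature, chunkData) {
    const stringToSign = [
        "AWS4-HMAC-SHA256-PAYLOAD",
        timestamp,         // ISO8601, e.g. "20150515T000000Z"
        scope,             // "<yyyymmdd>/<region>/s3/aws4_request"
        previousSignature, // the seed (request) signature for the first chunk
        sha256Hex(""),     // hash of an empty string, per the doc
        sha256Hex(chunkData)
    ].join("\n");
    return hmacHex(signingKey, stringToSign);
}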

I'm going to open up a case with AWS and see what they have to say about fixing this.

@rnicholus
Member

Sigh, sorry about this everyone. It hasn't been possible for me to concentrate on this for more than a couple hours a week or so as of late. The constant context-switching combined with the extreme technical difficulty of implementing AWS' v4 signature algorithm is making this very tough to complete in a reasonable amount of time.

Shortly into implementation of the "streaming" algorithm I was pointed to by AWS, I started to realize that this process literally requires all chunks be streamed. I take that to mean one request that remains open until all chunks are sent. It is not currently possible to "stream" data from the browser in this manner using XMLHttpRequest. I have been looking for a way to support V4 signatures using the multipart upload API, which allows each chunk to be sent in a separate request. In hindsight, the requirement that all chunks be streamed in a single request in order to avoid signing the payload explains why each chunk must be sent in order.

I'm going to have to reach out to AWS in hopes that there is some way to utilize the MPU API w/ v4 signatures without having to sign each chunk. If this is not possible, then I'll have to go back to square one with chunked uploads and evaluate the feasibility of hashing each chunk browser-side.

If anyone has a reliable contact at AWS, it would help me get to the bottom of this much quicker.

@rnicholus
Member

After looking over the issue tracker in the AWS S3 SDK, it appears that I will indeed need to hash the payload of each multipart upload chunk request. I opened yet another thread in the S3 forum asking for an explanation regarding the inconsistency in V4 requirements between chunked & non-chunked uploads, but I'm not optimistic I'll receive a good answer.

I'll have to move forward with the plan to hash each chunk browser-side. I'll probably want to do this in a web worker to avoid tying up the UI thread for long periods of time, though this will add more complexity.

@rnicholus rnicholus added a commit that referenced this issue Sep 25, 2015
@rnicholus rnicholus feat(request-signer): allow string-to-sign determination to be async
In preparation for hashing file chunks in webworkers to support V4 signatures using the multipart upload API.
#1336
e51e4f4
@arnoldad

Keep up the good work- I spent several hours today going through the same struggle. My team is waiting to buy a license when the v4 signatures are supported.

@rnicholus
Member

Thanks! It'll be slow going though. This is a major undertaking.

@ludofleury

I have to handle massive uploads over HTTP for the first time, and I found your lib and this crazy thread. I just wanted to say that your work is quite impressive; rest assured that if your product fits my needs, we will invest in a license ASAP.

@rnicholus rnicholus added a commit that referenced this issue Oct 5, 2015
@rnicholus rnicholus feat(request-signer): functional chunked uploads to S3 using v4 sigs
TODO:
- unit tests
- DRY when handling signature responses
- move hashing to webworker
- grab non-minified cryptojs files
- handle failure to hash payload
#1336
bf5ede4
@rnicholus
Member

Chunked upload requests via the S3 multipart upload API using V4 signatures are now working in the features/s3-v4-signatures branch. I still have a bunch of TODOs to clean this up, though. See the commit message of bf5ede4 for details.

@jetheredge

Congrats! I know this has been a looooong time coming. Maybe you can send some documentation over to the AWS team? :-)

@rnicholus
Member

Thanks! We're not out of the woods yet - still a few things to take care of, including doc updates and updates to as many of the server examples as I can address myself.

@rnicholus
Member

After some testing in multiple environments, it's not clear that the work and added complexity required to hash file chunks in web workers would yield a noticeable responsiveness benefit. So I plan to hash chunks on the UI thread for now. If the need becomes clearer later, I will consider moving chunk hashing into web workers in a future release.

@rnicholus rnicholus added a commit that referenced this issue Oct 7, 2015
@rnicholus rnicholus feat(request-signer): use non-minified crypto deps
TODO:
- unit tests
- DRY when handling signature responses
- handle failure to hash payload
#1336
619b66e
@rnicholus rnicholus added a commit that referenced this issue Oct 7, 2015
@rnicholus rnicholus feat(request-signer): don't try to set Host header
TODO:
- unit tests
- DRY when handling signature responses
- handle failure to hash payload
#1336
e69a937
@rnicholus rnicholus added a commit that referenced this issue Oct 8, 2015
@rnicholus rnicholus feat(request-signer): DRY when handling signature responses
This should also add support for v4 signatures to the client-side signature workflow.
TODO:
- unit tests
- handle failure to hash payload
#1336
47200bc
@rnicholus rnicholus added a commit that referenced this issue Oct 8, 2015
@rnicholus rnicholus fix(request-signer): empty signatureConstructor param sent to signatu…
…re endpoint

TODO:
- unit tests
- handle failure to hash payload
#1336
3398418
@rnicholus rnicholus added a commit that referenced this issue Oct 8, 2015
@rnicholus rnicholus feat(request-signer): handle failure to hash payload
TODO:
- unit tests
#1336
d7085ae
@rnicholus
Member

Please let me know, anyone, if you are interested in testing V4 signature support. The code is usable at this point. All that remains is:

  • update docs
  • publish new PHP server example that handles v4 signatures
  • multipart upload API unit tests w/ v4 signatures
  • client-side signing workflow support for v4 signatures
  • client-side signing v4 unit tests
  • update Java server example to handle v4 signatures
  • update node.js server example to handle v4 signatures
@rnicholus rnicholus added a commit that referenced this issue Oct 11, 2015
@rnicholus rnicholus feat(request-signer): append v4 query parameter to signature reqs
A query parameter will be included in the URI for any signature requests that are not v2.
#1336
90a14fc
@rnicholus rnicholus added a commit that referenced this issue Oct 12, 2015
@rnicholus rnicholus docs(s3): v4 signature support info
#1336
[skip ci]
594df5e
@rnicholus
Member

All S3 PHP server examples have been updated to support v4 signatures. These are currently sitting in a feature branch in the fineuploader/php-s3-server repo, but the changes will be merged into the mainline as part of the 5.4.0 release of Fine Uploader.

Until then, you can pull in the v4 signature version of the PHP example server (which also supports v2 signatures) via Composer. A sample composer.json might look like this:

{
  "minimum-stability": "alpha",
  "repositories": [
    {
      "type": "vcs",
      "url": "https://github.com/FineUploader/php-s3-server"
    }
  ],
  "require": {
    "fineuploader/php-s3-server": "dev-v4-signatures"
  }
}
@rnicholus rnicholus added a commit that referenced this issue Oct 20, 2015
@rnicholus rnicholus test(v4 chunked): simple chunking test for v4 sigs
Also removed support for PhantomJS tests/headless tests due to an issue with typed arrays in Phantom (ariya/phantomjs#11172). Firefox should be the default unit testing browser now.
#1336
6317159
@rnicholus
Member

I'll update the node.js example next, and then I'll release. Speak now if you want to test this out, as I don't plan to make any behavior changes to V4 signing once it's released.

@rnicholus rnicholus added a commit to FineUploader/server-examples that referenced this issue Oct 28, 2015
@rnicholus rnicholus feat(java): handle S3 V4 signature requests ed7de20
@rnicholus
Member

Node.js updated in FineUploader/server-examples@5fe7740. This feature is now complete, and is scheduled to go out with a few other changes in 5.4.0.

@rnicholus rnicholus added 5 - Done and removed 3 - Doing labels Oct 29, 2015
@ludofleury

👏 👏

@jetheredge

I would test this, but the project we were interested in this feature for has long since moved on and implemented things a different way. Unfortunately, I don't have an easy way to set up and test this right now.

@kaurranjeet12

Hi rnicholus,

When are you planning to release fineuploader version 5.4?

Thanks,
RJ

@rnicholus
Member

After I finish the other tasks scheduled for 5.4. I also may make another adjustment to the S3 v4 implementation.

@kaurranjeet12

Do you have a timeline? We are planning to buy a license.
Email me on ranjoo12@gmail.com

Thanks,
RJ

@rnicholus
Member

I don't usually provide specific planned release dates. Keep checking back for updates.

@e-tip
Contributor
e-tip commented Nov 3, 2015

Hi, I've used the develop version and I can confirm that, for non-chunked uploads, it works perfectly.
I'm facing issues with chunked uploads, but I guess that's because I still have to understand how it works! Thanks.

@rnicholus
Member

What issues are you having?

@e-tip
Contributor
e-tip commented Nov 3, 2015

Thanks for your interest. As I said, I need to change my PHP signer because the parameters change when I enable chunking; I get this:

object(stdClass)#1 (1) {
  ["headers"]=>
  string(136) "AWS4-HMAC-SHA256
20151103T141556Z
20151103/eu-central-1/s3/aws4_request
362cd6989ac712fe9384998e8b3860c7d43758d7a7bb53d5765a1e245fbfb133"
}

instead of what I got with non-chunked uploads... but I haven't read the Amazon docs about that yet.

@rnicholus
Member

I've described how to handle these requests in the docs, and have provided pre-release versions of PHP, Java, and NodeJS server-side examples that handle all v4 (and v2) requests. Have a look at these for more details.

@e-tip
Contributor
e-tip commented Nov 3, 2015

Thanks

@e-tip
Contributor
e-tip commented Nov 3, 2015

I've changed my signer, copying the v4 part from the example, but now Amazon is not happy with one header:

[Fine Uploader 5.4.0-5] Received response status 400 with body: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>XAmzContentSHA256Mismatch</Code><Message>The provided 'x-amz-content-sha256' header does not match what was computed.</Message><ClientComputedContentSHA256>4ea5c508a6566e76240543f8feb06fd457777be39549c4016436afda65d2330e</ClientComputedContentSHA256><S3ComputedContentSHA256>acee8d78a765fc20deeb7240c1361f15dfcf563b09141c88664f454ecf159d63</S3ComputedContentSHA256><RequestId>6A9EF29F43FB367D</RequestId><HostId>zKzyniveR3VpFm5wcJcdCB2csFVO5kHPKiO2UzPH5/CQH+YcPz1jbBWMiZdvcA4lB9sMRzwflHs=</HostId></Error>
@rnicholus
Member

@e-tip If you'd like me to assist, you'll need to provide a little more than an error message, such as information required to reproduce the issue.

@e-tip
Contributor
e-tip commented Nov 3, 2015

@rnicholus I can send you the whole project, but zip files are not allowed here.

@rnicholus
Member

Please do not send me the whole project. I'm simply looking for a set of steps and conditions with focused/relevant code fragments needed to reproduce your issue.

@e-tip
Contributor
e-tip commented Nov 3, 2015

OK, let's try.
I've set up a qq.s3.FineUploader:

this.options =  {
        s3_bucket : 'xxxxx-eu-upload',
        region : 'eu-central-1'
    }
this._fineUploader = new qq.s3.FineUploader({
            element: document.getElementById('fine-uploader-s3'),
            template: 'qq-template-s3',
            debug: true,
            dropZoneElements : [document.getElementById('fine-uploader-s3')],
            signature: {
                endpoint: "signChunk.php",
                version : 4
            },
            chunking : {
                enabled : true
            },
            objectProperties: {
                bucket: _self.options.s3_bucket,
                acl : 'public-read',
                region : _self.options.region
            },
            request: {
                accessKey : "AKIAINVUVMHTTQQXSDOQ",
                endpoint: "http://"+_self.options.s3_bucket+".s3-"+_self.options.region+".amazonaws.com"
            }/*,
            uploadSuccess: {
                endpoint: "success_upload.php"
            }*/
        });

I added a file to the uploader and the request to signChunk.php completes, the upload begins, and when it's time to finalize the first chunk I get the error I reported above.

define('S3_BUCKET', 'xxxxx-eu-upload');
define('S3_KEY',    'xxxxx');
define('S3_SECRET', 'xxxxx');
define('S3_REGION', 'eu-central-1');        // S3 region name: http://amzn.to/1FtPG6r
define('S3_ACL',    'public-read'); // File permissions: http://amzn.to/18s9Gv7

signRequest();

function signRequest() {
    header('Content-Type: application/json');
    $responseBody = file_get_contents('php://input');
    $contentAsObject = json_decode($responseBody, true);
    $jsonContent = json_encode($contentAsObject);
    $headersStr = $contentAsObject["headers"];
    if ($headersStr) {
        signRestRequest($headersStr);
    }
}

function signRestRequest($headersStr) {
    $response = array('signature' => signV4RestRequest($headersStr));
    echo json_encode($response);
}

function signV4RestRequest($stringToSign) {
    $pattern = "/.+\\n.+\\n(\\d+)\/(.+)\/s3\/.+\\n(.+)/";
    preg_match($pattern, $stringToSign, $matches);
    $dateKey = hash_hmac('sha256', $matches[1], 'AWS4' . S3_SECRET, true);
    $dateRegionKey = hash_hmac('sha256', $matches[2], $dateKey, true);
    $dateRegionServiceKey = hash_hmac('sha256', 's3', $dateRegionKey, true);
    $signingKey = hash_hmac('sha256', 'aws4_request', $dateRegionServiceKey, true);
    return hash_hmac('sha256', $stringToSign, $signingKey);
}

As you can see, it's just a very basic project, just to test whether I can get this working.

@rnicholus
Member

Which browser is this failing in? Is it succeeding in any? What type of file are you uploading?

@e-tip
Contributor
e-tip commented Nov 3, 2015

I've tested with Chrome 46, Firefox 41.0.2, and Safari 9.0.1 on a Mac. I'm getting this error with all browsers and with every file I try (I've tried a zip and an .avi file).

@rnicholus
Member

Seems to be working for me on those browsers, with large chunked files. Could be one of several issues on your end:

  • using wrong Fine Uploader JS file
  • browser extension causing issues
  • some other JS code causing issues

Is the failure happening on the first PUT request to S3, or the first POST request?

@e-tip
Contributor
e-tip commented Nov 3, 2015

The first POST request works. I get the error when the first chunk is uploaded.
These are the files I've included (I downloaded them via the GitHub download button):

    <script src="js/util.js"></script>
    <script src="js/error.js"></script>
    <script src="js/version.js"></script>
    <script src="js/features.js"></script>
    <script src="js/promise.js"></script>
    <script src="js/blob-proxy.js"></script>
    <script src="js/button.js"></script>
    <script src="js/upload-data.js"></script>
    <script src="js/uploader.basic.api.js"></script>
    <script src="js/uploader.basic.js"></script>
    <script src="js/ajax.requester.js"></script>
    <script src="js/upload.handler.js"></script>
    <script src="js/upload.handler.controller.js"></script>
    <script src="js/form.upload.handler.js"></script>
    <script src="js/xhr.upload.handler.js"></script>
    <script src="js/window.receive.message.js"></script>
    <script src="js/uploader.api.js"></script>
    <script src="js/uploader.js"></script>
    <script src="js/templating.js"></script>
    <script src="js/s3/util.js"></script>
    <script src="js/non-traditional-common/uploader.basic.api.js"></script>
    <script src="js/s3/uploader.basic.js"></script>
    <script src="js/s3/request-signer.js"></script>
    <script src="js/uploadsuccess.ajax.requester.js"></script>
    <script src="js/s3/multipart.initiate.ajax.requester.js"></script>
    <script src="js/s3/multipart.complete.ajax.requester.js"></script>
    <script src="js/s3/multipart.abort.ajax.requester.js"></script>
    <script type="text/javascript" src="js/s3/s3.xhr.upload.handler.js"></script>
    <script type="text/javascript" src="js/s3/s3.form.upload.handler.js"></script>
    <script type="text/javascript" src="js/s3/uploader.js"></script>
    <script type="text/javascript" src="js/paste.js"></script>
    <script type="text/javascript" src="js/dnd.js"></script>
    <script type="text/javascript" src="js/deletefile.ajax.requester.js"></script>
    <script type="text/javascript" src="js/image-support/megapix-image.js"></script>
    <script type="text/javascript" src="js/image-support/image.js"></script>
    <script type="text/javascript" src="js/image-support/exif.js"></script>
    <script type="text/javascript" src="js/identify.js"></script>
    <script type="text/javascript" src="js/image-support/validation.image.js"></script>
    <script type="text/javascript" src="js/session.js"></script>
    <script type="text/javascript" src="js/session.ajax.requester.js"></script>
    <script type="text/javascript" src="js/form-support.js"></script>
    <script type="text/javascript" src="js/image-support/scaler.js"></script>
    <script type="text/javascript" src="js/third-party/ExifRestorer.js"></script>
    <script type="text/javascript" src="js/total-progress.js"></script>
    <script type="text/javascript" src="js/ui.handler.events.js"></script>
    <script type="text/javascript" src="js/ui.handler.click.filebuttons.js"></script>
    <script type="text/javascript" src="js/ui.handler.click.filename.js"></script>
    <script type="text/javascript" src="js/ui.handler.focusin.filenameinput.js"></script>
    <script type="text/javascript" src="js/ui.handler.focus.filenameinput.js"></script>
    <script type="text/javascript" src="js/ui.handler.edit.filename.js"></script>
    <script type="text/javascript" src="js/third-party/crypto-js/core.js"></script>
    <script type="text/javascript" src="js/third-party/crypto-js/enc-base64.js"></script>
    <script type="text/javascript" src="js/third-party/crypto-js/hmac.js"></script>
    <script type="text/javascript" src="js/third-party/crypto-js/sha1.js"></script>
    <script type="text/javascript" src="js/third-party/crypto-js/sha256.js"></script>
@rnicholus
Member

Whoa, why are you doing all of this? You should only be importing one of the built files. Run grunt clean build, then use one of the files created in the _build dir.

@e-tip
Contributor
e-tip commented Nov 3, 2015

I had issues installing grunt on my machine. I'm trying now.

@rnicholus
Member

Yes, grunt is garbage and I wish I hadn't used it in the first place. I may be able to remove it with some of the changes scheduled for 6.0.

You can also access this build at http://releases.fineuploader.com/develop/5.4.0-5/s3.fine-uploader-5.4.0-5.zip

@e-tip
Contributor
e-tip commented Nov 3, 2015

As you imagined, using the compiled file everything works.
So I can confirm that v4 support works perfectly. Good work!

@rnicholus
Member

Good to hear. There may be an update to chunked support for V4 before I release. Currently, Fine Uploader S3 sends the hashed header data to your signature server. This makes verifying some aspects of the request server-side impossible. I'm going to see if I can simply send the canonical request w/ the hashed payload to the signature server instead. This will require a bit more work by the signature server, but it will allow signature servers to better verify the request before signing it.

@e-tip
Contributor
e-tip commented Nov 3, 2015

I'm following this thread for future updates and I'll run tests if you need. Thanks for your great work.

@rnicholus rnicholus added a commit that referenced this issue Nov 4, 2015
@rnicholus rnicholus feat(request-signer): send raw canonical request to sig server
This allows the signature server to properly inspect the request before signing it.
#1336
0bc8427
@rnicholus rnicholus added a commit to FineUploader/php-s3-server that referenced this issue Nov 4, 2015
@rnicholus rnicholus Update endpoint.php
feat(v4 signatures): handle raw canonical request for v4 REST signature requests
FineUploader/fine-uploader#1336
94da255
@rnicholus
Member

I made some changes to Fine Uploader S3, reflected in 5.4.0-6. Instead of sending the hashed canonical request to the signature server, I'm now sending the "raw" canonical request. This will allow signature servers to more definitively validate the request before signing it. Docs have been updated as well, along w/ Java, NodeJS, and PHP examples.
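For signature-server authors, handling the raw canonical request boils down to inspecting it, hashing it into the string to sign, and then applying the usual key derivation; a rough sketch (timestamp/scope handling is simplified and the names are illustrative):

const crypto = require("crypto");

// Build the v4 string-to-sign from a raw canonical request.
// timestamp is the ISO8601 x-amz-date (e.g. "20151104T120000Z") and
// scope is "<yyyymmdd>/<region>/s3/aws4_request".
function buildStringToSign(canonicalRequest, timestamp, scope) {
    const hashedCanonicalRequest = crypto
        .createHash("sha256")
        .update(canonicalRequest, "utf8")
        .digest("hex");
    return ["AWS4-HMAC-SHA256", timestamp, scope, hashedCanonicalRequest].join("\n");
}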

@e-tip
Contributor
e-tip commented Nov 5, 2015

Thanks, I'm updating it and I'll let you know. Is it possible to contact you directly to ask about an issue I'm facing with signatures and a custom file key?

@e-tip
Contributor
e-tip commented Nov 5, 2015

Hi,
as previously said, I'm facing some issues when I use a custom key with a chunked upload.
This is the function I'm using to generate the key:

key : function(fileId){
                    var filename = "test/"+_self._fineUploader.getFile(fileId).name;
                    return filename;
                }

If the upload is not chunked everything is OK, but with a chunked upload I get a signature mismatch error.
If I change test/ to test_ everything works.
As for the PHP signer, I took it from the repo and used it as is.
[EDIT] After further investigation I can see that the AWS JavaScript library doesn't escape / but your library does.
This is the path used by the AWS library:
http://bucketname-eu-upload.s3-eu-central-1.amazonaws.com/test/Archivio.zip?uploads
and this is the path used by Fine Uploader:
http://bucketname-eu-upload.s3-eu-central-1.amazonaws.com/test%2FArchivio.zip?uploads

[EDIT2]
I've modified this:

                urlSafe: function(id) {
                    //return encodeURIComponent(handler.getThirdPartyFileId(id));
                    return handler.getThirdPartyFileId(id);
                }

and with that change it works.

@rnicholus
Member

@e-tip It looks like this is not necessarily a new issue, but your change is likely to cause other issues. I'll look into ensuring that forward slashes in key names are not encoded.

@e-tip
Contributor
e-tip commented Nov 5, 2015

@rnicholus Obviously my knowledge of the library is not deep enough to evaluate every consequence my change might cause (and I'm not crazy enough to think I haven't broken anything else :P). In any case, I'm working on a project that needs to be completed as soon as possible, and chunked uploads to S3 are one of its key features, so I'm going to use this change until you fix it. Thank you very much.

@rnicholus
Member

Chunked uploads to S3 have worked in Fine Uploader for years now. This feature only adds support for v4 signatures.

I've added a patch to Fine Uploader to prevent forward slashes from being encoded in key names. You can find this update in 5.4.0-7.

@rnicholus
Member

Note that there was also a regression introduced into the PHP s3 server example yesterday (on the v4 sigs branch in that repo). I've also fixed that - it only applied to v2 sigs.

@e-tip
Contributor
e-tip commented Nov 6, 2015

@rnicholus Me again... sorry for bothering you, but I've found another little issue in the v4 signature calculation: if I use ( or ) in the filename, they don't get escaped.
For example, if the name of the file is "promo company (2015).mkv", the request URL is
http://mybucket.s3-eu-central-1.amazonaws.com/0_06-11-2015-13_31_02/promo%20company%20(2015).mkv?uploads
and in this case I get a SignatureDoesNotMatch error.
Using the AWS client library, the upload URL would be
https://mybucket.s3.eu-central-1.amazonaws.com/0_06-11-2015-13_31_02/promo%20company%20%282015%29.mkv?uploads
[EDIT] It seems that even spaces create problems. I renamed the file to "promo company 2015" and I get the same issue. I'm considering using the UUID instead of the filename and renaming the file back to its original name when I download it from S3, but someone else might not be so lucky.

@rnicholus
Member

I'll take a look. The ( and ) characters don't need to be escaped. I'll look into this further though.

@rnicholus
Member

The complexity and silliness of the logic AWS engineers employ when designing their systems never ceases to frustrate me. It is perfectly valid for S3 keys to contain ( and ). But the "canonical URI" portion of the canonical request used to generate a signature, which in part contains the object key, must contain escaped versions of ( and ). I've made adjustments to account for spaces and parentheses in 5.4.0-8.
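For anyone rolling their own signing code, the canonical-URI encoding amounts to something like this (a sketch of the general RFC 3986-style per-segment encoding; this is not necessarily the exact code shipped in 5.4.0-8):

// Encode an object key for the v4 canonical URI: each path segment is
// percent-encoded, "/" separators are preserved, and the characters that
// encodeURIComponent leaves alone but the canonical form expects encoded
// (such as parentheses) are handled manually.
function encodeKeyForCanonicalUri(key) {
    return key.split("/").map(function (segment) {
        return encodeURIComponent(segment)
            .replace(/\(/g, "%28")
            .replace(/\)/g, "%29")
            .replace(/!/g, "%21")
            .replace(/\*/g, "%2A")
            .replace(/'/g, "%27");
    }).join("/");
}

// encodeKeyForCanonicalUri("test/promo company (2015).mkv")
// => "test/promo%20company%20%282015%29.mkv"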

@e-tip You're a pretty effective tester!

@rnicholus
Member

There are a couple of things remaining before the release of 5.4:

  • Verification of work on #1258.
  • Completion of work to address #1387

I'll also need to type up a blog post, as per usual. If #1387 can't be completed within a few days, I'll push it to a future release. I'm aiming to release by 18 Nov. I am hesitant to release after the 18th since that is close to the U.S. Thanksgiving holiday, followed by a couple weeks of vacation for me at the start of December.

@e-tip
Contributor
e-tip commented Nov 11, 2015

@rnicholus Just to add some difficulty to my project, I've tested this with v4 signatures and CloudFront in front of my bucket. As I've seen in previous comments this is supported, and in fact I can get the upload to work, but I think there's an issue (obviously due to Amazon's engineers having overcomplicated the signature process) with this setup if the file is chunked. As previously said, if the upload is not chunked it works; if the upload is chunked I get a signature error.
It seems that when calculating the canonical request S3 uses its own host instead of the CloudFront host, so it calculates a different signature.
My CF host is xxxxxxxxxx.cloudfront.net, but in the S3 error message I can see that the canonical request contains:

POST
/0_11-11-2015-11_48_10/840544-12.jpg
uploads=
host:xxxxxxxxxxx-us-1-upload.s3.amazonaws.com
x-amz-acl:public-read
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20151111T135754Z
x-amz-meta-qqfilename:840544-12.jpg

host;x-amz-acl;x-amz-content-sha256;x-amz-date;x-amz-meta-qqfilename
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855

and looking at it you can clearly see the s3.amazonaws.com host

@rnicholus
Member

That sounds like a configuration issue with your CF distro. The best place to inquire further about that is the AWS forums. Unfortunately, the AWS support people manning those forums know very little about their products. Hopefully a user knowledgeable in CF will provide some guidance.

@rnicholus
Member

Just a thought - I believe you will have to set up an Origin Access Identity in your CF distro. I think this will result in CF signing its request before relaying it on to S3. Otherwise I can see how the signature calculated by S3 would be wrong, since the Host header changes due to CF's involvement in the process. CloudFront may not support uploads to S3 using a version 4 signature in combination with an OAI though, due to lack of support for POST requests in this scenario.

Again, I'm not the correct person to ask about this stuff as my knowledge of CF is very limited.

@e-tip
Contributor
e-tip commented Nov 11, 2015

@rnicholus Sorry for posting this here, but I thought it was the correct place, because changing this:

if (version === 4) {
            v4.getEncodedHashedPayload(requestInfo.content).then(function(hashedContent) {
                requestInfo.headers["x-amz-content-sha256"] = hashedContent;
                requestInfo.headers.Host = /(?:http|https):\/\/(.+)(?:\/.+)?/.exec(options.endpointStore.get(id))[1];
                requestInfo.headers["x-amz-date"] = qq.s3.util.getV4PolicyDate(now);
                requestInfo.hashedContent = hashedContent;

                promise.success(generateStringToSign(requestInfo));
            });
        }

to this

if (version === 4) {
            v4.getEncodedHashedPayload(requestInfo.content).then(function(hashedContent) {
                requestInfo.headers["x-amz-content-sha256"] = hashedContent;
                //requestInfo.headers.Host = /(?:http|https):\/\/(.+)(?:\/.+)?/.exec(options.endpointStore.get(id))[1];
                requestInfo.headers.Host = "xxxxxxxxxxx-us-1-upload.s3.amazonaws.com";
                requestInfo.headers["x-amz-date"] = qq.s3.util.getV4PolicyDate(now);
                requestInfo.hashedContent = hashedContent;

                promise.success(generateStringToSign(requestInfo));
            });
        }

makes the uploader work again... but I agree with you that this makes no sense.

@rnicholus
Member

@e-tip I'm a bit confused here with all of the "xxx"s

  1. What is the URL of your S3 bucket?
  2. What is the URL of your CF distro?
  3. What is the value of the host header in the original version of the code block above?
  4. What is the value of the host header after your changes?
@e-tip
Contributor
e-tip commented Nov 11, 2015

Well, wait... I don't know what I changed, but now it works...

@rnicholus
Member

@e-tip There isn't anything I can do to help further without answers to the above questions. If you backtrack a bit, you will be able to uncover the necessary info.

@e-tip
Contributor
e-tip commented Nov 11, 2015

You are right.

  1. The URL of my S3 bucket is studio-us-1-upload.s3.amazonaws.com
  2. The URL of my CF distro is d1wvkk8w7itn1i.cloudfront.net
    Do you mean the host header in the request to the signer, or the host header of the upload request to CF?
@rnicholus
Member

It sounds like, with your current config, you need to specify the bucket as your host header instead of the actual host of the CF distro. I would expect this to only be necessary if OAI is not setup on your CF distro. Again, the best place to inquire further is the AWS forums. I would hope that the request can be sent through CF without any special adjustments.

@rnicholus
Member

All tests are passing on all browsers and we're ready for the 5.4.0 release. I expect to push this out no later than mid-next-week.

@rnicholus
Member

closes #1495

@rnicholus
Member

Just noticed that uploads via a CDN are broken when using V4 multipart uploads. I'll have to add a new objectProperties.host option that must be filled in with the hostname of the S3 bucket when sending V4-signed uploads through a CDN. This will look very much like the objectProperties.bucket option added for initial CDN support. Work on this is in progress now.
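Once that option is in place, a CDN-fronted v4 setup would presumably look something like this (a sketch; the hostnames, access key, and endpoints below are placeholders, and the exact option shape may differ slightly in the released docs):

var uploader = new qq.s3.FineUploader({
    element: document.getElementById("fine-uploader-s3"),
    request: {
        // Upload traffic goes to the CDN distribution...
        endpoint: "https://d1234567890.cloudfront.net",
        accessKey: "YOUR_ACCESS_KEY"
    },
    objectProperties: {
        // ...but the v4 signature must be computed against the bucket's own host
        host: "mybucket.s3.amazonaws.com",
        bucket: "mybucket",
        region: "eu-central-1"
    },
    signature: {
        endpoint: "/s3/signature",
        version: 4
    }
});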

@rnicholus
Member

CDN support is now fixed for V4-signed uploads.

@rnicholus rnicholus added a commit that closed this issue Nov 16, 2015
@rnicholus rnicholus chore(merge): Merge branch 'feature/s3-v4-signatures' into develop
# Conflicts:
#	README.md
#	client/js/version.js
#	package.json
closes #1336
0894907
@rnicholus rnicholus closed this in 0894907 Nov 16, 2015