
AWS4-HMAC-SHA256 Support #402

Closed
felixbuenemann opened this issue Oct 23, 2014 · 79 comments

@felixbuenemann

I'm trying to connect to a bucket in the new eu-central-1 region (Frankfurt), but it seems it uses a newer authentication scheme that isn't supported by 1.5.0-rc1:

Please wait, attempting to list bucket: s3://mybucket
WARNING: Redirected to: mybucket.s3.eu-central-1.amazonaws.com
ERROR: Test failed: 400 (InvalidRequest):
The authorization mechanism you have provided is not supported.
Please use AWS4-HMAC-SHA256.

Also note that the endpoint is named s3.eu-central-1.amazonaws.com (dot after s3 instead of dash).

@krzaczek

+1

@wweich

wweich commented Oct 24, 2014

+1
The AWS docs state that every new region launched after Jan 30, 2014 will only support AWS Signature V4: http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html

@felixbuenemann
Author

Yeah, in the meantime you might be able to use awscli instead if you have to work with buckets in the new region.

@afirel

afirel commented Oct 27, 2014

+1 for buckets in Frankfurt

2 similar comments
@DanteG41

+1 for buckets in Frankfurt

@atomyuk

atomyuk commented Oct 30, 2014

+1 for buckets in Frankfurt

@mludvig
Contributor

mludvig commented Oct 30, 2014

I had a look and implementing the support for AWS4 signing method is not exactly straightforward.

I started some prep work at https://github.com/mludvig/s3cmd - extracted the existing v2 signing and prepared the landscape for v4 signing. Now only need someone to finish the v4 stuff. If you wait for me it may take a while :-(
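For anyone picking up the v4 work: the heart of it is the HMAC-SHA256 key-derivation chain described in the AWS Signature V4 docs. A minimal stdlib-only sketch (function names are mine, not from s3cmd's code):

```python
import hashlib
import hmac


def aws4_signing_key(secret_key, date_stamp, region, service="s3"):
    """Derive the AWS Signature V4 signing key.

    date_stamp is YYYYMMDD; each HMAC-SHA256 step keys the next one:
    "AWS4"+secret -> date -> region -> service -> "aws4_request".
    """
    def sign(key, msg):
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")


def aws4_signature(signing_key, string_to_sign):
    """Sign the canonical "string to sign" with the derived key (hex)."""
    return hmac.new(signing_key, string_to_sign.encode("utf-8"),
                    hashlib.sha256).hexdigest()
```

The remaining (and fiddlier) part is building the canonical request and string-to-sign that feed into `aws4_signature`; the AWS test suite linked later in this thread covers that.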

@micheleorsi

+1

@skynet

skynet commented Nov 2, 2014

Well, too bad - what's the big deal about s3cmd when one can use aws-cli, anyways?

@smu

smu commented Nov 3, 2014

+1

@Martijn02

+1 for me too please

@koenpunt

koenpunt commented Nov 4, 2014

I join @skynet, just use the aws cli:

aws s3 ls s3://my-bucket-name

@wweich

wweich commented Nov 4, 2014

For listing content, OK.
But what if I want to know the size of a bucket? s3cmd du s3://my-bucket-name
If I upload a file from a script with aws s3api put-object and the upload fails, it does not retry the way s3cmd does. (s3api with put-object is necessary for server-side encryption.)
I did not find a munin plugin for S3 that uses aws-cli, only ones using s3cmd.

@koenpunt

koenpunt commented Nov 4, 2014

Take a look at: http://munin-monitoring.org/browser/munin-contrib/plugins/s3/s3_items

This uses a perl script to communicate with s3.

@koenpunt

koenpunt commented Nov 4, 2014

And an alternative to du is:

aws s3 ls s3://my-bucket-name --recursive | grep -v -E "(Bucket: |Prefix: |LastWriteTime|^$|--)" | awk 'BEGIN {total=0}{total+=$3}END{print total/1024/1024" MB"}'

Which you can easily put in an alias.

(taken from: http://stackoverflow.com/a/21372023/189431)
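A Python equivalent of that pipeline: sum the Size fields across S3 listing pages. The page layout below mirrors a ListObjects response; actually fetching the pages (and paginating past 1000 keys) is left out, so this is just a sketch of the summing step:

```python
def bucket_size_mb(pages):
    """Sum object sizes (bytes) across S3 listing pages, return MB.

    Each page is expected to look like a ListObjects response:
    {"Contents": [{"Key": ..., "Size": <bytes>}, ...]}
    A page with no "Contents" key (empty bucket/prefix) contributes 0.
    """
    total = sum(obj["Size"]
                for page in pages
                for obj in page.get("Contents", []))
    return total / 1024.0 / 1024.0
```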

@mdomsch
Contributor

mdomsch commented Nov 4, 2014

With all the S3 regions switching to AWS4 in January, s3cmd will have to
grow this capability - Frankfurt was just the first and it caught us by
surprise. We'll have to push out 1.5.0-rc2 and final everywhere real soon
now - the ancient versions in the distros, embedded in beagleboard Debian
images, etc. will have to all get updated too. Newer s3cmd would be less
painful for users than switching to aws-cli (though there will be an
upgrade needed to move to either).

That's a kind of bitrot too. The problem is, we have very few s3cmd
maintainers (exactly one right now), which isn't enough to do justice to
the large community of s3cmd users. Michal chimed in above and started the
work to separate out the AWS2 signature work to make it easy to add in AWS4
signatures, but that's only a start. I'd love to review a well-written
patch series from another contributor (new or returning) to add AWS4
support. In the next few weeks though, if you're waiting on me to have the
time to do it, it could be a long wait.


@mdomsch
Contributor

mdomsch commented Nov 4, 2014

http://docs.aws.amazon.com/general/latest/gr/signature-v4-test-suite.html
would be helpful for anyone writing the AWS4 support.


@mdomsch
Contributor

mdomsch commented Nov 4, 2014

http://docs.aws.amazon.com/general/latest/gr/signature-v4-examples.html#signature-v4-examples-python


@wweich

wweich commented Nov 4, 2014

@koenpunt thanks for the tips.
Unfortunately the munin plugin you suggested uses s3curl, which doesn't support Signature V4 either.

@madeinhamburg

+1

1 similar comment
@ghost

ghost commented Nov 8, 2014

+1

@wachtelbauer

Hello Sirs.
Try this. No Python no libraries, just CURL in an ubuntu instance.
https://github.com/wachtelbauer/linux-shell-scripts/blob/master/S3-AWS4-Upload.sh
Got it working yesterday.
I still have to add retrieval of the MD5 hash for upload verification.
Hope this helps someone.
Regards
Friedhelm Budnick

@nomadicj

Yup. +1. :/

@scottemackenzie

+1 - Matt, do you know when you guys will release rc2 with v4 support?

@mdomsch
Contributor

mdomsch commented Nov 12, 2014

Someone needs to actually write it...

One question I have in the v4 support - how can we know the region a bucket
is in, which must be included in the v4 signature? One can get it by
calling info(), but you need to know the region to sign the info() request.
:-(


@nomadicj

Could just do the protocol dance like SSL does. Start at the most secure auth and work down to the least.


@wweich

wweich commented Nov 12, 2014

In aws-cli you have to set the region in the config file or provide it via a command-line parameter.

@scottemackenzie

AWS has provided me with this response to the question provided by Matt, "how can we know the region a bucket is in, which must be included in the v4 signature?" Does this help?

============ AWS Response ===================
Based on our document, at this time, existing AWS regions continue to support the previous protocol, Signature Version 2. Any new regions after January 30, 2014 will support only Signature Version 4. So AWS Beijing region and Frankfurt region only support Version 4. Since all regions support Version 4, it is better to use it for all regions.

To determine location of a bucket, you can use API call “GET Bucket location”. Syntax is:

GET /?location HTTP/1.1
Host: BucketName.s3.amazonaws.com
Date: date
Authorization: authorization string

For your code part, if you use the AWS SDKs to send your requests, you don't need to make changes since the SDK clients authenticate your requests by using access keys that you provide. In regions that support both signature versions, you can request AWS SDKs to use specific signature version.

If you are implementing the AWS Signature Version 4 algorithm in your custom client, you can express authentication information by using either “HTTP Authorization header” or “Query string parameters”.

For more details, please refer to the links below:
http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETlocation.html
http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html
http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html
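One wrinkle with GET Bucket location that a client has to handle: per the API reference, the LocationConstraint comes back empty for buckets in the classic us-east-1 region and as the legacy string "EU" for eu-west-1. A sketch of the normalization step (function name is mine):

```python
def normalize_bucket_location(location_constraint):
    """Map a GET ?location LocationConstraint to a usable region name.

    The API returns an empty value for buckets in US Standard
    (us-east-1) and the legacy alias "EU" for eu-west-1; every
    other region is returned as its plain name.
    """
    if not location_constraint:        # None or "" -> US Standard
        return "us-east-1"
    if location_constraint == "EU":    # legacy alias
        return "eu-west-1"
    return location_constraint
```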

@nomadicj

Unless we are going to use some kind of lookup, which will inevitably go stale real quick, we need to dynamically determine what to use on the fly.


@mdomsch
Contributor

mdomsch commented Nov 12, 2014

The problem is, the authorization string noted here, in version 4, contains
the region that you're contacting. Chicken-and-egg.

I hope one can issue the Get Bucket Location call to any region (e.g.
s3.amazonaws.com) and get back the right location for the bucket,
regardless of where it actually is, and then use that returned location from
then on. It winds up being one extra call per bucket being worked with on
any s3cmd invocation.


@boeboe

boeboe commented Nov 19, 2014

I was using the wrong region. Apparently Frankfurt is eu-central-1 instead of eu-west-1.

@felixbuenemann
Author

@vamitrou I've done some limited testing with ls and sync against a bucket in eu-central-1 and so far it seems to work fine.

@kwo

kwo commented Dec 2, 2014

+1 for buckets in Frankfurt

@koenpunt

koenpunt commented Dec 2, 2014

I'm sure @kwo is rooting for the support in s3cmd. Also, no need to repeat comments; that just pollutes the thread.

@mdomsch
Contributor

mdomsch commented Dec 15, 2014

OK everyone. With huge thanks to Vasileios Mitrousis (vamitrou) who wrote
the V4 signature code, I've now merged this into upstream master branch.
Give it a go and report success/failures.

Thanks,
Matt


@mdomsch
Contributor

mdomsch commented Dec 15, 2014

The work is merged to upstream master branch. Closing. Please open a new bug on any failures you encounter.

@mdomsch mdomsch closed this as completed Dec 15, 2014
@mdomsch
Contributor

mdomsch commented Dec 15, 2014

We were getting a bunch of location redirects (which are handled, but which slow down requests). Using --region wasn't enough to avoid the redirects because cfg.host_base and cfg.host_bucket were still defaulting from the .cfg file. I've pushed a couple patches to github.com/mdomsch/s3cmd bug/region-endpoints branch so that we can avoid redirects using --region, or if we do get a redirect, we get only one. Please give that a try and if good, I'll merge.
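The endpoint choice is part of why redirects happen: as noted at the top of the thread, the newer regions use a dotted hostname (s3.eu-central-1.amazonaws.com) while us-east-1 keeps the classic endpoint (and older regions also answer on a dashed s3-<region> alias). A sketch of deriving the host from a region; the function name is mine, not s3cmd's actual config handling:

```python
def s3_endpoint_for_region(region):
    """Build the region-specific S3 endpoint hostname.

    us-east-1 (and an unset region) keeps the classic endpoint;
    newer regions such as eu-central-1 use the dotted
    "s3.<region>" form. Older regions also historically answered
    on a dashed "s3-<region>" alias, not generated here.
    """
    if region in (None, "", "US", "us-east-1"):
        return "s3.amazonaws.com"
    return "s3.%s.amazonaws.com" % region
```

Pointing requests at the right host up front is what avoids the per-request redirect round-trip described above.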

@dsjoerg

dsjoerg commented Jan 26, 2015

@mdomsch you wrote "With all the S3 regions switching to AWS4 in January, s3cmd will have to
grow this capability" — do you have a reference for that? I have been unable to find any official docs that claim that S3 regions are switching to AWS4.

Thanks in advance!

@mdomsch
Contributor

mdomsch commented Jan 26, 2015

http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html

"Amazon S3 supports Signature Version 4, a protocol for authenticating
inbound API requests to AWS services, in all AWS regions. At this time,
existing AWS regions continue to support the previous protocol, Signature
Version 2. Any new regions after January 30, 2014 will support only
Signature Version 4 and therefore all requests to those regions must be
made with Signature Version 4."

This includes at least Frankfurt (eu-central-1).


@dsjoerg

dsjoerg commented Jan 27, 2015

Thanks @mdomsch, feel much better now. If I'm reading this right, no regions are switching per se, and regions that currently support AWS Sig V2 will continue to do so. (Note they are referring to last January not this upcoming January).

For my own very limited purposes this is great news. But I certainly understand why software such as s3cmd (thank you it's awesome!) must start supporting AWS Sig V4 now.

@Ajaxy

Ajaxy commented Jul 15, 2015

Has it been finally released? I'm facing the same error.

@felixbuenemann
Author

@Ajaxy According to the merge date, this should be included in v1.5.0 upwards.

@Ajaxy

Ajaxy commented Jul 23, 2015

Seems that it's missing from the apt-get package.

@felixbuenemann
Author

The chance that you get anything recent using apt-get is rather slim, unless you use a custom source.

@mdomsch
Contributor

mdomsch commented Jul 23, 2015

It's in very new Ubuntu. Otherwise, install it from pip or from github.


@JensRantil

@Ajaxy @felixbuenemann I can confirm this patch did not make it into 1.5.0-alpha1, as I experienced The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. with that version. Running pip install --upgrade to 1.6.0 fixed it for me.

@mdomsch In the future it would help a lot if you could make sure an issue refers to a Git commit (or pull request) before closing it. It took me 30 minutes of detective work to figure this out. Simply looking up which tags contained a commit would have been so much easier... ;)

@felixbuenemann
Author

@JensRantil Well, I meant v1.5.0 (released in January), not v1.5.0-alpha1 (released in 2013).

@ustun

ustun commented Mar 29, 2016

Still facing this error with v1.6.1

@felixbuenemann
Author

You probably forgot to specify the region, it's mandatory for AWS4 auth.

@ngtuna

ngtuna commented Oct 5, 2016

You probably forgot to specify the region, it's mandatory for AWS4 auth.

How to? I specified the region but am still facing the error:

$  aws --region us-east-1 s3 cp <src> s3://<dest>

A client error (InvalidRequest) occurred when calling the CreateMultipartUpload operation: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.

@fviard
Contributor

fviard commented Oct 5, 2016

@ngtuna The "aws s3" client is a different tool; this project is s3cmd, which you run with commands like:
$ s3cmd cp ....

@ngtuna

ngtuna commented Oct 5, 2016

@fviard ah, my mistake. Also I just figured out I installed the wrong version of the aws cli.

@harryghgim

harryghgim commented Feb 18, 2020

I solved this issue by simply adding my bucket region in my settings.py file. I used Django and boto3. For example,
AWS_S3_REGION_NAME = "ap-northeast-2"
should work.

@ponthos

ponthos commented Dec 29, 2021

+1
