Gateway always returns 403 #66

Closed
baldwinlouie opened this Issue Mar 28, 2013 · 10 comments


@baldwinlouie

I am trying to access my cluster using s3cmd, s3curl and DragonDisk. I always get a 403 when I try to do an s3cmd ls or some other operation. I've created users in the manager using the create-user command. I also made sure that the endpoint table has the Gateway URL in it.
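For reference, creating a user on the manager console looks roughly like this (the user id and password are placeholders; the command prints an access-key-id and secret-access-key):

create-user my-user my-password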

When I try doing s3cmd ls, I get:
server:~/docomo/s3-curl$ s3cmd ls
ERROR: S3 error: 403 (Forbidden):

I followed the s3cmd documentation and put the Gateway url and port as the proxy information in .s3cfg
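The proxy settings in .s3cfg look roughly like this (the host and port are placeholders for the gateway address):

proxy_host = leofs-gateway.example.com
proxy_port = 8080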

I've tried with users who have role_id=9 and role_id=1.

@essen
Contributor
essen commented Mar 28, 2013

Can you try:

add-bucket test-bucket 05236

Then use user 05236 with 4c2affb1eb9b58ca89fb key? You don't need to create it, it's always defined.

Note that bucket names can't have underscores, that might be your issue.

If that's still not working, better wait for yosuke. :)

@baldwinlouie

Just tried it

add-bucket test-bucket 05236
OK

get-buckets
bucket      | owner       | created at
------------+-------------+---------------------------
backup      | admin       | 2013-03-27 05:52:00 +0000
bucket      | _test_leofs | 2013-03-28 17:11:33 +0000
mine        | admin       | 2013-03-27 23:14:28 +0000
test        | admin       | 2013-03-27 22:14:08 +0000
test-bucket | _test_leofs | 2013-03-28 18:12:12 +0000

Then ran s3cmd with debug: https://gist.github.com/baldwinlouie/5265549

@essen
Contributor
essen commented Mar 28, 2013

Looks like you're hitting Amazon and not your own server?

DEBUG: Sending request method_string='GET', uri='http://s3.amazonaws.com/test-bucket/?delimiter=/'
@baldwinlouie

Yep. However, I am also using proxy_host and proxy_port, which point to my own gateway server. Given that, shouldn't the Gateway/Manager look at the endpoint table and resolve s3.amazonaws.com to my setup?

I've tried to set host_base to my Gateway as well. That also returned a 403.

I've also set the endpoint of the Gateway in the manager using set-endpoint.
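For example, on the manager console (the host name below is a placeholder for the gateway's actual endpoint):

set-endpoint leofs-gateway.example.com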

I've tried DragonDisk also, setting the URL to my Gateway URL. Same error.


@essen
Contributor
essen commented Mar 28, 2013

My knowledge ends here, please wait for yosuke for further help. :)

@yosukehara
Member

It seems like you need to edit /etc/hosts in the client(s).

Reference:
Quick Start -1 All in one for Application Development
Amazon S3 API and Interface
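To illustrate the /etc/hosts change, an entry on the client machine would look like this (the IP is a placeholder for the gateway's address):

<gateway-ip> s3.amazonaws.com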

@baldwinlouie

Hi. I have LeoFS set up on multiple machines: 1 gateway, 2 managers, 3 storage nodes.

Do I set /etc/hosts on my testing machine, which is not part of the LeoFS setup, or on all the machines that LeoFS is set up on? Can you give me a real use-case example?

Thank you


@yosukehara
Member

The machine whose /etc/hosts needs editing is your client, e.g. the machine where DragonDisk is installed.
Also, regarding a real use case, I'll share it today. Please wait.

@baldwinlouie

Thank you. Also, is there documentation on the traditional REST API? I might want to try that to see if it will work for me.

@yosukehara
Member

The REST API documentation has not been published yet. I'll share the real use case and the REST API in the LeoFS Manual. Please wait.

@yosukehara
Member

Let me share a LeoFS use case that does not use a load balancer.

Next, I'm going to write the REST API material.

@yosukehara
Member

Let me share the REST API documentation as follows:

Also, I have found a REST API bug, which I will fix in the next version (0.14.1).

@baldwinlouie

Thank you Yosuke for the diagrams and the REST API documentation. With the diagrams, I successfully performed CRUD operations against the whole cluster, using S3Curl and a PHP library.

I was still unsuccessful using S3cmd. The only difference I see between S3cmd and S3curl is that S3cmd sets the x-amz-date header, while S3curl and the PHP library only set the "Date" header. Even if I try to explicitly set the Date header, I still get a 403. I'm sure I am doing something wrong.
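For context on why that header matters: with AWS Signature V2 (what these tools use), the Date position in the string-to-sign is left empty when an x-amz-date header is sent, and x-amz-date is signed as an amz header instead. Below is a minimal sketch of the two signing styles; this is not s3cmd's actual code, and the keys are just the default test credentials mentioned in this thread.

import base64
import hmac
from hashlib import sha1
from email.utils import formatdate

ACCESS_KEY = "05236"      # default LeoFS test access key from this thread
SECRET_KEY = "802562235"  # its secret access key

def sign_v2(verb, resource, date="", amz_headers=None):
    # AWS Signature V2 string-to-sign:
    #   verb \n content-md5 \n content-type \n date \n canonicalized-amz-headers + canonicalized-resource
    amz_headers = amz_headers or {}
    canonical_amz = "".join("%s:%s\n" % (k.lower(), v)
                            for k, v in sorted(amz_headers.items()))
    string_to_sign = "\n".join([verb, "", "", date, ""]) + canonical_amz + resource
    digest = hmac.new(SECRET_KEY.encode(), string_to_sign.encode(), sha1).digest()
    return base64.b64encode(digest).decode()

httpdate = formatdate(usegmt=True)

# s3curl / PHP style: the Date header itself is signed
sig_plain_date = sign_v2("GET", "/admin/", date=httpdate)

# s3cmd style: the Date position is empty and x-amz-date is signed as an amz header
sig_amz_date = sign_v2("GET", "/admin/", date="",
                       amz_headers={"x-amz-date": httpdate})

# Either way the request carries an Authorization header built from the access key
auth_header = "AWS %s:%s" % (ACCESS_KEY, sig_amz_date)
print(auth_header)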

I'm attaching my S3Curl CRUD commands in case someone else gets stuck on this problem.

  1. First, on my client machine, I mapped s3.amazonaws.com to the Gateway server in /etc/hosts:

192.168.0.1 s3.amazonaws.com

S3Curl commands

List Items in a Bucket
./s3curl.pl --debug --id new_admin -- -s -v http://s3.amazonaws.com/admin/

Create Object
./s3curl.pl --contentType application/x-www-form-urlencoded --put=./file.txt --debug --id leo -- -s -v http://s3.amazonaws.com/admin/file.txt

Retrieve Object
./s3curl.pl --debug --id leo -- -s -v http://s3.amazonaws.com/admin/file.txt > t

Delete Object
./s3curl.pl --delete --debug --id leo -- -s -v http://s3.amazonaws.com/admin/file.txt
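
Create Bucket (a sketch, not from my original run; --createBucket is a standard s3curl option, and the bucket name is a placeholder)
./s3curl.pl --createBucket --debug --id leo -- -s -v http://s3.amazonaws.com/new-bucket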

@yosukehara
Member

It seems like you need to execute the create-user command on the manager console, which gives you an "access-key" and a "secret-access-key".

Then you need to create an s3cmd configuration (the documentation is here).
Then you can create a bucket, e.g. "test", and put an object using the retrieved keys.

If you registered the "test" bucket with user "05236" on the manager console, you can use the following credentials:

access-key-id: "05236"
secret-access-key: "802562235"
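
A minimal .s3cfg sketch using those keys (the gateway host name and port below are placeholders; adjust them to your endpoint):

access_key = 05236
secret_key = 802562235
host_base = leofs-gateway.example.com:8080
host_bucket = %(bucket)s.leofs-gateway.example.com:8080
use_https = False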

Also, yesterday I found a REST API bug (already fixed), so I'll provide the next version, which includes the fix, in early April.

Please retry.

@baldwinlouie

With S3cmd, I was unsuccessful even with "create-user". With PHP, S3Curl, and DragonDisk, I was able to authenticate and list items from a bucket (after create-user).

I did some additional debugging with S3cmd. I "hacked" my copy of the S3cmd library so that it does not emit the x-amz-date header; instead, I had it set the "Date" header. After doing this, I was able to successfully issue s3cmd ls.

Attaching the debug output of both the "hacked" and "non-hacked" S3cmd: https://gist.github.com/anonymous/5286974

yosukehara closed this Jun 18, 2013