
Add radosgw support #18

Merged: 1 commit into master from the radosgw branch, Feb 23, 2015

Conversation

@leseb (Member) commented Feb 19, 2015

Signed-off-by: Sébastien Han <sebastien.han@enovance.com>

@leseb (Member Author) commented Feb 19, 2015

@Ulexus: with this config, I assume that the rgw key will be shared across all the gateways.
I also keep that key in the rgw data directory.

Just mentioning this, as I believe it can be argued :).
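
For reference, a rough sketch of how such a shared key could be created and dropped into the data directory; the entity name, capabilities, and keyring path are only illustrative, not necessarily what this image uses:

$ ceph auth get-or-create client.radosgw.gateway mon 'allow rw' osd 'allow rwx' -o /var/lib/ceph/radosgw/keyring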

Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
@leseb (Member Author) commented Feb 19, 2015

rgw starts, but I'm getting this:

2015-02-19 22:44:01.819553 7f02717fa700 -1 failed to list objects pool_iterate returned r=-2
2015-02-19 22:44:01.819562 7f02717fa700 0 ERROR: lists_keys_next(): ret=-2
2015-02-19 22:44:01.819568 7f02717fa700 0 ERROR: sync_all_users() returned ret=-2
2015-02-19 22:44:01.958525 7f0263fff700 0 ERROR: FCGX_Accept_r returned -9

Doesn't look that good.

@Ulexus (Contributor) commented Feb 20, 2015

I haven't actually run radosgw yet, so I don't know the possible failure modes, but since it's complaining about not being able to list the objects in the pool, I'm going to guess that it's failing authentication (key not available?), failing authorization (the rgw key doesn't have access to that pool?), not being told which pool to look in (is that configured somewhere?), or that the Ceph cluster doesn't actually have the pool it is looking for (typo?).
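
A few commands that could help narrow that down; the client id and pool names below are guesses, adjust them to whatever the container actually uses:

$ ceph auth list                          # does the rgw key exist, and what caps does it have?
$ ceph osd lspools                        # were the rgw pools (.rgw, .rgw.root, ...) created?
$ rados --id radosgw.gateway -p .rgw ls   # can the rgw key actually list that pool?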

@leseb (Member Author) commented Feb 20, 2015

The weird thing is that rgw is able to create all the pools it needs, so I don't think it's an authentication issue.

@Ulexus (Contributor) commented Feb 20, 2015

I get the following:
$ docker run --rm -p 80:80 -v /etc/ceph:/etc/ceph -e RGW_NAME=cycore ceph/radosgw
2015-02-20 15:38:29.017311 7f67b81de7c0 0 ceph version 0.80.8 (69eaad7f8308f21573c604f121956e64679a52a7), process radosgw, pid 71
2015-02-20 15:38:38.210728 7f67b81de7c0 0 framework: fastcgi
2015-02-20 15:38:38.210735 7f67b81de7c0 0 starting handler: fastcgi
2015-02-20 15:38:38.216662 7f6773fff700 0 ERROR: can't read user header: ret=-2
2015-02-20 15:38:38.216670 7f6773fff700 0 ERROR: sync_user() failed, user=xxxxxxx ret=-2

Requests seem to work:
2015-02-20 15:38:41.631595 7f67637fe700 1 ====== starting new request req=0x7f676800b180 =====
2015-02-20 15:38:41.632828 7f67637fe700 1 ====== req done req=0x7f676800b180 http_status=200 ======
2015-02-20 15:38:41.938713 7f6762ffd700 1 ====== starting new request req=0x7f67680101e0 =====
2015-02-20 15:38:41.939925 7f6762ffd700 1 ====== req done req=0x7f67680101e0 http_status=404 ======

I'm still figuring out how to actually use it, though.

@leseb (Member Author) commented Feb 20, 2015

Can you store objects?
How did you test it?

@leseb self-assigned this on Feb 20, 2015
@Ulexus (Contributor) commented Feb 20, 2015

Yep; objects store and retrieve; I can make buckets, etc. Everything appears to work.

I ran it with the command I listed above:
docker run --rm -p 80:80 -v /etc/ceph:/etc/ceph -e RGW_NAME=cycore ceph/radosgw
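
For completeness, the S3 user whose key and secret s3cmd needs can be created with radosgw-admin; the uid and display name here are just examples:

$ radosgw-admin user create --uid=testuser --display-name="Test User"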

@Ulexus (Contributor) commented Feb 20, 2015

$ s3cmd ls s3://SCM
$ s3cmd put wf1.pcap s3://SCM 
wf1.pcap -> s3://SCM/wf1.pcap  [1 of 1]
 18550 of 18550   100% in    0s   100.74 kB/s  done
$ s3cmd ls s3://SCM
2015-02-20 16:33     18550   s3://SCM/wf1.pcap
$ s3cmd get s3://SCM/wf1.pcap /tmp/deleteme.pcap
s3://SCM/wf1.pcap -> /tmp/deleteme.pcap  [1 of 1]
 18550 of 18550   100% in    0s   138.61 kB/s  done

@Ulexus (Contributor) commented Feb 20, 2015

Interestingly, I seem to get different errors each time I start radosgw:

2015-02-20 17:11:30.017653 7f2ec91107c0  0 ceph version 0.80.8 (69eaad7f8308f21573c604f121956e64679a52a7), process radosgw, pid 71
2015-02-20 17:11:39.286338 7f2ea12f5700  0 ERROR: can't get key: ret=-2
2015-02-20 17:11:39.286352 7f2ea12f5700  0 ERROR: sync_all_users() returned ret=-2

This latest time, I got no errors:

2015-02-20 17:15:13.006965 7fc42d82b7c0  0 ceph version 0.80.8 (69eaad7f8308f21573c604f121956e64679a52a7), process radosgw, pid 71
2015-02-20 17:15:22.199794 7fc42d82b7c0  0 framework: fastcgi
2015-02-20 17:15:22.199809 7fc42d82b7c0  0 starting handler: fastcgi

@Ulexus (Contributor) commented Feb 20, 2015

And then back to errors...

In every case, radosgw has worked, though, so I'm not sure what the errors are all about.

I take it those don't normally show up, running it outside of Docker?

@leseb (Member Author) commented Feb 20, 2015

Hmm, this is weird. I just tried with "debug_rgw = 20", and it doesn't say much.
Actually, I just saw the same errors on a radosgw virtual machine.

@leseb (Member Author) commented Feb 20, 2015

Hmm, sorry, apparently I'm struggling a bit to get s3cmd working...
I just wonder, do we need to support the "rgw dns name" option too? It looks quite useful.
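
If we did, I suppose it would end up as something like this in ceph.conf (the section name here is only a guess):

[client.radosgw.gateway]
rgw dns name = coreos.novalocal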

I created a user and reused its key and secret.
Here's my s3cmd config file:

[default]
access_key = J6I1CKIAR5Y2GORB7Z2F
bucket_location = US
cloudfront_host = cloudfront.amazonaws.com
default_mime_type = binary/octet-stream
delete_removed = False
dry_run = False
enable_multipart = True
encoding = UTF-8
encrypt = False
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase =
guess_mime_type = True
host_base = coreos.novalocal
host_bucket = %(bucket)s.coreos.novalocal
human_readable_sizes = False
invalidate_on_cf = False
list_md5 = False
log_target_prefix =
mime_type =
multipart_chunk_size_mb = 15
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
recursive = False
recv_chunk = 4096
reduced_redundancy = False
secret_key = NUGY7DTzbD4EFDObxC2Mlh8/G81yMyOvLrFFj419
send_chunk = 4096
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
urlencoding_mode = normal
use_https = False
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error =
website_index = index.html

These two hostnames resolve and point to my Docker rgw:

host_base = coreos.novalocal
host_bucket = %(bucket)s.coreos.novalocal

Then while trying to create a bucket, I get:

$ s3cmd mb s3://bucket
WARNING: Retrying failed request: / ([Errno -2] Name or service not known)
WARNING: Waiting 3 sec...
^CSee ya!

Not sure what's missing... It would be nice if you could shed some light.
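
Maybe it is a DNS thing: with host_bucket set to %(bucket)s.coreos.novalocal, every <bucket>.coreos.novalocal name would have to resolve to the gateway (wildcard DNS). For a single test bucket, an /etc/hosts entry pointing at the gateway should be enough to check that; the address below is just a placeholder:

10.0.0.5   bucket.coreos.novalocal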

@Ulexus (Contributor) commented Feb 22, 2015

I have no idea why, but the page I found that described it says to use all caps for the bucket name, and that worked.
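
That would also fit the DNS guess above: an all-caps name is not DNS-compatible, so s3cmd presumably falls back to path-style requests against host_base instead of a <bucket>.coreos.novalocal subdomain. In other words, something like this works without wildcard DNS:

$ s3cmd mb s3://SCM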


@leseb (Member Author) commented Feb 22, 2015

No problem, if it works for you then you can merge this :)

@Ulexus (Contributor) commented Feb 23, 2015

Actually, I can't; I don't have write access to this repository.

@leseb (Member Author) commented Feb 23, 2015

Ah, OK, got it. You are going to have that soon.
Waiting until then :)

@leseb (Member Author) commented Feb 23, 2015

@Ulexus you should be able to merge this :)

Ulexus added a commit that referenced this pull request Feb 23, 2015
@Ulexus merged commit 670c2ff into master on Feb 23, 2015
@Ulexus (Contributor) commented Feb 23, 2015

Excellent; thanks!

@leseb deleted the radosgw branch on December 17, 2015 at 17:59
mkkie pushed a commit to mkkie/ceph-container that referenced this pull request Nov 17, 2016