
rgw: S3: set EncodingType in ListBucketResult #7712

Merged: 1 commit merged into ceph:master from the listbucket_encoding_type_fix branch on Mar 30, 2016

Conversation

@vitek (Contributor) commented Feb 19, 2016

Set EncodingType to "url" in the ListBucketResult reply if url encoding was requested. This fixes client-side key decoding; for instance, the boto3 Python library doesn't decode keys if no EncodingType is provided.
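
For illustration, here is a minimal boto3 sketch of the client-side behavior described above (not part of this PR; the endpoint, bucket, and credentials are placeholders):

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.example.com",   # hypothetical RGW endpoint
    aws_access_key_id="ACCESS_KEY",           # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

# Recent botocore versions add encoding-type=url to list requests on their own,
# and they url-decode the returned keys only when the ListBucketResult echoes
# <EncodingType>url</EncodingType>.
resp = s3.list_objects(Bucket="my-bucket")
for obj in resp.get("Contents", []):
    # With the fix the keys come back decoded (e.g. "data/foo"); without it
    # they stay url-encoded (e.g. "data%2Ffoo", depending on how the gateway
    # escapes the delimiter).
    print(obj["Key"])
```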

@jbweber (Contributor) commented Mar 4, 2016

After upgrading to a newer version of the AWS CLI recently, I ran into this issue: the original patch that introduced encoding-type stopped working. It looks like the way encoding is handled was changed so that the decode no longer always runs.

I built 0.94.6 and applied your patch, but it didn't fix the issue. I traced it to the fact that there are two methods in radosgw which return a response: one for a versioned response and one for a non-versioned response. The same change needs to be applied around line 306; with that, the problem was resolved for me.
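
For reference, both listing calls exercise those two response paths, so a client needs the EncodingType element in each of them. A minimal sketch with placeholder names (hypothetical endpoint; credentials assumed to come from the environment):

```python
import boto3

s3 = boto3.client("s3", endpoint_url="https://rgw.example.com")  # hypothetical endpoint

# Non-versioned listing (GET bucket) -> ListBucketResult.
plain = s3.list_objects(Bucket="my-bucket", EncodingType="url")

# Versioned listing (GET bucket?versions) -> ListVersionsResult, the second
# response path mentioned above.
versioned = s3.list_object_versions(Bucket="my-bucket", EncodingType="url")

# Both replies should echo the encoding that was applied to the keys.
print(plain.get("EncodingType"), versioned.get("EncodingType"))
```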

@vitek force-pushed the listbucket_encoding_type_fix branch from ae6706d to b1f4aeb on March 4, 2016 07:57
@vitek (Contributor, Author) commented Mar 4, 2016

Hi Jeff,

Thanks for the review, I've updated my commit to handle the versioned response as well.

@yehudasa (Member) commented:

@vitek can you add Signed-off-by to the commit message?

Signed-off-by: Victor Makarov <vitja.makarov@gmail.com>
@vitek force-pushed the listbucket_encoding_type_fix branch from b1f4aeb to d2e281d on March 23, 2016 11:57
@vitek (Contributor, Author) commented Mar 23, 2016

@yehudasa done

jbweber referenced this pull request on Mar 29, 2016 with this commit message:
This change introduces handling for the encoding-type request
parameter on the get bucket operation. An object key may contain
characters which are not supported in XML. Passing the value "url" for
the encoding-type parameter will cause the key to be urlencoded in the
response.

Fixes: #12735
Signed-off-by: Jeff Weber <jweber@cofront.net>
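
To make the referenced commit message concrete, here is a small standard-library sketch of what url-encoding a key looks like (the key is made up, and whether the server escapes "/" itself depends on its encoder; safe="" is used here purely for illustration):

```python
from urllib.parse import quote, unquote

# A hypothetical key containing a control character (0x07), which cannot be
# carried in an XML response body.
key = "data/\x07report 2016.csv"

encoded = quote(key, safe="")   # roughly what the listing returns when encoding-type=url
print(encoded)                  # data%2F%07report%202016.csv

# A client that sees <EncodingType>url</EncodingType> is expected to undo this.
assert unquote(encoded) == key
```
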
@robbat2 (Contributor) commented Mar 29, 2016

@liewegas Since Yehuda is on vacation, can you please merge this for Jewel and backport it to Hammer?
AWS-CLI discussion of the breakage here:
aws/aws-cli#1879

@liewegas added this to the jewel milestone on Mar 30, 2016
@liewegas merged commit 1ac3444 into ceph:master on Mar 30, 2016
@gmihaiescu commented May 19, 2016

Any idea why this regression fix was not pulled into Ceph 0.94.7?

Thank you.

@ayang99 commented May 19, 2016

This is a big deal. It really should have been rolled out in the last Hammer release.

It really, really should have.

@liewegas (Member) commented:

Sorry, we missed this one! There is another RGW bug fix that needs to go out quickly, so we'll do another Hammer point release as soon as it is merged (next few days).

@gmihaiescu commented:

Great news, thanks a lot, Sage.
I assume the other bug fix is for http://tracker.ceph.com/issues/15886

@Xavion commented Jun 4, 2016

Guys, I'm well beyond being fed up with waiting for this trivial bug to be fixed. I cannot understand how something so significant can receive such a low priority. If it isn't fixed (and implemented in DreamObjects) by next Sunday (June 12), I will be filing a formal complaint.

@ayang99 commented Jun 29, 2016

crickets

@ayang99 commented Sep 13, 2016

Hi there,

So, apparently, this fix was pushed via this ticket in the tracker.

We are currently running 0.94.9, which supposedly contains this fix, but we are finding that AWS S3 commands still do not behave correctly.

For example:

$ aws s3 --endpoint-url https://object.myradosgw.org:9080 cp s3://my.bucket.src/data/ s3://my.bucket.dest/data/ --exclude "*" --include "eb08*" --recursive

We want to copy every file in s3://my.bucket.src/data into s3://my.bucket.dest/data whose name begins with 'eb08'. However, the cp command fails with the following errors:

copy failed: s3://my.bucket.src/data%2F00b2a66a-8bad-58f3-8674-dca412d8b846 to s3://my.bucket.dest/data/2F00b2a66a-8bad-58f3-8674-dca412d8b846 An error occurred (NoSuchKey) when calling the CopyObject operation: Unknown
copy failed: s3://my.bucket.src/data%2F00b13cae-8761-56db-bcdd-4030c36148be to s3://my.bucket.dest/data/2F00b13cae-8761-56db-bcdd-4030c36148be An error occurred (NoSuchKey) when calling the CopyObject operation: Unknown
copy failed: s3://my.bucket.src/data%2F00b1eebe-15b4-586d-a76a-a8701af83073 to s3://my.bucket.dest/data/2F00b1eebe-15b4-586d-a76a-a8701af83073 An error occurred (NoSuchKey) when calling the CopyObject operation: Unknown
<---8<---snip--->

The copy operation is not interpreting the prefix slash properly. While it's not clear whether the NoSuchKey error is raised on the source or the destination side, the target path is definitely not being constructed properly. There is something odd in the source path handling as well: the files being "copied" do not satisfy the --include condition. They are roughly the first three files in the container lexicographically (the command listed them slightly out of order, possibly because of multiple worker threads).

The cp command works properly for a single-file copy:

$ aws s3 --endpoint-url https://object.myradosgw.org:9080 cp s3://my.bucket.src/data/3cbc006a-966f-5a79-b384-241393f92454 s3://my.bucket.dest/data/

copy: s3://my.bucket.src/data/3cbc006a-966f-5a79-b384-241393f92454 to s3://my.bucket.dest/data/3cbc006a-966f-5a79-b384-241393f92454

The above command works as expected against regular AWS S3 buckets.

I've commented on tracker issue 15896, and I also opened a new ticket: 17272.
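
One way to narrow a report like this down is to look at the raw listing reply and check whether the gateway echoes EncodingType and how the keys are actually escaped. A minimal diagnostic sketch, not a fix (endpoint and bucket taken from the report above; credentials assumed to come from the environment):

```python
import boto3

s3 = boto3.client("s3", endpoint_url="https://object.myradosgw.org:9080")

# With an explicit EncodingType, recent botocore leaves the keys exactly as the
# server returned them, so the output shows whether "/" came back as %2F and
# whether the reply carried EncodingType at all.
resp = s3.list_objects(Bucket="my.bucket.src", EncodingType="url")
print("EncodingType in reply:", resp.get("EncodingType"))
for obj in resp.get("Contents", [])[:5]:
    print(obj["Key"])
```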
