
rgw cloud sync #18932

Merged
127 commits merged into ceph:master from yehudasa:wip-rgw-cloud-sync on Apr 13, 2018

Conversation

7 participants

@yehudasa (Member) commented Nov 15, 2017

No description provided.

@liuchang0812 (Contributor) commented Nov 15, 2017

> Note that I'm not completely done with the work there. I still want to add some more flexible config.

Hey @yehudasa, we are testing your PR now. It's an awesome feature and seems almost done. What else do you want to do for this PR? We are happy to help if we can.

Here are some questions:

  1. We should create a document describing how to deploy this sync plugin.
  2. S3 limits the number of buckets; a user can only create 10 buckets. Should we support exporting all RGW data to a single S3 bucket?
  3. S3 bucket names are globally unique, and this sync plugin's bucket name mapping is fixed. What should we do if the target bucket name already exists?
  4. Could we support a bucket-name-prefix config? We want this plugin to support Tencent Cloud's object storage service, whose API is almost compatible with S3, except for one more limitation: all bucket names must begin with the user's appid, e.g. "100001241-bucket1", "100001241-bucket2".

Update: sorry, I found that this plugin exports all files to only one S3 bucket.

FYI:

➜  build git:(master) ✗ s3cmd -c s3cfg ls
2017-11-14 09:59  s3://rgwxdac656c0f09e4cfe8d71cd69ecf8fed7ed7

➜  build git:(wip-export-to-s3) ✗ ./bin/radosgw-admin --rgw-zone=zone2 sync status --source-zone=zone1

          realm 9e4a325a-b839-4f68-b138-5ffeed0bcee6 (test-s3-export)
      zonegroup dac656c0-f09e-4cfe-8d71-cd69ecf8fed7 (s3-export)
           zone c71caf32-42f8-48c4-8160-90bd4ea5705f (zone2)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: cc274270-587f-42cb-b6d1-847303ab7d0c (zone1)
                        syncing
                        debug lc: 128 shards
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is behind on 1 shards
                        oldest incremental change not applied: 2017-11-14 18:07:27.0.683381s
int RGWCoroutinesStack::unwind(int retcode)
{
  rgw_spawned_stacks *src_spawned = &(*pos)->spawned;

  if (pos == ops.begin()) {
    ldout(cct, 0) << "stack " << (void *)this << " end" << dendl;
@liuchang0812 (Contributor) commented Nov 15, 2017

it would be better to use ldout(cct, 15).
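
For context, the number passed to ldout() is the message's verbosity level; a quick illustration of the difference being suggested (same message, two levels):

ldout(cct, 0)  << "stack " << (void *)this << " end" << dendl;   // level 0: printed at the default log level
ldout(cct, 15) << "stack " << (void *)this << " end" << dendl;   // level 15: only printed when the rgw debug level is raised (e.g. debug_rgw = 20)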

@liuchang0812 (Contributor) commented Nov 15, 2017

@yehudasa I'm not sure whether it's a bug. The plugin only creates the bucket and does not upload the objects. It seems that the RGW coroutine is blocked on the create-bucket request (the request itself succeeds; I saw it in the other RGW's log). I added some debug logging and could not see any of the debug output after the create-bucket request.

I removed the create-bucket code as follows, and all objects were synced to S3.

➜  build git:(master) ✗ s3cmd -c s3cfg ls  s3://rgwxdac656c0f09e4cfe8d71cd69ecf8fed7ed7/lc/tests3/
2017-11-15 12:43      5864   s3://rgwxdac656c0f09e4cfe8d71cd69ecf8fed7ed7/lc/tests3/1.txt
2017-11-15 12:43      5864   s3://rgwxdac656c0f09e4cfe8d71cd69ecf8fed7ed7/lc/tests3/10.txt
2017-11-15 12:43      5864   s3://rgwxdac656c0f09e4cfe8d71cd69ecf8fed7ed7/lc/tests3/2.txt
2017-11-15 12:43      5864   s3://rgwxdac656c0f09e4cfe8d71cd69ecf8fed7ed7/lc/tests3/3.txt
2017-11-15 12:43      5864   s3://rgwxdac656c0f09e4cfe8d71cd69ecf8fed7ed7/lc/tests3/4.txt
2017-11-15 12:43      5864   s3://rgwxdac656c0f09e4cfe8d71cd69ecf8fed7ed7/lc/tests3/5.txt
2017-11-15 12:43      5864   s3://rgwxdac656c0f09e4cfe8d71cd69ecf8fed7ed7/lc/tests3/6.txt
2017-11-15 12:43      5864   s3://rgwxdac656c0f09e4cfe8d71cd69ecf8fed7ed7/lc/tests3/7.txt
2017-11-15 12:43      5864   s3://rgwxdac656c0f09e4cfe8d71cd69ecf8fed7ed7/lc/tests3/8.txt
2017-11-15 12:43      1696   s3://rgwxdac656c0f09e4cfe8d71cd69ecf8fed7ed7/lc/tests3/cmake_install.cmake
index e7e8e09..55010c3 100644                                                    
--- a/src/rgw/rgw_sync_module_aws.cc                                             
+++ b/src/rgw/rgw_sync_module_aws.cc                                             
@@ -761,6 +761,7 @@ public:                                                      
                                                                                 
   int operate() override {                                                      
     reenter(this) {                                                             
+      ldout(sync_env->cct, 0) << "DEBUGLC: Handle Remote Obj" << dendl;         
       ret = decode_attr(attrs, RGW_ATTR_PG_VER, &src_pg_ver, (uint64_t)0);      
       if (ret < 0) {                                                            
         ldout(sync_env->cct, 0) << "ERROR: failed to decode pg ver attr, ignoring" << dendl;
@@ -787,20 +788,27 @@ public:                                                    
       obj_path = bucket_info.bucket.name + "/" + key.name;                      
                                                                                 
       target_bucket_name = aws_bucket_name(bucket_info);                        
+      /*                                                                        
       if (bucket_created.find(target_bucket_name) == bucket_created.end()){     
+                                                                                
         yield {                                                                 
-          ldout(sync_env->cct,0) << "AWS: creating bucket" << target_bucket_name << dendl;
+          ldout(sync_env->cct,0) << "AWS: creating bucket: " << target_bucket_name << dendl;
           bufferlist bl;                                                        
           call(new RGWPutRawRESTResourceCR <int> (sync_env->cct, conf.conn.get(),
                                                   sync_env->http_manager,       
                                                   target_bucket_name, nullptr, bl, nullptr));
         }                                                                       
+        ldout(sync_env->cct,0) << "DEBUGLC: after coroution" << dendl;          
         if (retcode < 0) {                                                      
           return set_cr_error(retcode);                                         
         }                                                                       
+        ldout(sync_env->cct,0) << "DEBUGLC: AWS: create bucket successfully" << target_bucket_name << dendl;
                                                                                 
         bucket_created[target_bucket_name] = true;                              
       }                                                                         
+      */ 

@theanalyst (Member) commented Nov 15, 2017

@liuchang0812 the code you pointed out only creates a bucket if there is no bucket already created; this shouldn't affect objects as such.

@liuchang0812 (Contributor) commented Nov 15, 2017

> @Liuchang0812 the code you pointed out only creates a bucket if there is no bucket already created, this shouldn't affect objects as such

@theanalyst Thank you, it was my mistake not to copy the complete diff.

The code I pointed out is in RGWAWSHandleRemoteObjCBCR. It creates the bucket first (as you said, only if no bucket has been created yet), then uploads the object. See: https://github.com/ceph/ceph/pull/18932/files#diff-748bb041ceab0c5fea818858731e02ebR820

Like this:

log1;                       //  I can find this log
yield call{create bucket};  //  the bucket is created successfully
log2;                       //  I can't find this log
yield call{update object};  //  the object never appears in S3; this line does not seem to be executed

I'll test it again tomorrow. Suggestions are appreciated!
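
For reference, the flow sketched above maps onto the RGW coroutine pattern visible in the diff earlier in this comment; a rough pseudocode sketch (the CR class names here are placeholders, not the actual classes):

int operate() override {
  reenter(this) {
    ldout(sync_env->cct, 0) << "log1" << dendl;          // visible in the log
    yield call(new CreateBucketCR(/* ... */));           // suspend until the sub-coroutine completes
    if (retcode < 0) {
      return set_cr_error(retcode);
    }
    ldout(sync_env->cct, 0) << "log2" << dendl;          // never shows up if the stack hangs on the call above
    yield call(new PutObjectCR(/* ... */));              // therefore never reached
    if (retcode < 0) {
      return set_cr_error(retcode);
    }
    return set_cr_done();
  }
  return 0;
}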

@yehudasa (Member, Author) commented Nov 15, 2017

@liuchang0812 with regard to your comments:

1. I totally agree that a document needs to be created. I think the #16284 PR had a basic doc; we can probably adapt from that one.
2. Currently it puts everything in one bucket, but I want to make it flexible, so that we can specify a different number of buckets to shard data into, and potentially have the ability to create ad-hoc configurations for different source buckets.
3. We definitely need to be able to specify a target bucket prefix. In general I think the created bucket names/prefixes should be based on the sync instance id by default, but we should be able to modify that behaviour (a rough sketch of such a configuration follows below).
4. Note that we are still lacking sync of objects' extended metadata and of object ACLs. There will need to be some way to map source ACLs into destination ACLs, and probably to keep data about the source ACLs in another extended attribute.
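
For a sense of what such a per-zone configuration could look like once the flexible config lands, here is a purely illustrative sketch. radosgw-admin zone modify with --tier-type/--tier-config is the existing sync-module mechanism, but the key names below are assumptions, not something this PR has fixed yet (note the tier type is renamed from 'aws' to 'cloud' later in this PR):

radosgw-admin zone modify --rgw-zone=zone2 --tier-type=cloud \
    --tier-config=connection.endpoint=http://localhost:8006,connection.access_key=<key>,connection.secret=<secret>,target_path=<prefix>-${bucket}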

@yehudasa (Member, Author) commented Nov 15, 2017

@liuchang0812 could it be that there's an issue with bucket creation in your system? E.g., a bucket cannot be recreated if it already exists? @theanalyst I've seen an issue with us creating a bucket for every object we were syncing; not sure I fixed it.

@liuchang0812 (Contributor) commented Nov 15, 2017

Hey @yehudasa, I used another Ceph RGW as the S3 service. The RGW log says that the bucket was created successfully.

I noticed that the AWS sync plugin sends a create-bucket request like "sending request to http://localhost:8006/rgwxdac656c0f09e4cfe8d71cd69ecf8fed7ed7?rgwx-zonegroup=dac656c0-f09e-4cfe-8d71-cd69ecf8fed7". The rgwx-zonegroup argument is not used by S3; should we remove it?

@yehudasa (Member, Author) commented Nov 15, 2017

@liuchang0812 yeah, we don't need that argument for these requests. Not sure how easy it would be to remove it in a clean way without making the internal interfaces even uglier.

@liuchang0812 (Contributor) commented Nov 16, 2017

@yehudasa @theanalyst I found out why this sync plugin blocks on bucket creation: the S3 service (Ceph RGW) sends a 100-continue response. It works after I disable that feature (via rgw_print_continue = false).
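
For anyone reproducing this, the option is set on the gateway that is acting as the S3 target, e.g. in its ceph.conf section (the section name here is just an example):

[client.rgw.target]
rgw_print_continue = false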

@Leeshine (Contributor) commented Nov 16, 2017

@yehudasa, we should remove the rgwx-zonegroup argument from the request to the destination RGW (which acts as an S3 service); otherwise we get a 404 response from the destination RGW, since the related zonegroup does not exist there. The relevant log from the destination RGW is below:

2017-11-16 11:47:35.319700 7fcfb3c5b700  2 req 1:0.006131:s3:PUT /rgwx9f6ee80c40e0430fbed41025fb502334334:create_bucket:normalizing buckets and tenants
2017-11-16 11:47:35.319737 7fcfb3c5b700 10 s->object=<NULL> s->bucket=rgwx9f6ee80c40e0430fbed41025fb502334334
2017-11-16 11:47:35.319782 7fcfb3c5b700  2 req 1:0.006213:s3:PUT /rgwx9f6ee80c40e0430fbed41025fb502334334:create_bucket:init permissions
2017-11-16 11:47:35.319790 7fcfb3c5b700  2 req 1:0.006221:s3:PUT /rgwx9f6ee80c40e0430fbed41025fb502334334:create_bucket:recalculating target
2017-11-16 11:47:35.319792 7fcfb3c5b700  2 req 1:0.006222:s3:PUT /rgwx9f6ee80c40e0430fbed41025fb502334334:create_bucket:reading permissions
2017-11-16 11:47:35.319814 7fcfb3c5b700  2 req 1:0.006241:s3:PUT /rgwx9f6ee80c40e0430fbed41025fb502334334:create_bucket:init op
2017-11-16 11:47:35.319838 7fcfb3c5b700  2 req 1:0.006269:s3:PUT /rgwx9f6ee80c40e0430fbed41025fb502334334:create_bucket:verifying op mask
2017-11-16 11:47:35.319857 7fcfb3c5b700 20 required_mask= 2 user.op_mask=7
2017-11-16 11:47:35.319858 7fcfb3c5b700  2 req 1:0.006289:s3:PUT /rgwx9f6ee80c40e0430fbed41025fb502334334:create_bucket:verifying op permissions
2017-11-16 11:47:35.320707 7fcfb3c5b700  2 req 1:0.007137:s3:PUT /rgwx9f6ee80c40e0430fbed41025fb502334334:create_bucket:verifying op params
2017-11-16 11:47:35.320727 7fcfb3c5b700  2 req 1:0.007157:s3:PUT /rgwx9f6ee80c40e0430fbed41025fb502334334:create_bucket:pre-executing
2017-11-16 11:47:35.320769 7fcfb3c5b700  2 req 1:0.007199:s3:PUT /rgwx9f6ee80c40e0430fbed41025fb502334334:create_bucket:executing
2017-11-16 11:47:35.320874 7fcfb3c5b700  5 NOTICE: call to do_aws4_auth_completion
2017-11-16 11:47:35.320965 7fcfb3c5b700 20 get_system_obj_state: rctx=0x7fcfb3c53410 obj=zone1.rgw.meta:root:rgwx9f6ee80c40e0430fbed41025fb502334334 state=0x7fcfeac3fac0 s->prefetch_data=0
2017-11-16 11:47:35.320976 7fcfb3c5b700 10 cache get: name=zone1.rgw.meta+root+rgwx9f6ee80c40e0430fbed41025fb502334334 : miss
2017-11-16 11:47:35.321817 7fcfb3c5b700 10 cache put: name=zone1.rgw.meta+root+rgwx9f6ee80c40e0430fbed41025fb502334334 info.flags=0x0
2017-11-16 11:47:35.321830 7fcfb3c5b700 10 adding zone1.rgw.meta+root+rgwx9f6ee80c40e0430fbed41025fb502334334 to cache LRU end
2017-11-16 11:47:35.321882 7fcfb3c5b700  0 could not find zonegroup 9f6ee80c-40e0-430f-bed4-1025fb502334 in current period
2017-11-16 11:47:35.321893 7fcfb3c5b700 20 rgw_create_bucket returned ret=-2 bucket=rgwx9f6ee80c40e0430fbed41025fb502334334[])
2017-11-16 11:47:35.321909 7fcfb3c5b700  2 req 1:0.008339:s3:PUT /rgwx9f6ee80c40e0430fbed41025fb502334334:create_bucket:completing
2017-11-16 11:47:35.322352 7fcfb3c5b700  2 req 1:0.008782:s3:PUT /rgwx9f6ee80c40e0430fbed41025fb502334334:create_bucket:op status=-2
2017-11-16 11:47:35.322363 7fcfb3c5b700  2 req 1:0.008793:s3:PUT /rgwx9f6ee80c40e0430fbed41025fb502334334:create_bucket:http status=404
2017-11-16 11:47:35.322381 7fcfb3c5b700  1 ====== req done req=0x7fcfb3c53dc0 op status=-2 http_status=404 ======
2017-11-16 11:47:35.322417 7fcfb3c5b700 20 process_request() returned -2

@Leeshine (Contributor) commented Nov 16, 2017

We have added a subclass of RGWRESTConn that does not send this param in the request; shall we push the changes to this branch? @yehudasa
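
Roughly, the subclass being described would override the hook that appends the multisite query parameters; a hedged sketch only (the method name and signature are assumptions based on how RGWRESTConn builds request parameters, not a quote of the actual change):

class S3RESTConn : public RGWRESTConn {
public:
  using RGWRESTConn::RGWRESTConn;

  void populate_params(param_vec_t& params, const rgw_user *uid,
                       const std::string& zonegroup) override {
    // keep the uid handling, but skip the rgwx-zonegroup parameter,
    // since a plain S3 endpoint does not understand it
    populate_uid(params, uid);
  }
};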

@yehudasa (Member, Author) commented Nov 16, 2017

@Leeshine you can send a PR against my branch

@yehudasa (Member, Author) commented Nov 16, 2017

@liuchang0812 it's not clear to me why a 100 response would affect anything. Did the 100 response reach the gateway? I think it should be hidden at the libcurl level.

@liuchang0812 (Contributor) commented Nov 20, 2017

FYI, some information from "cr dump": https://gist.github.com/Liuchang0812/700fbc9bf1abdfa5a58f5dce979d1495

I added some logging; it seems that the RGWPostRawRESTResourceCR::request_complete function is never invoked.

{
				"stack": "0x56256b581ed0",
				"run_count": 0,
				"ops": [{
					"description": "bucket sync single entry (source_zone=236a0ff0-b365-4edc-9a5c-79523144a769) b=thisisbucket:236a0ff0-b365-4edc-9a5c-79523144a769.24270.1/asdfsfdasfd[0] log_entry=00000000058.59.1 op=0 op_state=1",
					"type": "26RGWBucketSyncSingleEntryCRINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE11rgw_obj_keyE",
					"history": [{
						"timestamp": "2017-11-20 09:22:35.571201Z",
						"status": "init"
					}],
					"status": {
						"status": "syncing obj",
						"timestamp": "2017-11-20 09:22:35.572221Z"
					}
				}, {
					"type": "23RGWAWSHandleRemoteObjCR"
				}, {
					"type": "25RGWAWSHandleRemoteObjCBCR"
				}, {
					"type": "33RGWAWSStreamObjToCloudMultipartCR"
				}, {
					"type": "21RGWAWSInitMultipartCR"
				}, {
					"type": "24RGWPostRawRESTResourceCRIN4ceph6buffer4listEE"
				}]
			}

@yehudasa (Member, Author) commented Nov 20, 2017

@liuchang0812 it sounds like an issue with RGWHTTPClient. It could be that the 100 response triggers the completion, but since the request didn't really complete, we end up hanging there. What RGW version are you running as your AWS server?

@liuchang0812 (Contributor) commented Nov 21, 2017

@yehudasa yeah, it is an issue with RGWHTTPClient: the pause variable is not initialized in RGWHTTPClient::send_http_data, so libcurl's read callback never exits by returning 0; it keeps returning CURL_READFUNC_PAUSE instead (see the comments below).

https://github.com/ceph/ceph/pull/18932/files#diff-700a0900e78a06de6e4b2236f823507aR190

size_t RGWHTTPClient::send_http_data(void *const ptr,
                                     const size_t size,
                                     const size_t nmemb,
                                     void *const _info)
{
  rgw_http_req_data *req_data = static_cast<rgw_http_req_data *>(_info);
  Mutex::Locker l(req_data->lock);
  if (!req_data->registered)
  {
    return 0;
  }
  bool pause; // <-- this is not initialized. 
  int ret = req_data->client->send_data(ptr, size * nmemb, &pause);  // <- this function will not modify pause
  if (ret < 0)
  {
    dout(0) << "WARNING: client->receive_data() returned ret=" << ret << dendl;
  }
  if (ret == 0 &&
      pause)
  {
    req_data->write_paused = true;
    return CURL_READFUNC_PAUSE;  // <-- this function will always return CURL_READFUNC_PAUSE, and blocks a curl_easy_handle
  }
  return ret;
}

I fixed this issue and disabled the Expect: 100-continue header; the AWS sync plugin seems to work now.
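
A minimal sketch of the two changes described, for reference (the exact committed fix may differ; 'h' and 'curl_handle' below stand for whatever header list and easy handle the request is built with):

bool pause = false;  // give the flag a defined value: if send_data() returns without touching it, we must not pause
int ret = req_data->client->send_data(ptr, size * nmemb, &pause);

// Client-side suppression of "Expect: 100-continue": passing an empty Expect header
// tells libcurl not to add it to the request.
h = curl_slist_append(h, "Expect:");
curl_easy_setopt(curl_handle, CURLOPT_HTTPHEADER, h);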

But the HTTP flow is broken if I do not disable the Expect: 100-continue header, as shown below:

[screenshot of the broken HTTP flow]

@yehudasa (Member, Author) commented Nov 21, 2017

Do you still have 'rgw_print_continue' disabled on the target rgw? This might be a problem.

@Leeshine (Contributor) commented Nov 22, 2017

Hey @yehudasa, in order to support syncing data to Tencent Cloud's object storage service (COS, whose API is almost compatible with S3), we have been testing your branch recently, and here are some questions:

  1. S3 supports both virtual-hosted-style and path-style requests, but COS only supports virtual-hosted-style, while it seems that RGW can only send path-style requests. Shall we support sending virtual-hosted-style requests from RGW?
  2. COS has an extra limitation: all bucket names must end with the user's appid, e.g. "bucket1-100001241", "bucket2-100001241". In that case, shall we support a bucket-name-suffix config?
  3. In the standard HTTP headers (and in S3's documentation), the date field name should be "Date" rather than "DATE", and the authorization field name should be "Authorization" rather than "AUTHORIZATION"; maybe we should fix that.
  4. In the standard HTTP headers (and in S3's documentation), the date should be formatted as defined by the RFC 7231 Date/Time Formats, e.g. Wed, 22 Nov 2017 08:12:31 GMT, but the current code uses the ISO 8601 format. Because of this we get a response like "AWS authentication requires a valid Date or x-amz-date header" from COS. Shall we support RFC 7231's date format in RGW? (A small formatting example follows below.)

To solve these problems we have made some modifications on top of your branch, and now we can sync data from RGW to COS as expected. If you think these are general problems and approve of the approach, I will open a PR later.
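
For reference, the RFC 7231 (IMF-fixdate) format mentioned in point 4 can be produced with strftime; a small self-contained example:

#include <cstdio>
#include <ctime>

int main() {
  char buf[64];
  std::time_t t = std::time(nullptr);
  std::tm *tm = std::gmtime(&t);   // HTTP dates are always expressed in GMT
  // Produces e.g. "Wed, 22 Nov 2017 08:12:31 GMT" (day/month names assume the C locale)
  std::strftime(buf, sizeof(buf), "%a, %d %b %Y %H:%M:%S GMT", tm);
  std::puts(buf);
  return 0;
}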

@Leeshine (Contributor) commented Nov 24, 2017

Hey @yehudasa, here we encountered a new problem: after all parts of an object have been sent via multipart upload, RGW sends a CompleteMultipartUpload POST that uses Transfer-Encoding: chunked, as below:

POST /admin/samonlv/test10M?uploadId=1511495351fb2b090a5fa1a4767245403880cd0d47e3742b24fc30e34658151b9ce363536e HTTP/1.1
Host: 332256-1253596042.cos.ap-chengdu.myqcloud.com
Accept: */*
Transfer-Encoding: chunked
Authorization: AWS AKIDDG6kAMn1UhfNp4q9yc4LODn9H9l9qCqE:UW9y/V74tH2UFHsRdfve0JIfYM4=
Date: Fri, 24 Nov 2017 03:49:39 GMT
Expect: 100-continue

HTTP/1.1 100 Continue

93
<CompleteMultipartUpload><Part><PartNumber>1</PartNumber><ETag>&quot;81a86eae0f9276ac097b5aab400e4f4a&quot;</ETag></Part></CompleteMultipartUpload>
0

But in fact S3 does not support Transfer-Encoding: chunked; if we send that POST to S3, we get a NotImplemented error code. In this case we should add an option to avoid using Transfer-Encoding: chunked when sending requests to S3.
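
For what it's worth, the usual way to avoid chunked transfer encoding with libcurl on such a POST is to hand it the whole body with a known size, so it sends a Content-Length header instead; a small sketch (URL and body are placeholders based on the capture above):

#include <curl/curl.h>
#include <cstring>

int main() {
  const char *body =
      "<CompleteMultipartUpload><Part><PartNumber>1</PartNumber>"
      "<ETag>\"81a86eae0f9276ac097b5aab400e4f4a\"</ETag></Part></CompleteMultipartUpload>";

  curl_global_init(CURL_GLOBAL_ALL);
  CURL *curl = curl_easy_init();
  curl_easy_setopt(curl, CURLOPT_URL, "http://endpoint.example/bucket/object?uploadId=...");
  curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
  curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)strlen(body));  // known size => Content-Length, no chunked encoding
  CURLcode res = curl_easy_perform(curl);
  curl_easy_cleanup(curl);
  curl_global_cleanup();
  return res == CURLE_OK ? 0 : 1;
}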

@yehudasa (Member, Author) commented Jan 8, 2018

@liuchang0812 note that your PRs were merged into my branch.

@liuchang0812 (Contributor) commented Jan 8, 2018

@yehudasa Roger that, thank you. We will test your recent changes; syncing metadata is great.

@yehudasa (Member, Author) commented Jan 16, 2018

@liuchang0812 the new config is in. What's left, I think, is ACL mappings.

@yehudasa yehudasa force-pushed the yehudasa:wip-rgw-cloud-sync branch 2 times, most recently from 96497ba to 2c3ad4d Jan 23, 2018

yehudasa and others added some commits Feb 19, 2018

rgw: api adjustment following rebase
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
test/rgw: initial work on cloud sync test
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
rgw: cloud sync: store versioned epoch in target object
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
rgw: cloud sync: store source object info in destination object
store extra meta params on target object (original name, version_id, etag,
etc.)

Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
rgw: fix parse_tier_config_param function
Signed-off-by: Chang Liu <liuchang0812@gmail.com>
rgw: coroutines: cancel stacks on teardown
If we don't cancel stacks, ops might not be destructed, so ops callbacks
could still be active.

Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
rgw: don't call http data callbacks under lock
There is no need to hold req_data->lock when calling into client
callbacks. This removes an unneeded lock dependency (which is a
problem when cancelling coroutine stacks).

Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
rgw: streaming put also stores content_type and other fields
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
test/rgw/test_multi: fix a few tests to only iterate over rw zones
Some of the tests require at least two read-write (regular rgw) zones

Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
test/rgw: zone_cloud: deal with key representation and other fixes
Needed to present a key to the tests that reflected its original name
and version_id (and etag), so that the callers don't need to be modified.
However, this can only be achieved if we get the key, which doesn't work
if the caller was just listing the bucket objects. Created a new CloudKey
class to deal with the different issues there.
Also, other test related fixes.

Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
rgw: don't store etag with extra null character at the end
head objects etag attr doesn't need to store an extra null char.

Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
rgw: fixes following code review
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
rgw: etag fixes
Use string instead of bufferlist to avoid potential issues.

Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
rgw: rename aws tier type to 'cloud'
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>

@yehudasa yehudasa force-pushed the yehudasa:wip-rgw-cloud-sync branch from 0193a9e to df645ae Apr 12, 2018

yehudasa added some commits Apr 5, 2018

doc/radosgw: cloud sync docs
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
rgw: force last writer wins on marker writes
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>

@yehudasa yehudasa force-pushed the yehudasa:wip-rgw-cloud-sync branch from df645ae to 1034a68 Apr 12, 2018

yehudasa added some commits Apr 13, 2018

json_formattable: fix out of bounds array entity removal
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
rgw/tests_http_manager: fix initialization
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>

@cbodley cbodley merged commit df6d5b1 into ceph:master Apr 13, 2018

5 checks passed

Docs: build check OK - docs built
Signed-off-by: all commits in this PR are signed
Unmodified Submodules: submodules for project are unmodified
make check: make check succeeded
make check (arm64): make check succeeded

@cbodley (Contributor) commented Apr 13, 2018

🎉 great work! 🎉

@yehudasa (Member, Author) commented Apr 15, 2018

Thanks to all who helped and contributed to this one; it was a real collaborative effort. Note that I already have a few fixes on top of this one, and a couple more issues that I'll work on in the next few days. Specifically, there are some Amazon compatibility issues, multipart sync (that is, of larger objects) doesn't set ACLs and meta attributes correctly, and there is a problem with splice coroutine cancellation (e.g., when the remote endpoint doesn't accept requests).

smithfarm added a commit to smithfarm/ceph that referenced this pull request Nov 6, 2018

librgw: initialize curl and http client for multisite
Fixes: http://tracker.ceph.com/issues/36302

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 1d44ba0)

Conflicts
    - src/rgw/librgw.cc
- remove rgw_http_client_init() and rgw_http_client_cleanup() calls because
  these methods came from ceph#18932 which
  wasn't backported to luminous

theanalyst added a commit to theanalyst/ceph that referenced this pull request Nov 22, 2018

librgw: initialize curl and http client for multisite
Fixes: http://tracker.ceph.com/issues/36302

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 1d44ba0)

Conflicts
    - src/rgw/librgw.cc
- remove rgw_http_client_init() and rgw_http_client_cleanup() calls because
  these methods came from ceph#18932 which
  wasn't backported to luminous

@huangmingyou commented Apr 18, 2019

Some AWS regions' S3 endpoints only support v4 signatures, but this module uses v2 signatures by default. Could a config option be added to choose the signature version?

@cbodley (Contributor) commented Apr 18, 2019

Thanks @huangmingyou. As far as I know we haven't implemented the client side of v4 signatures; there's a feature request for this at http://tracker.ceph.com/issues/39138.
