S3 CopyObject presignedURL #1275
@4406arthur Sorry that you had to face this issue. Please help us understand your use case by providing some more information. Are you trying to download the object from S3 to your local system using a "copy" command? Please correct me if I am wrong. Also, please share the stack trace of the error you get when you run the curl command. P.S. You can also look at sample code for the "get" and "put" operations on S3 in this previous issue: #239
@imshashank, hi.
No, I want to copy an object within S3. In my use case, the key prefix works like a path in a file system.
Presigned URLs work as expected for put, get, delete, and head.
@4406arthur Sorry for the late reply. It looks like the CopySource needs to include the source bucket name: http://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#copyobject Please try it and let me know if you still face the issue.
@imshashank, I prefixed it with the bucket name; it still doesn't work.
@4406arthur What error did you get? Can you please share it?
Same as before. Here is the error response from my presigned URL:
<Error>
<Code>RequestTimeout</Code>
<Message>Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.</Message>
<RequestId>xxxxxxxx9E41506</RequestId>
<HostId>xxxxxxxxxxxxxBZTmK7yjpCKF81ho9cAeCc</HostId>
</Error>
@4406arthur I used your code for regions us-west-2 and ap-northeast-2 and it worked perfectly fine for me. Can you make sure that you have the appropriate file permissions to be able to copy the given object? Please run this on some test file, try to copy it into the same folder, and share if you are still getting the error. P.S. For the curl command I just used the file_bytes as 0.
@imshashank, with Content-Length: 0 it responds perfectly fine, I know. But does it really copy the object, or does it just create an empty object?
@4406arthur This is more of a curl requirement. Since you are copying the object between buckets within S3, you are not actually sending any file body in the request.
After triggering this CopyObject presigned URL with Content-Length: 0, I expect to see the copied object in my S3 bucket.
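One way to tell a real copy from an empty object is to compare the HeadObject metadata of source and destination. A minimal sketch, assuming the aws-sdk-php result array shape; the function name and sample values are hypothetical:

```php
<?php
// Sketch: compare HeadObject results to tell a real copy from an empty one.
// The ContentLength/ETag keys follow the aws-sdk-php result array shape;
// the sample values below are illustrative.
function copyLooksComplete(array $sourceHead, array $destHead): bool
{
    return $sourceHead['ContentLength'] === $destHead['ContentLength']
        && $sourceHead['ETag'] === $destHead['ETag'];
}

// The symptom in this thread: the destination carries the empty-body ETag.
var_dump(copyLooksComplete(
    ['ContentLength' => 1024, 'ETag' => '"abc123"'],
    ['ContentLength' => 0,    'ETag' => '"d41d8cd98f00b204e9800998ecf8427e"']
)); // bool(false)
```

Note that ETags only compare like this for non-multipart objects; ContentLength is the more reliable signal for this particular 0-byte symptom.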
While we continue to look into whether this is an issue specific to the PHP SDK, we would like to encourage you to contact S3 support via one of the methods provided on the AWS Contact Us page. To speed up the process, you can include the …
@4406arthur Has there been any update on this issue? Did you try to open a ticket with the S3 team? |
After trying the same thing with aws-sdk-go, I got the same result.
Thanks for the response. Please let us know if you need help with anything else. |
@4406arthur It looks like the 'CopySource' parameter needs to be prefixed with the source bucket:
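To make the expected format concrete, here is a small sketch of how CopySource is typically assembled: `<source-bucket>/<source-key>`, with no leading slash on the key. The helper name and the bucket/key values are hypothetical, not part of the SDK:

```php
<?php
// Hypothetical helper: CopySource is "<source-bucket>/<source-key>",
// with no leading slash on the key part.
function buildCopySource(string $bucket, string $key): string
{
    return $bucket . '/' . ltrim($key, '/');
}

echo buildCopySource('my-source-bucket', '/photos/cat.jpg');
// my-source-bucket/photos/cat.jpg
```

For keys containing special characters, the value should additionally be URL-encoded per path segment; that is omitted here for brevity.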
@kstich, I tried that before; see my reply on May 18.
@4406arthur Sorry, I must have missed that comment. We're working with the service team to get to a resolution. |
I've been struggling with the same issue for a few hours now. The presigned request is generated like this:

```php
$command = $client->getCommand('CopyObject', [
    'ACL' => 'public-read',
    'Bucket' => 'my-bucket',
    'Key' => ltrim($copyPath, '/'),
    'CopySource' => 'mybucket/'.ltrim($sourcePath, '/'),
]);
return (string) $client->createPresignedRequest($command, '+20 minutes')->getUri();
```

Using the URL gives me a response with no body, but these headers:

```
array(7) {
  ["x-amz-id-2"]=>
  array(1) {
    [0]=>
    string(76) "fZBPxktTWxUPJTt7RgnWn814tsoX/BLbkSefTZeAZ6e1yRWKnrzzLzsXYQNsWruwkfJZJlFTfwc="
  }
  ["x-amz-request-id"]=>
  array(1) {
    [0]=>
    string(16) "E1487D0CFA838589"
  }
  ["Date"]=>
  array(1) {
    [0]=>
    string(29) "Mon, 16 Dec 2019 06:01:42 GMT"
  }
  ["x-amz-server-side-encryption"]=>
  array(1) {
    [0]=>
    string(6) "AES256"
  }
  ["ETag"]=>
  array(1) {
    [0]=>
    string(34) ""d41d8cd98f00b204e9800998ecf8427e""
  }
  ["Content-Length"]=>
  array(1) {
    [0]=>
    string(1) "0"
  }
  ["Server"]=>
  array(1) {
    [0]=>
    string(8) "AmazonS3"
  }
}
```

I have presigned requests for … I've even tried changing the code to use `$client->copyObject()` directly:

```php
$client->copyObject([
    'ACL' => 'public-read',
    'Bucket' => 'my-bucket',
    'Key' => ltrim($copyPath, '/'),
    'CopySource' => 'mybucket/'.ltrim($sourcePath, '/'),
]);
```

I also had AES256 encryption on by default. I tried a bucket without any encryption and got the same result. I also tried with … Trying to think of other things I can try. Everything I've done so far behaves the same: the object gets copied, but it's 0B in the new location. The code that consumes the URLs is just a simple Guzzle request like this: …
Sorry for a bit of a necro. This is the only thing that comes up regarding this issue.
Alright, so I solved the issue after a few more hours of debugging. The issue is that a presigned CopyObject URL on its own is not enough: the headers that were signed into the request also have to be sent with it. So return both the URI and the headers:

```php
$command = $client->getCommand('CopyObject', [
    'ACL' => 'public-read',
    'Bucket' => 'my-bucket',
    'Key' => ltrim($copyPath, '/'),
    'CopySource' => 'mybucket/'.ltrim($sourcePath, '/'),
]);
$request = $client->createPresignedRequest($command, '+20 minutes');

return [
    'uri' => $request->getUri(),
    'headers' => $request->getHeaders(),
];
```

And then your Guzzle code would look like this:

```php
$client->request('PUT', $request['uri'], ['headers' => $request['headers']]);
```

Hope this helps others in the future, since this isn't really well documented and …
Context & steps to reproduce
The v3 aws-php-sdk behavior uses query parameters (AWS Signature Version 4).
If the HTTP PUT carries anything other than a Content-Length header, the request returns a timeout error…
It only works when I send Content-Length: 0, but then it uploads a 0-size object.
Generate the presigned URL:
…
then:
…