Conversation
Add AWS S3 signature V4; please review.
return Optional.absent();
}

@SuppressWarnings("unchecked")
Seems like a lot of gratuitous reindentation.
@zhaojin0 Please address Checkstyle violations and remove gratuitous reindentation. I will look at this some more afterwards.
Could you tag this commit with JCLOUDS-480 so that JIRA will note it?
@zhaojin0 Any updates on this pull request? I would like to include it in the upcoming 1.9.0 release.
I'm sorry for my laziness...

public class AWSS3BlobStoreContextModule extends S3BlobStoreContextModule {
   //...
   @Override
   protected void bindRequestSigner() {
      bind(BlobRequestSigner.class).to(AWSS3BlobRequestSignerV4.class);
   }
}
@zhaojin0 I do not understand how this is supposed to work -- you should bind the V4 signer somewhere, like:

bind(BlobRequestSigner.class).to(AWSS3BlobRequestSignerV4.class);

I also do not understand the overall strategy here, but I believe that jclouds should use the new V4 signer for all AWS calls and regions. The generic S3 provider can continue to use the V2 code. I played around with this a bit and do not see any new live tests. I added one to

public void testV4Region() throws Exception {
   String bucketName = getScratchContainerName() + "-v4-only";
   getApi().putBucketInRegion(Region.EU_CENTRAL_1, bucketName);
   try {
      assertEquals(Region.EU_CENTRAL_1, getApi().getBucketLocation(bucketName));
   } finally {
      destroyContainer(bucketName);
   }
}

However I see a variety of signing errors of the form:
Finally, there are many spurious changes and Checkstyle violations in this pull request which make it hard to read. You must correct these before I can continue the review. As for the signer tests, the general ones in
It's used for signing temporary access...
}
String contentSha256 = base16().lowerCase().encode(hash(payloadStream));
try {
   payloadStream.reset();
What happens when the payload is not repeatable?
The payload stream is used to calculate the content hash. If it cannot be repeated, the payload cannot be appended to the HTTP request body.

Any plans to continue with this pull request?
AWS Signature V4 uses a SHA-256 content hash. If the payload cannot be reset, AWS supports chunked uploads instead.
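The hash-then-reset pattern under discussion (`contentSha256 = base16().lowerCase().encode(hash(payloadStream))` followed by `payloadStream.reset()`) can be sketched in plain JDK code. This is a hedged illustration, not the jclouds implementation: `ContentHashExample` and `sha256Hex` are hypothetical names, and the sketch uses `MessageDigest` rather than Guava's hashing utilities. It shows why the stream must support mark/reset -- the same bytes are consumed once for the hash and again as the request body.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ContentHashExample {
   // Hypothetical helper: computes the hex-encoded SHA-256 of the stream,
   // then resets it so the same bytes can be sent as the request body.
   // Throws IOException from reset() when the stream is not repeatable,
   // which is exactly the failure mode discussed above.
   static String sha256Hex(InputStream payload) throws IOException, NoSuchAlgorithmException {
      payload.mark(Integer.MAX_VALUE);
      MessageDigest digest = MessageDigest.getInstance("SHA-256");
      byte[] buffer = new byte[8192];
      int read;
      while ((read = payload.read(buffer)) != -1) {
         digest.update(buffer, 0, read);
      }
      payload.reset(); // rewind so the payload can be re-read for the HTTP body
      StringBuilder hex = new StringBuilder();
      for (byte b : digest.digest()) {
         hex.append(String.format("%02x", b));
      }
      return hex.toString();
   }

   public static void main(String[] args) throws Exception {
      // ByteArrayInputStream supports mark/reset, so hashing does not consume it.
      InputStream payload = new ByteArrayInputStream("hello".getBytes("UTF-8"));
      System.out.println(sha256Hex(payload));
   }
}
```

For a non-repeatable stream (e.g. a raw socket stream), `reset()` throws, which is why the chunked-upload path mentioned above exists: it signs each chunk as it is read instead of pre-hashing the whole payload.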
@zhaojin0 I am testing this and see many errors of the form:
when deleting items in the container between runs. Any suggestions on this? Also do all the integration tests pass for you? I see a few errors:
import java.io.IOException;

import static org.testng.Assert.assertEquals;
Spurious code movement?
It seems that region eu-central-1 doesn't support AWS Signature V2.

On 2015/4/18 2:46, Andrew Gaul wrote:

Zhao Jin, Beijing Youchuang Liandong Technology Co., Ltd.
Hi, I implemented AWS S3 Signature V4 chunked upload; it is used when putting an object whose payload cannot be repeated.
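For readers unfamiliar with the chunked-upload path, the AWS SigV4 streaming documentation (linked later in this thread) frames each chunk on the wire as `<hex-size>;chunk-signature=<sig>\r\n<data>\r\n`. The sketch below illustrates only that framing; `ChunkFraming` is a hypothetical name, and the signature argument is a placeholder -- the real chunk signature is an HMAC-SHA256 chain over the previous chunk's signature and the current chunk's SHA-256 hash, which this sketch does not compute.

```java
import java.nio.charset.StandardCharsets;

public class ChunkFraming {
   // Sketch of the aws-chunked wire framing from the SigV4 streaming docs:
   // "<hex-size>;chunk-signature=<sig>\r\n<data>\r\n".
   // The chunkSignature parameter is a placeholder for the chained
   // HMAC-SHA256 chunk signature, which is not computed here.
   static String frameChunk(byte[] data, String chunkSignature) {
      return Integer.toHexString(data.length)
            + ";chunk-signature=" + chunkSignature
            + "\r\n"
            + new String(data, StandardCharsets.ISO_8859_1)
            + "\r\n";
   }

   public static void main(String[] args) {
      // A 5-byte chunk is framed with its size in hex ("5") and a dummy signature.
      String framed = frameChunk("hello".getBytes(StandardCharsets.ISO_8859_1), "deadbeef");
      System.out.println(framed);
   }
}
```

A zero-length final chunk (size `0`) terminates the stream, which is what lets the upload proceed without knowing the payload size up front.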
For our project, we took the PR, applied some minor fixes (zhaojin0#1), and backported it to stable 1.8.1. With our integration tests, it works very well. Tested it against Frankfurt, which supports V4 only.
public InputStream nextElement() {
   int bytesRead;
   try {
      bytesRead = inputStream.read(buffer, 0, buffer.length);
bytesRead can be less than buffer.length!
This caused problems during our tests when the underlying inputStream only delivers 8k blocks: the pre-calculated Content-Length (assuming 64k chunks) was less than the content length actually sent (using the 8k chunks obtained from the InputStream).
That leads to HttpUrlConnection's "too many bytes written" exception later on. ByteArrayInputStream and FileInputStream normally return as many bytes as requested, so our integration tests didn't fail at first. When we live-tested it, it failed because the inputStream we got from Tomcat (from an upload) uses 8k blocks :(.
We fixed that on our side with a wrapping InputStream that always returns the requested length when read(buf, off, len) is invoked, performing n additional reads from the underlying inputStream if necessary.
This way, the expected 64k blocks were sent and the pre-calculated content length matched the actual content length.
Maybe a loop should be added here to read exactly buffer.length bytes from the inputStream, matching the chunkedBlockSize, as we did in our InputStream wrapper.
Can also call ByteStreams.readFully(InputStream, byte[]).
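The fill loop proposed above can be sketched without Guava using only the JDK. This is an illustration, not the fix that was merged: `FillRead` and its `readFully` are hypothetical names, and unlike Guava's `ByteStreams.readFully` (which throws `EOFException` on a short read), this variant returns the byte count so the last, possibly short, chunk can still be framed.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class FillRead {
   // Hypothetical helper: keeps reading until the buffer is full or EOF,
   // returning the number of bytes actually read. A single read() may legally
   // return fewer bytes (e.g. 8k from a Tomcat upload stream) even when more
   // data is pending, which is the bug described above.
   static int readFully(InputStream in, byte[] buffer) throws IOException {
      int total = 0;
      while (total < buffer.length) {
         int read = in.read(buffer, total, buffer.length - total);
         if (read == -1) {
            break; // end of stream; the final chunk may be short
         }
         total += read;
      }
      return total;
   }

   public static void main(String[] args) throws Exception {
      // Simulate a stream that hands out at most 7 bytes per read() call.
      InputStream dribble = new ByteArrayInputStream(new byte[100]) {
         @Override
         public synchronized int read(byte[] b, int off, int len) {
            return super.read(b, off, Math.min(len, 7));
         }
      };
      byte[] chunk = new byte[64];
      // The loop fills all 64 bytes despite the 7-byte reads.
      System.out.println(readFully(dribble, chunk));
   }
}
```

With this loop in `nextElement()`, every chunk except the last has exactly `buffer.length` bytes, so the pre-calculated content length matches what is actually sent.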
I thought about the content-length again. The old V2 implementation wasn't able to handle unknown content lengths, and neither can the V4 one. But I think it wouldn't be difficult to add: chunking is already implemented nicely, and instead of sending the content length of the complete payload, one could switch to transfer encoding instead, as "Note"d here: http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html That would mean Transfer-Encoding: chunked should do the trick, omitting Content-Length completely. That would be really nice, since in our project we aren't always able to get the complete

What do you guys think?
Try Multipart Upload.
The last time I tried Multipart Upload with jclouds aws-s3, it also forced me to specify the content length in advance. See my post here: https://www.mail-archive.com/user@jclouds.apache.org/msg01562.html. But that's another story. What is the state of this PR? What is missing before it can be merged? I think the build failure was due to a Checkstyle error.
Any reason for this not to have been merged?
AFAIK this is reported against 1.8.x. I've applied this PR to a 1.8.1 fork without a problem.
Hey guys, any progress on this? I hate that we have to use a patched version in production, but we have already used it for some time and have load tested it as well. 👍
@andrewgaul Ping? Is this something you'd be able to take a look at?
@demobox I will try to make a run at this early next week when I work on some related V4 issues. Sorry for the delay!
AWS S3 signature V4 implementation