HADOOP-17415. Use S3 content-range header to update length of an object during reads #3939
Conversation
💔 -1 overall
This message was automatically generated.
Force-pushed from 6c22ebd to 638d559.
🎊 +1 overall
This message was automatically generated.
🎊 +1 overall
This message was automatically generated.
Force-pushed from 638d559 to 89778a2.
🎊 +1 overall
This message was automatically generated.
dannycjones left a comment:
lgtm
We're closing this stale PR because it has been open for 100 days with no activity. This isn't a judgement on the merit of the PR in any way. It's just a way of keeping the PR queue manageable.
I had completely missed this. Yes, this would be great!
Description of PR
As part of all the openFile work, knowing the full length of an object allows a HEAD request to be skipped. But code that knows only the splits doesn't know the final length of the file.
If the Content-Range header is used, then as soon as a single GET is initiated against an object and that field is returned, we can update the length of the S3A stream to its real/final length.
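As a rough, self-contained sketch of the idea (not the actual S3A input stream code; the class and method names below are hypothetical), the total object length can be recovered from the Content-Range value of a ranged GET response, e.g. `bytes 0-1023/4096`:

```java
// Minimal sketch, assuming a Content-Range value in the RFC 7233 form
// "bytes <first>-<last>/<total>". Not the actual Hadoop S3A implementation.
public final class ContentRangeLength {

  private ContentRangeLength() {
  }

  /**
   * Parse the total object length out of a Content-Range header value.
   * @param contentRange header value, e.g. "bytes 0-1023/4096"; may be null
   * @return the total length, or -1 if it cannot be determined
   */
  public static long totalLengthFrom(String contentRange) {
    if (contentRange == null) {
      return -1L;
    }
    int slash = contentRange.lastIndexOf('/');
    if (slash < 0 || slash == contentRange.length() - 1) {
      return -1L;
    }
    String total = contentRange.substring(slash + 1).trim();
    if ("*".equals(total)) {
      // RFC 7233 permits "*" when the total length is unknown.
      return -1L;
    }
    try {
      return Long.parseLong(total);
    } catch (NumberFormatException e) {
      return -1L;
    }
  }

  public static void main(String[] args) {
    // Example: a ranged GET of the first KiB of a 4096-byte object.
    long len = totalLengthFrom("bytes 0-1023/4096");
    System.out.println("real object length: " + len); // prints 4096
  }
}
```

An unknown total (`*`) or an unparseable header would be treated as "length not known", so existing behaviour is preserved in that case.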
How was this patch tested?
Tested in eu-west-1 with `mvn -Dparallel-tests -DtestsThreadCount=16 clean verify` (AP tests failed from previous SDK upgrade).
For code changes:
LICENSE, LICENSE-binary, NOTICE-binary files?