
[C++] Error when writing files to S3 larger than 5 GB #24550

Closed
asfimport opened this issue Apr 7, 2020 · 4 comments
When using the arrow-cpp library directly to write to S3, I get the following error when writing a large Arrow table (resulting in a file larger than 5 GB):

../src/arrow/io/interfaces.cc:219: Error ignored when destroying file of type N5arrow2fs12_GLOBAL__N_118ObjectOutputStreamE: IOError: When uploading part for key 'test01.parquet/part-00.parquet' in bucket 'test': AWS Error [code 100]: Unable to parse ExceptionName: EntityTooLarge Message: Your proposed upload exceeds the maximum allowed size with address : 52.219.100.32

I have diagnosed the problem by looking at and modifying the code in s3fs.cc. The code does a multipart upload, using 5 MB chunks for the first 100 parts. After the first 100 parts have been submitted, it is supposed to increase the chunk size to 10 MB (the part upload threshold, part_upload_threshold_). The issue is that the threshold is increased inside DoWrite, and DoWrite can be called multiple times before the current part is uploaded, so the threshold keeps getting increased indefinitely and the last part ends up exceeding the 5 GB per-part upload limit of AWS/S3.

I'm fairly sure this issue, where the last part is much larger than it should be, can occur any time a multipart upload exceeds 100 parts, but the error is only raised when the last part exceeds 5 GB. That is why it is only observed with very large uploads.
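
Roughly, the buggy pattern looks like the following simplified sketch (not the exact Arrow code; current_part_size_ and CommitCurrentPart are placeholders of mine, while part_number_, part_upload_threshold_ and kMinimumPartUpload are the names used in s3fs.cc):

Status DoWrite(const void* data, int64_t nbytes) {
  // Buffer the incoming bytes into the current part.
  current_part_size_ += nbytes;
  if (current_part_size_ >= part_upload_threshold_) {
    // Upload the buffered part and start a new one (this increments part_number_).
    RETURN_NOT_OK(CommitCurrentPart());
  }
  // BUG: this runs on every DoWrite() call. While part_number_ sits at a
  // multiple of 100, each call bumps the threshold again, so the threshold
  // keeps growing and the buffered part grows with it, until the final part
  // exceeds S3's 5 GB per-part limit.
  if (part_number_ % 100 == 0) {
    part_upload_threshold_ += kMinimumPartUpload;
  }
  return Status::OK();
}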

I can confirm that the bug does not happen if I move this:

if (part_number_ % 100 == 0) {
  part_upload_threshold_ += kMinimumPartUpload;
}

and instead do it in a different method, right before the line that executes ++part_number_.
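
In sketch form, the fix I tested looks roughly like this (again approximate; CommitCurrentPart stands in for whatever method actually uploads the buffered part):

Status CommitCurrentPart() {
  // ... issue the UploadPart request for the buffered data ...
  // Bump the threshold once per 100 committed parts, not once per write call.
  if (part_number_ % 100 == 0) {
    part_upload_threshold_ += kMinimumPartUpload;
  }
  ++part_number_;
  current_part_size_ = 0;
  return Status::OK();
}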

 

Reporter: Juan Galvez
Assignee: Antoine Pitrou / @pitrou

PRs and other links: GitHub Pull Request #6864

Note: This issue was originally created as ARROW-8365. Please see the migration documentation for further details.


Antoine Pitrou / @pitrou:
Thanks for the thorough report and diagnosis!


Antoine Pitrou / @pitrou:
[~jjgalvez] I've submitted #6864. Can you check whether the diff looks good to you?


Juan Galvez:
Done. Thanks!


Antoine Pitrou / @pitrou:
Issue resolved by pull request #6864.

asfimport added this to the 0.17.0 milestone on Jan 11, 2023.