Getting error about part sizes with Cloudflare R2 bucket #1137

Open

thedentist8 opened this issue May 22, 2024 · 6 comments

@thedentist8

I use tusd with a Cloudflare R2 bucket as storage. When an upload is interrupted and resumed, I get the error "All non-trailing parts must have the same length". It looks like R2 (unlike standard S3) requires all non-trailing parts of a multipart upload to be the same size.

My guess is that when the upload is resumed, calcOptimalPartSize calculates a different size for the part. Any suggestions on how to solve this and ensure that the part size is always the same?

@Acconut
Member

Acconut commented Jun 1, 2024

Yes, tusd may emit different part sizes, as this is not prohibited by AWS S3 and helps optimize the upload to S3. For example, an 80 MB PATCH request with a configured optimal part size of 50 MB yields a 50 MB part and a 30 MB part uploaded to S3.

If you want tusd to emit parts to S3 with equal sizes (with only the last part having a different size), you can configure the minimum part size to be the same as the optimal part size. I would recommend a value of 20-50 MB to start with. This can be configured either in the S3Store when you are using tusd programmatically or via the CLI flags.
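For reference, a minimal sketch of the programmatic route, assuming tusd v2's s3store package and the AWS SDK for Go v2 (the bucket name and R2 endpoint below are placeholders, not values from this thread):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/tus/tusd/v2/pkg/s3store"
)

func main() {
	// Credentials come from the usual AWS configuration sources
	// (environment variables, shared config file, etc.).
	cfg, err := config.LoadDefaultConfig(context.Background())
	if err != nil {
		log.Fatal(err)
	}

	client := s3.NewFromConfig(cfg, func(o *s3.Options) {
		// Point the client at the R2 endpoint (account ID is a placeholder).
		o.BaseEndpoint = aws.String("https://<account-id>.r2.cloudflarestorage.com")
	})

	store := s3store.New("my-bucket", client)

	// Force equal-sized parts: pin the minimum part size to the preferred
	// ("optimal") part size, e.g. 50 MB, so every non-trailing part tusd
	// uploads to R2 has exactly this length.
	store.PreferredPartSize = 50 * 1024 * 1024
	store.MinPartSize = store.PreferredPartSize

	// ... compose the store into a tusd handler as usual.
}
```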

Let me know if this helps, then we can add this to the documentation :)

@thedentist8
Author

Makes sense, thanks!

Related question: if I set both to 20 MB, and let's say the upload gets interrupted (due to a bad network) after the first 15 MB, will those 15 MB get uploaded as a ".part" file so they can be reused after resuming, or will they be dropped completely, with the client starting from 0?

@Acconut
Member

Acconut commented Jun 2, 2024

> will those 15 MB get uploaded as a ".part" file so they can be reused after resuming

Yes, this partial part will be reused when the upload is resumed.

@wiemann
Contributor

wiemann commented Sep 27, 2024

There is no minimum part size option for S3 in the CLI. Could you elaborate on which options to set in order to get this to work?
I tried

-s3-part-size 52428800 \
-s3-max-buffered-parts 52428800 \

But the same error still occurs with Cloudflare R2.

status=500 body="ERR_INTERNAL_SERVER_ERROR: operation error S3: CompleteMultipartUpload, https response error StatusCode: 400, RequestID: , HostID: , api error InvalidPart: All non-trailing parts must have the same length.\n"

@Acconut
Member

Acconut commented Sep 27, 2024

You are correct, there is no option to set the minimum S3 part size via the CLI. I was wrong in my earlier comment, sorry about that.

Would you be interested in opening a PR for adding options to control the min/max S3 part size?

@Acconut
Member

Acconut commented Oct 22, 2024

The CLI flag was added in #1206 (thanks, @wiemann). I'll keep this issue open so we don't forget to update the documentation.
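Assuming the flag added in #1206 follows the existing -s3-* naming as -s3-min-part-size (check tusd -h for the exact spelling), the equal-part-size setup for R2 would look something like this, with the bucket name and account ID as placeholders:

```sh
tusd -s3-bucket my-bucket \
     -s3-endpoint https://<account-id>.r2.cloudflarestorage.com \
     -s3-part-size 52428800 \
     -s3-min-part-size 52428800
```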
