
[S3] "Failed to copy: multipart upload failed to upload part" using Cloudflare provider #6193

Closed
lpellegr opened this issue May 23, 2022 · 13 comments

Comments

@lpellegr

Using the latest rclone beta from 2022/05/23, file uploads using the Cloudflare R2 provider are failing. It seems to happen with medium-sized files (not all file uploads).

Here is the error:

Transferring:
 *                                     a01.1: 13% /446.693Mi, 14.998Mi/s, 25

Failed to copy: multipart upload failed to upload part: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your secret access key and signing method.
status code: 403, request id: , host id:

and the rclone configuration used:

[r2]
type = s3
provider = Cloudflare
access_key_id = XXX
secret_access_key = YYY
endpoint = https://zzz.r2.cloudflarestorage.com
region = auto

Downgrading to an older beta version solves the issue. Here is a working beta version:
https://beta.rclone.org/branch/fix-5422-s3-putobject/v1.59.0-beta.6122.7a0cdbc45.fix-5422-s3-putobject/rclone-v1.59.0-beta.6122.7a0cdbc45.fix-5422-s3-putobject-linux-amd64.deb

It seems the error started to happen between May 21 and May 22.

@lpellegr lpellegr changed the title Failed to copy to R2: multipart upload failed to upload part [S3] "Failed to copy: multipart upload failed to upload part" using Cloudflare provider May 23, 2022
@vlovich

vlovich commented May 23, 2022

My guess would be that some code path degrades to using presigned URLs.

@ncw

ncw commented May 23, 2022

Can you post

  • a log with -vv --dump headers of the problem happening (something like the run sketched below)
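
For reference, a run along these lines should capture what I need (file, remote, and bucket names are placeholders based on the config above):

  rclone copy /path/to/a01.1 r2:bucket/path -vv --dump headers --log-file rclone.log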

> Downgrading to an older beta version solves the issue. Here is a working beta version:

That is surprising as I don't think there are any relevant changes... You said the problem is intermittent? Are you sure it never happens with the old version?

> It seems to happen with medium-sized files (not all file uploads).

What size are the files? Are they bigger than --s3-upload-cutoff? If so, rclone is using multipart upload.
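
If it helps narrow things down, one test (assuming the files are under the 5 GiB single-request limit) is to raise the cutoff so the whole file goes up in a single request, bypassing multipart entirely:

  rclone copy /path/to/a01.1 r2:bucket/path --s3-upload-cutoff 5G -vv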

I suspect this is something to do with retries on errors not being signed properly, or the error being mis-reported anyway.

@lpellegr

The issue no longer happens with the latest beta. I am closing the issue for now and will reopen with detailed logs or create a new one if a problem happens again. Thanks for your help.

@vedantroy

vedantroy commented Sep 28, 2022

@ncw I am running into this issue with rclone version:

rclone v1.58.0-beta.5838.074234119
- os/version: ubuntu 22.04 (64 bit)
- os/kernel: 5.15.0-47-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.17.2
- go/linking: static
- go/tags: none

Will post a log shortly.


@ncw

ncw commented Oct 3, 2022

@vedantroy we've been exploring this issue in https://forum.rclone.org/t/uploading-large-files-to-r2-250-mib-with-rclone-causes-signature-errors/33267

It seems R2 doesn't like that much concurrency, and --s3-upload-concurrency 2 has been fixing the problem.

Can you try that?
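
For example, something like this (bucket and path names are placeholders):

  rclone copy /path/to/file r2:bucket/path --s3-upload-concurrency 2 -P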

@dalbitresb12

@ncw Just had the same problem on the latest stable version of rclone:

rclone v1.59.2
- os/version: ubuntu 20.04 (64 bit)
- os/kernel: 5.15.0-1016-gcp (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.18.6
- go/linking: static
- go/tags: none

I managed to upload the file (659MB) with the switch you mentioned, --s3-upload-concurrency 2, but it did decrease upload speed (got a maximum of ~5.5MiB/s from a GCP instance). If you need any logs or help testing, let me know.

@ncw

ncw commented Oct 11, 2022

@dalbitresb12 Rclone could decrease the default concurrency for R2. Worth changing?

@dalbitresb12

@ncw Is there any way we could keep a bit more concurrency so that upload speeds aren't affected as much?

Changing the default sounds fine to me; I'm just asking whether we can keep upload speeds up.

@ncw

ncw commented Oct 11, 2022

It's really up to Cloudflare to fix their backend...

But experiment with --s3-upload-concurrency 3 (4 is the default). Increasing --s3-chunk-size is also likely to improve performance, at the cost of using more memory.
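
For example (these values are only a starting point; memory use grows roughly as --transfers × --s3-upload-concurrency × --s3-chunk-size):

  rclone copy /path/to/file r2:bucket/path --s3-upload-concurrency 3 --s3-chunk-size 64M -P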

@jpluscplusm

Here's a possibly-relevant R2-related data point, leading me to suspect Cloudflare may have increased customers' permitted concurrency since this thread began.

I've just uploaded 400 GB from a very well connected machine (total time: just over an hour) using a 1.58.1 pre-release rclone. With roughly 82,000 files, ranging from multi-GB sizes downwards, I used --transfers 10 because I was impatient, and hit this issue on literally one file. I didn't use any workarounds: I didn't configure --s3-upload-concurrency or --s3-chunk-size, and I had to use type = s3, provider = Other (see the config sketch below) because the rclone version I was using hadn't learned about the Cloudflare provider yet.
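
For reference, the provider = Other config I mean has roughly this shape (keys and endpoint redacted, mirroring the config at the top of the thread):

[r2]
type = s3
provider = Other
access_key_id = XXX
secret_access_key = YYY
endpoint = https://zzz.r2.cloudflarestorage.com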

@ncw

ncw commented Nov 3, 2022

@jpluscplusm good data point, thanks.

@addshore

addshore commented Aug 24, 2023

I also experienced something like this in the past few days while using --transfers=32 and --s3-upload-concurrency=32 to upload 1GB chunks of a 1.2TB file to R2.
Using 2 or 4 seemed to work fine, though (with 1GB or 2GB chunks).

The transfer with 32 was achieving 120MB/s for me.
A transfer with 4 was achieving ~80MB/s.
So I'll try 6 on my next upload, which should fill my pipe anyway and hopefully not error.
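
Something like this, in case it's useful to anyone following along (bucket and path are placeholders, and I'm assuming the 6 applies to both flags):

  rclone copy ./chunks r2:bucket/path --transfers 6 --s3-upload-concurrency 6 -P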
