Failure "proposed upload exceeds the maximum allowed size" #10612
It may just be network hiccups, but Cyberduck v6.9.0 (29768) seems to have a lot of trouble uploading large files to my Wasabi cloud account.
I have been using both Cyberduck and the Duck CLI, and have seen the issue with both.
For example, this morning Duck failed to upload a 290 GB file after 3 hours of trying. I have a gigabit fiber-optic connection, so speed is not the issue.
I tried to enable debugging by following the directions at https://trac.cyberduck.io/wiki/help/en/faq#Enabledebuglogging.
When I run a Duck command I get:
C:\Users\Mark>duck -l wasabisys://T5ZHIOV9VGMNBAKU1HQ4@Blaise-Archive/Magni/
So it would seem I can't get Duck to log.
While the file is copying, there are 2 entries in the Cyberduck listing in gray text: 1 in the target folder and 1 in the bucket root (see the attached screenshot). These entries do not display a size. Once an upload completes, the gray entry in the bucket root is gone, and the entry in the target folder turns black and displays a size.
When an upload fails, the two gray entries remain. If I attempt the upload 3 times and have 3 failures, I will have 6 gray entries (3 in the root, 3 in the target folder), all with the same name.
I guess the gray entries are temp files. Regrettably, when a failed upload is retried, Duck does not "pick up where it left off" - it starts from the beginning again.
So I have to delete the grayed root entry, which also removes the entry in the target folder. The entry in the target folder cannot be deleted directly.
Unfortunately, Wasabi cloud storage charges accrue for 3 months for any file uploaded, even if it is deleted prior to 3 months. This means that I will be paying for every failed file upload for 3 months of storage - and then also paying for the actual file once successfully uploaded.
I would love to figure out why these uploads fail and correct the issue, so I can stop paying for 3 months of storage for each failed upload file fragment.
I would also like to get logging working for Duck.
Right now another copy attempt is being made for this morning's failure. After that, I'll try a large upload from the Cyberduck GUI to see whether it logs.
Two things ...
(1) Last night a large file upload to the Wasabi cloud failed. I was working with Wasabi support, and I had turned on Wasabi account logging. Here's what they said:
"One of the engineering team members working on this case and I looked into the logs and found that when the upload is almost complete, Cyberduck sends a PUT request of the whole file, which in turn is rejected by our API because it is above the threshold of the multipart upload for a single part. The error generated is given below:
(2) Note that I am currently using a Wasabi trial account to test the feasibility of using their service and Cyberduck for our backup purposes. The trial has a 1 TB storage limit.
This morning I attempted to upload a 122 GiB file via Duck with the -v switch. There was more than 300 GiB of space available. The upload failed. Here is the command line used:
I have attached a ZIP of the log to which output was redirected: Copy-122gb-File.zip
If I understand the Wasabi log correctly, this is a problem Cyberduck has uploading large files (to Wasabi); it is not about available space.
Please let me know what you determine.
Also, why can I not get Duck logging working?
I could be wrong, but my admittedly limited understanding is that the file is uploaded, part by part, but in the end Cyberduck executes a PUT of the entire file.
From the Wasabi log, it seems that the maximum size of a "part" is 5 GiB.
I can say that I have successfully uploaded files as large as 35 GiB, perhaps even 90 GiB. But these files are also larger than the 5 GiB maximum part size, so it seems that Cyberduck does not always PUT the entire file - otherwise an upload of any file larger than 5 GiB would fail.
When and why Cyberduck PUTs the entire file at the end of the upload, I cannot say.
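My understanding of the behavior can be sketched roughly as follows. This is a minimal Python sketch of the suspected client logic, not Cyberduck's actual code; the 10 MiB default part size, the 10,000-part limit, and the 5 GiB single-PUT cap are all assumptions for illustration:

```python
# Sketch of the suspected upload planning logic (assumptions: 10 MiB
# default part size, 10,000-part limit, 5 GiB single-PUT cap).
PART_SIZE = 10 * 1024 * 1024          # 10 MiB default part size (assumed)
MAX_PARTS = 10_000                    # S3 multipart part-number limit
SINGLE_PUT_LIMIT = 5 * 1024 ** 3      # 5 GiB cap on a single whole-file PUT

def plan_upload(file_size: int) -> str:
    """Return 'multipart', 'single-put', or 'fails' for a given file size."""
    if file_size <= PART_SIZE * MAX_PARTS:
        return "multipart"            # fits within 10,000 parts
    # Too many parts at the default size: fall back to one whole-file PUT.
    if file_size <= SINGLE_PUT_LIMIT:
        return "single-put"
    return "fails"                    # the whole-file PUT is rejected as too large

# A 122 GiB file exceeds 10 MiB x 10,000 (about 97.7 GiB), so the
# fallback single PUT is attempted and then rejected by the server.
print(plan_upload(122 * 1024 ** 3))   # fails
```

Under these assumptions, any file over roughly 97.7 GiB would take the fallback path and be rejected, which would match what the Wasabi log showed.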
FYI, Wasabi tech support suggested I try another 3rd party S3 utility.
I downloaded and installed the AWS CLI, and the same file that Cyberduck could not upload (117.1 GiB) was uploaded by AWS CLI with no problems.
Oh, I also installed the 6.9.3 Cyberduck update. The problem persists.
Any insight as to why Cyberduck is having this issue?
Hi Mark & dkocher,
I am encountering the same issue uploading a 109.58 GB file to an AWS S3 bucket configured with a Glacier lifecycle (transition after 0 days), using the macOS version of Cyberduck.
I had the same result as Mark observed: "I could be wrong, but my admittedly limited understanding is that the file is uploaded, part by part, but in the end Cyberduck executes a PUT of the entire file."
Here is the last PUT command executed by Cyberduck before the error message is displayed:
Error message displayed by Cyberduck is the same: Your proposed upload exceeds the maximum allowed size. Please contact your web hosting service provider for assistance.
I would be grateful if the Cyberduck team could look into this problem.
Thanks very much.
Replying to [comment:10 MarkBlaise]:
For what it's worth, I successfully uploaded a 76.3 GiB (81,969,331,200 bytes) file this morning.
DonaldDuckie's failed file was 109,582,612,915 bytes = 102.1 GiB.
My last failed file was 118.6 GiB (127,324,166,144 bytes).
Perhaps something happens when the file size crosses a certain threshold, maybe >= 100 GiB?
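The arithmetic is consistent with that hunch. Assuming a 10 MiB default part size and S3's 10,000-part limit (both assumptions, not confirmed here), the implied cutoff lands just under 100 GiB:

```python
# Quick check of the threshold hypothesis, assuming a 10 MiB default
# part size and S3's 10,000-part limit.
part_size = 10 * 1024 * 1024        # 10 MiB, in bytes
max_parts = 10_000
threshold = part_size * max_parts   # largest file the multipart path covers

print(threshold)                    # 104857600000 bytes
print(threshold / 1024 ** 3)        # 97.65625 GiB, just under 100 GiB
```

The successful 76.3 GiB file sits below this roughly 97.7 GiB threshold, while the failed 102.1 GiB and 118.6 GiB files sit above it.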
This must be related to our current default settings, which use a maximum number of parts of 10,000.
You can manually increase the size of the segments uploaded using the hidden default s3.upload.multipart.size.
Looks like we have an off-by-one error not honouring the maximum allowed number of parts of 10,000.
Presumably this request is failing and is followed by the fallback to upload the file in a single PUT request.
Yesterday (21-Feb), dkocher said that I should increase the "part size" from 10 MB to 104857600 (100 MiB). With 10,000 parts, the max object size would then be about 976 GiB - plenty for my anticipated use.
This morning there was additional information about an "off by 1" error regarding the maximum number of parts, and the fallback of PUT-ting the entire object fails.
So should I still try increasing the part size? Or will there be a fixed Cyberduck build soon?
If I should change the part size .... I'm not sure what to do with
defaults write ch.sudo.cyberduck s3.upload.multipart.size 104857600
Sorry for being obtuse, but please explain.
Does that mean to add a line to %AppData%\Cyberduck\default.properties? (running on Windows 8.1)
I note that this string has 3 tokens, without an equals symbol (=):
Is that right?
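For what it's worth, the macOS `defaults write ch.sudo.cyberduck <key> <value>` syntax maps to a `key=value` line on Windows. Assuming Cyberduck's standard Java-style properties format, the equivalent entry in %AppData%\Cyberduck\default.properties would be a single line:

```
s3.upload.multipart.size=104857600
```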
FYI, I downloaded the snapshot build (30103).
I was able to successfully upload a 116 GiB file in the GUI client yesterday. Thanks for fixing that issue! ;-)
Note that the Duck CLI failed to upload a 142 GiB file this morning. Is this expected? I understand that the CLI may not have been updated, but I was under the impression that Duck is a CLI to the Cyberduck program - calling into the same libraries used by the GUI.
I looked for, but was not able to find, an update for the CLI; I'm using v6.9.3 (30061).
Please set my expectations.
Replying to [comment:22 dkocher]:
Sorry to be obtuse, but I don't see how to install a Duck CLI snapshot.
The "Cyberduck CLI" link in your post brings me to the Duck installation page. On that page, the "other installation options" link brings me to the Windows Installation section of the Cyberduck Wiki page.
There are 2 links there: one brings me to the chocolatey installation page on chocolatey.org, and the other link (MSI Download) brings me to the dist.duck.sh page. The latest duck release available there is 6.9.3 (30061), built on 15 February. That is before this issue was reported, so that can't be the duck snapshot.
Please explain ...
Replying to [comment:23 MarkBlaise]:
Apologies for the confusion. While the documentation for obtaining snapshot builds should be clear for macOS and Linux, we are currently missing an option for Windows. You can obtain the build from
It looks like Cyberduck release v6.9.4 fixed this issue. Thank you!
The latest Duck CLI release version available appears to be 6.9.3, which does not have this issue fixed. There is a Duck snapshot build that has the fix, which I am currently using.
Any idea when a release version of Duck CLI with this fix will be available?
Replying to [comment:26 MarkBlaise]:
Version 6.9.4 for Windows is available on Chocolatey or from https://dist.duck.sh/.
Thanks for the response ... but this is what I see on the Duck distribution list:
The latest Duck CLI release version available appears to be 6.9.3 (30061) from 15 February. What am I missing?
Replying to [comment:28 MarkBlaise]:
Looks like you are seeing a cached outdated copy of the page. There should be