Uploads fail with broken pipe error #7621
Comments
Hi, I got the same error after approx. 85 MB of transfer (see screen grab). Do you want me to turn on any logging/debugging? Ivan
At least for multipart uploads (triggered for uploads exceeding 100 MB) you can resume the transfer to only upload missing parts.
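For illustration only, here is a minimal sketch of what resuming a multipart upload by skipping already-uploaded parts can look like. It uses the AWS SDK for Java rather than the library Cyberduck actually ships with, and the bucket, key, file path and upload id are placeholder values:

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.ListPartsRequest;
import com.amazonaws.services.s3.model.PartETag;
import com.amazonaws.services.s3.model.PartListing;
import com.amazonaws.services.s3.model.PartSummary;
import com.amazonaws.services.s3.model.UploadPartRequest;

import java.io.File;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ResumeMultipartUpload {

    // Placeholder values for illustration.
    static final String BUCKET = "example-bucket";
    static final String KEY = "backups/archive.bin";
    static final long PART_SIZE = 10L * 1024 * 1024; // 10 MB parts

    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        File file = new File("/path/to/archive.bin");
        String uploadId = args[0]; // id of the interrupted multipart upload

        // Ask S3 which parts already arrived (pagination omitted for brevity),
        // so only the missing parts have to be re-sent after a broken pipe.
        PartListing listing = s3.listParts(new ListPartsRequest(BUCKET, KEY, uploadId));
        Set<Integer> done = new HashSet<>();
        List<PartETag> etags = new ArrayList<>();
        for (PartSummary p : listing.getParts()) {
            done.add(p.getPartNumber());
            etags.add(new PartETag(p.getPartNumber(), p.getETag()));
        }

        int totalParts = (int) ((file.length() + PART_SIZE - 1) / PART_SIZE);
        for (int part = 1; part <= totalParts; part++) {
            if (done.contains(part)) {
                continue; // this part survived the failed transfer, skip it
            }
            long offset = (part - 1) * PART_SIZE;
            long size = Math.min(PART_SIZE, file.length() - offset);
            etags.add(s3.uploadPart(new UploadPartRequest()
                    .withBucketName(BUCKET).withKey(KEY)
                    .withUploadId(uploadId).withPartNumber(part)
                    .withFile(file).withFileOffset(offset).withPartSize(size))
                    .getPartETag());
        }

        // S3 expects the part list in ascending part number order.
        etags.sort(Comparator.comparingInt(PartETag::getPartNumber));
        s3.completeMultipartUpload(
                new CompleteMultipartUploadRequest(BUCKET, KEY, uploadId, etags));
    }
}
```

The key point is the listParts call: S3 remembers which parts of an interrupted multipart upload it already has, so only the missing ranges of the file need to be re-sent.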
Usually logging information will not give much of a clue for broken pipe failures, which indicate a generic networking issue.
Is this specific to the SYD bucket, or do uploads fail regardless of the location of the bucket?
Hi, I am also testing on a 10.7 laptop to see if it is anything to do with the Java library issue between 10.6 and 10.7. All machines are on different AWS keys, just in case they are hitting some kind of login limit (which I doubt). Any other thoughts?
Just finished testing, so it looks like a Cyberduck issue? Any thoughts? P.S. I tried pressing Continue when the error happens, but that is like every few minutes, and the whole transfer is scheduled to take hours.
Replying to [comment:11 ivanhassan]:
Yes, that makes it hard to argue it is not an issue on our end.
Can you try to set the number of maximum transfers to
Hi, I assume this looks like the multipart upload issue, which would explain the S3 disconnection (especially if they interpreted the multipart upload as a DoS?). Shall I try with 2 uploads? Many thanks again, Ivan
Replying to [comment:14 ivanhassan]:
With a slow connection and bad latency we were just congesting the line with 5 open connections per multipart transfer. With the change there is no parallelism for multipart uploads. The "Maximum connections exceeded" message is displayed on our part for transfers waiting for a slot.
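To picture the "waiting for a slot" behaviour, here is a rough sketch of a fixed pool of connection permits shared by all transfers; the class and method names are illustrative, not Cyberduck's actual code:

```java
import java.util.concurrent.Semaphore;

// Illustrative only: a fixed pool of connection slots shared by all transfers.
// A transfer that cannot obtain a permit waits, which is when a
// "maximum connections exceeded" style message would be shown.
public class ConnectionSlots {

    private final Semaphore slots;

    public ConnectionSlots(int maxConnections) {
        this.slots = new Semaphore(maxConnections, true); // fair: first waiting, first served
    }

    public void runTransfer(Runnable transfer) throws InterruptedException {
        slots.acquire();        // blocks while all slots are in use
        try {
            transfer.run();     // one connection per transfer, no parallel part uploads
        } finally {
            slots.release();    // hand the slot to the next waiting transfer
        }
    }
}
```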
So would you expect a 1 Mbps link to be able to support 2 transfers (i.e. 10 connections)? Latency appears to be OK (33 ms). Could we build in an auto-checker so that if the line looks too congested, it reduces the connections? Just a thought... Ivan
Replying to [comment:17 ivanhassan]:
Yes, we plan to do dynamic connection number adjustments based on failures and throughput.
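As a hedged sketch of what such dynamic adjustment could look like, here is a simple AIMD-style heuristic that halves the connection count on failures and probes upward while measured throughput improves; this is illustrative only, not the implementation referred to above:

```java
// Illustrative heuristic: halve the allowed connection count on failures,
// probe upward while measured throughput keeps improving (AIMD-style).
public class AdaptiveConnectionCount {

    private int connections = 2;          // start conservatively
    private final int max = 5;            // the former fixed parallelism per multipart transfer
    private double lastThroughput = 0.0;  // bytes per second over the previous interval

    public synchronized void onFailure() {
        connections = Math.max(1, connections / 2);      // back off hard on broken pipes
    }

    public synchronized void onThroughputSample(double bytesPerSecond) {
        if (bytesPerSecond > lastThroughput && connections < max) {
            connections++;                               // the line has headroom, probe for more
        } else if (bytesPerSecond < 0.8 * lastThroughput) {
            connections = Math.max(1, connections - 1);  // throughput dropped, ease off
        }
        lastThroughput = bytesPerSecond;
    }

    public synchronized int current() {
        return connections;
    }
}
```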
Hi,
Longtime user here, just upgraded to v4.4 for Amazon S3 support. However, when I'm attempting to transfer a 1 GB file to Amazon S3 in Sydney (I'm in NZ) I keep getting broken pipe error messages. Please see screen grab. I have also enabled debug logging.
My other FTP client (3Hub) works OK.
Also, sometimes after pressing 'Continue' in the transfer window, the job does not resume. I have the number of transfers set to 1; increasing it to 2 sets off the next job. I have highlighted the transfer and pressed 'Resume'.
(Also I have seen impossible upload speeds, i.e. 700 Mbps when my link is 1 Mbps.)
My system is a Mac Pro running 10.6.8, connected by DSL at 12 Mbps down and 1 Mbps up. I also have 3Hub and Expandrive installed.
Any help much appreciated.
Ivan
Attachments
Screen shot 2013-11-19 at 09.05.49.png (27.9 KiB)
Screen shot 2013-11-21 at 09.51.04.png (77.5 KiB)
Screen shot 2013-11-21 at 10.10.33.png (32.9 KiB)
Screen shot 2013-11-22 at 12.45.55.png (48.8 KiB)