Weird behavior during rsync and failure #100
Hello, with hubiC you should try to lower the default segment_size. What happens here is: once the uploaded size reaches 256 MB (the default segment size), SVFS moves the uploaded part to a segment container server-side and creates a manifest referring to it before continuing to transfer data. When this happens, you will see a noticeable pause in the upload: that's the time taken for the data to be copied elsewhere. Unfortunately, such operations are slow at the moment, so a timeout is reached before they complete. Lowering this value at mount time will help these operations complete more quickly.
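As a sketch, a lower segment size can be set at mount time through the segment_size mount option (value in MB). The mount point, the 128 MB value, and the credential placeholders below are illustrative assumptions, not taken from the reporter's setup:

```
# /etc/fstab — illustrative entry; substitute your own tokens and mount point
hubic  /mnt/hubic  svfs  hubic_auth=<auth_token>,hubic_token=<refresh_token>,segment_size=128  0 0
```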
That workaround is not really restrictive (on the customer side).
Still, it is strange that rsync proceeds at dozens of MB/s after the failure (and apparently copies to nowhere). I have encountered this too.
It's no better with segment_size=64 :( Here is my fstab line:
What if you lower it even more? Does the problem disappear at some point, or not at all?
I would also suggest testing another approach: not using segments at all, by setting segment_size to 5120 (and not uploading files larger than 5 GB).
Thank you!
Well, I've been trying segment_size=5120 since this morning, and no failure so far.
Failure after 20 hours of upload.
Were your files all < 5 GB?
Yes! My backup files are split into 1 GB files.
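For reference, keeping every uploaded object under the 5 GB unsegmented limit can be done with `split`. This is a minimal sketch that uses tiny sizes so it runs anywhere; the reporter's real-world equivalent would be `split -b 1G`, and the file names are illustrative:

```shell
# Create a sample archive (3 MB of zeros stands in for a real backup).
dd if=/dev/zero of=backup.tar bs=1M count=3 2>/dev/null

# Split into numbered 1 MB pieces; with real backups you would use -b 1G.
split -b 1M -d backup.tar backup.tar.part-

# The pieces concatenate back to the original byte-for-byte.
cat backup.tar.part-* > restored.tar
cmp backup.tar restored.tar && echo "round-trip OK"
```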
Since the last settings change it's much better: still crashing, but less often.
At this point I'm afraid the issue is more related to the current performance of the hubiC infrastructure than to an issue in SVFS.
How can we get rsync to hard-fail in this situation? The I/O error makes rsync return a non-zero exit code, but it continues to spit data into nowhere until there is nothing left to copy. For example, if rsync is used in a script to copy 1 TB of data and an error occurs after copying 1 GB, it will not stop: it will keep copying the remaining 999 GB (consuming time and I/O resources) to /dev/null before realising something went wrong and eventually trying again. Does this come from rsync (which would need a parameter to stop properly) or from SVFS?
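Until that's solved, one workaround on the calling side is to treat a failed run as failed: check rsync's exit code and restart the whole transfer, instead of letting one run stream on. A minimal sketch, where the paths, the `--timeout` value, and the retry count are assumptions rather than anything from this thread:

```shell
# retry MAX CMD... : run CMD until it succeeds, at most MAX times.
retry() {
    max=$1; shift
    n=1
    while ! "$@"; do
        if [ "$n" -ge "$max" ]; then
            echo "giving up after $n attempts" >&2
            return 1
        fi
        n=$((n + 1))
    done
    return 0
}

# Illustrative invocation: rsync's own --timeout=30 aborts the run when no
# I/O happens for 30 s, so the loop restarts cleanly instead of letting
# rsync keep writing to a dead mount.
# retry 5 rsync -a --timeout=30 /data/backups/ /mnt/hubic/backups/
```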
One way to get around this would be to implement polling for an "object moved" event after the server-side copy is triggered, instead of blocking on a single long request.
The issue actually comes from the underlying Swift library SVFS uses. In particular, SVFS exposes several timeout settings coming from this layer as mount options. It happens that the (very) badly named connection timeout of this library is actually a timeout on the entire request processing (from connecting, to sending the request, to getting a reply, etc.), and the other badly named request timeout is actually only used while reading a request body. So, for instance, passing a short connection timeout bounds the whole upload rather than just the connection phase. I'm working on submitting a PR to rename the timeouts and to separate connection from processing.
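The distinction being described can be illustrated with curl, which keeps the two semantics cleanly separated: `--connect-timeout` bounds only connection establishment, while `--max-time` bounds the entire transfer, which is what the library's "connection timeout" effectively does today. The local file URL is a stand-in so the example runs without a network:

```shell
# Bound connection setup to 15 s, but allow the whole transfer
# (request, server-side processing, response body) up to 300 s.
printf 'hello' > /tmp/timeout-demo.txt
curl --connect-timeout 15 --max-time 300 -s file:///tmp/timeout-demo.txt
```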
Hello, I'm using the latest version of SVFS (0.8.2 i386) on Debian stable.
I mount a hubiC FUSE filesystem like this:
I'm doing an rsync like this:
The command fails. After a timeout error (seen in debug mode), it continues at an implausible upload rate (my bandwidth is capped at 600 KB/s).