
[ENH] Retry and resume feature #543

Open
user-na opened this issue Mar 6, 2021 · 2 comments
@user-na
Contributor

user-na commented Mar 6, 2021

Is your feature request related to a problem? Please describe.

I'm trying to back up a very big network folder with lots of files and subfolders. I was not able to get it to work due to sporadically occurring errors which are probably network related (mostly GetFileAttributes: An unexpected network error occurred). As I was not able to get it running in one batch, I am now trying to back up the sub-subfolders individually to increase the chance of not hitting this error.

Describe the solution you'd like

File access errors get caught and rdiff-backup retries the operation. The number of retries can be configured. The feature request #179 would also help very much to improve the current situation.
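A retry wrapper of the kind requested could look roughly like this. This is a minimal Python sketch, not rdiff-backup code: the function name `with_retries` and the use of `OSError` as the class of "file access errors" are assumptions for illustration.

```python
import time


def with_retries(operation, max_retries=3, delay=5):
    """Call operation(), retrying on OSError up to max_retries times.

    Hypothetical helper: max_retries and delay stand in for the
    configurable retry count the feature request asks for.
    """
    for attempt in range(max_retries + 1):
        try:
            return operation()
        except OSError:
            # Out of retries: let the error propagate as before.
            if attempt == max_retries:
                raise
            # Otherwise wait a bit and try the same operation again.
            time.sleep(delay)
```

For example, a sporadically failing attribute lookup wrapped as `with_retries(lambda: os.stat(path), max_retries=5)` would be attempted up to six times before the error surfaces.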

Describe alternatives you've considered

Running rdiff-backup on the sub-subfolders. This is not very elegant and is very error prone.

@ericzolf
Member

ericzolf commented Mar 6, 2021

A valid but difficult request, so don't hold your breath. We're currently in a lengthy code restructuring phase, and adding such new features will have to wait until we're finished.

@rstarkov
Contributor

I have my entire script in a retry loop for similar reasons. In my case the cause of failure is usually the network between the source and the target, which means the connection might need to be re-established (the sync takes 12+ hours due to size, and the network is not reliable).

It would be amazing to have rdiff-backup re-establish the connection and carry on. It would be extra amazing if it could pick up the syncing of a large file where it left off as some of my files are 40+GB VM images - but this sounds complicated.

Just thought I'd mention both of these points in case someone picks this task up in future.
