
duplicacy copy to b2 failing #460

Open · daghub opened this issue Jul 12, 2018 · 5 comments · 3 participants
daghub commented Jul 12, 2018

I am experimenting with duplicacy for my main backup but am running into some issues. It was working perfectly, but now I can no longer copy from my local backup to B2.

I have logged it with a single thread for visibility, and there seem to be large gaps of around 7 minutes in the log. I wonder what duplicacy is doing during those gaps. My bandwidth seems fine, at 100/100 Mbit/s.
log.txt

Linux Mint 17.3 Rosa
duplicacy -d -log -profile localhost:2222 copy -to b2

(log attached)

daghub commented Jul 12, 2018

Eventually the copy fails with

Failed to upload the chunk 97893b700dc8514e2394e794e62bcf47808170157f1c49df34bd411f63a6e231: Maximum backoff reached

This has failed during my nightly run several times now. Previously I uploaded all 80,000+ chunks without a hitch. The only change I made was to widen the file selection (adding a symlink), which created a bunch of new chunks. Can the logs tell whether this is a B2 issue?

daghub commented Jul 12, 2018

duplicacy -version

VERSION:
2.1.0

gilbertchen (Owner) commented Jul 12, 2018

2018-07-12 09:36:13.180 DEBUG BACKBLAZE_UPLOAD URL request 'https://pod-000-1059-09.backblaze.com/b2api/v1/b2_upload_file/2022aa088d9b55ff611e021a/c000_v0001059_t0011' returned an error: Post https://pod-000-1059-09.backblaze.com/b2api/v1/b2_upload_file/2022aa088d9b55ff611e021a/c000_v0001059_t0011: EOF

This is either a network issue or a B2 issue. If it persists, the best option may be to increase the number of tries from 8 to 12:

for i := 0; i < 8; i++ {
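
For illustration only, here is a minimal sketch of a bounded retry loop with exponential backoff of the kind referenced above; `uploadWithRetries`, `uploadChunk`, and the backoff values are assumptions for this example, not duplicacy's actual code. Raising `maxTries` corresponds to changing the 8 above to 12.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// uploadWithRetries calls uploadChunk up to maxTries times, doubling the
// wait between failed attempts, and gives up with a "maximum backoff" error.
func uploadWithRetries(uploadChunk func() error, maxTries int) error {
	backoff := time.Second
	for i := 0; i < maxTries; i++ {
		err := uploadChunk()
		if err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; retrying in %v\n", i+1, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	return errors.New("Maximum backoff reached")
}

func main() {
	attempts := 0
	// Simulated uploader that fails twice with EOF and then succeeds.
	err := uploadWithRetries(func() error {
		attempts++
		if attempts < 3 {
			return errors.New("EOF")
		}
		return nil
	}, 12)
	fmt.Println("result:", err)
}
```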

daghub commented Jul 12, 2018

Thank you for the quick response! This does look like a transient B2 or networking issue; for a while the copy was running much better, chewing away at a decent speed. Then, after an hour, I started getting the EOF error on the POST again, and the copy gave up because of the backoff limit.

I have this set up as a cron job, running once every 24 hours. It would be great to either allow more retries or, even better, use exponential backoff up to a ceiling (for example 120 s) and then keep retrying periodically forever. That way the upload would resume once the network/cloud-provider issue is resolved, and the backend would not be overwhelmed if the error was caused by throttling or overload.

(I'm not sure how this is implemented at the moment.)
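
A minimal sketch of the capped-backoff idea described above, assuming a hypothetical `uploadChunk` callback (this is not duplicacy's implementation): back off exponentially up to a ceiling, then keep polling at that interval until the backend recovers.

```go
package main

import (
	"fmt"
	"time"
)

// uploadUntilDone retries forever: exponential backoff up to a 120 s
// ceiling, then periodic polling at that interval until the call succeeds.
func uploadUntilDone(uploadChunk func() error) {
	const maxBackoff = 120 * time.Second
	backoff := time.Second
	for attempt := 1; ; attempt++ {
		err := uploadChunk()
		if err == nil {
			return
		}
		fmt.Printf("attempt %d failed: %v; next try in %v\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
		if backoff > maxBackoff {
			// Once the ceiling is reached this becomes steady polling, so a
			// long outage never aborts the copy.
			backoff = maxBackoff
		}
	}
}

func main() {
	tries := 0
	// Simulated uploader that recovers on the fourth attempt.
	uploadUntilDone(func() error {
		tries++
		if tries < 4 {
			return fmt.Errorf("EOF on attempt %d", tries)
		}
		return nil
	})
	fmt.Println("upload succeeded after", tries, "attempts")
}
```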

Thank you for a really cool product BTW!

TowerBR commented Aug 12, 2018

I think a -number-of-retries option (or something similar) would be useful, maybe even as one of the global options.
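
Purely as a hypothetical illustration of how such a global option could be plumbed through, using the standard flag package rather than duplicacy's actual CLI wiring:

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// Hypothetical global option; the flag name mirrors the suggestion above.
	numberOfRetries := flag.Int("number-of-retries", 8, "maximum attempts before giving up on a chunk")
	flag.Parse()

	// A storage backend would then loop on the configured limit instead of a
	// hard-coded constant: for i := 0; i < *numberOfRetries; i++ { ... }
	fmt.Println("retry limit:", *numberOfRetries)
}
```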
