Disk write throughput with --progress #7312
Comments
You can use
Just a notice: you are using an outdated borg version.
I quite disagree. I am thinking more along the lines of the progress output of Funny enough, even the
Understood. Apparently there is a portability issue with 1.18 (see the last paragraph) that prevents it from being updated in EPEL8.
@infectormp I know very well what
@brianjmurrell There is an optional flag for borg create to view the progress. The following is an excerpt from the docs.
Now I understand this doesn't cover the whole ask of this ticket. Basically, the ask is that borg display the rate at which these 3 categories (O, C, and D) are being processed, both live and after the operation?
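As a rough sketch of what the "after the operation" half of that ask could look like, the rates can already be derived from `borg create --json` output. The key names below (`original_size`, `compressed_size`, `deduplicated_size`, `duration`) reflect my understanding of borg 1.1+ JSON stats and should be treated as assumptions; the sample data is made up.

```python
# Sketch (not borg code): derive O/C/D rates after the fact from
# `borg create --json` output. Key names are assumptions based on
# borg 1.1+ JSON stats.
import json


def category_rates(create_json: str) -> dict:
    """Return MB/s for the original, compressed and deduplicated sizes."""
    doc = json.loads(create_json)
    stats = doc["archive"]["stats"]
    elapsed = doc["archive"]["duration"]  # wall-clock seconds
    mb = 1024 * 1024
    return {k: stats[k + "_size"] / mb / elapsed
            for k in ("original", "compressed", "deduplicated")}


# Made-up sample resembling borg's JSON shape:
sample = json.dumps({"archive": {"duration": 120.0, "stats": {
    "original_size": 12 * 1024**3,
    "compressed_size": 6 * 1024**3,
    "deduplicated_size": 2 * 1024**3}}})
print(category_rates(sample))
```

The live half of the ask would need the same arithmetic inside borg's progress loop, which this sketch does not cover.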
Without being able to see the actual output of [1] I've switched from Borg to using VDO to get the benefits of compression and de-duplication but with a native file system UX. The lack of a (performant) native file system UX was a show-stopper for me with Borg. I frequently want to query a file through history, like looking for a command in a
VDO seems to be achieving compression and de-duplication on par with Borg. Here's a comparison of VDO and Borg for my oldest (and certainly not the most de-duplicatable, given the yearly spans of time between them) backups:
@ThomasWaldmann How should we address this?
There were quite a few changes to the compressed-size related stats in master (because csize needed to be removed from the chunk list entries), so I guess this means:
Throughput shown for:
So, I'm not sure this is that useful after all.
Frankly, both throughputs would be interesting, and fun even. I.e., what was the actual physical throughput of bytes written to disk, so that I could get an idea of how much faster things could go if I bought a faster disk. But equally interesting (and fun!) would be: what is the effective bandwidth of the disk if we were writing all of those duplicate blocks out in full and uncompressed? I.e., how much did I speed my disk up by de-duplicating and compressing? Put another way: how fast a disk would I need to match the savings that de-duplicating and compressing are giving me?
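The two figures asked for above are just two divisions over the same elapsed time. A minimal sketch, assuming borg-style size categories; the 100 GiB / 5 GiB / 10 minute numbers are invented for illustration:

```python
# Sketch of the two throughput figures discussed above (not borg code).
# Physical throughput: bytes actually written to the repository.
# Effective throughput: what a disk would need to write the raw,
# un-deduplicated, uncompressed data in the same time.

def physical_mb_s(deduplicated_bytes: int, elapsed_s: float) -> float:
    """MB/s of bytes actually hitting the disk."""
    return deduplicated_bytes / (1024 * 1024) / elapsed_s


def effective_mb_s(original_bytes: int, elapsed_s: float) -> float:
    """MB/s a disk would need to write the original data in full."""
    return original_bytes / (1024 * 1024) / elapsed_s


# Made-up example: 100 GiB of source data, 5 GiB written, 10 minutes.
elapsed = 600.0
print(f"physical:  {physical_mb_s(5 * 1024**3, elapsed):.1f} MB/s")
print(f"effective: {effective_mb_s(100 * 1024**3, elapsed):.1f} MB/s")
```

The ratio of the two numbers is exactly the "how fast a disk would I need" answer from the comment above.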
There was actually an experimental branch once which would tell you exactly these things: whether reading input files, compression, encryption, or writing output files was the bottleneck, and if so by how much, etc. But it turned out that this incurred a rather significant overhead and significantly reduced performance for small files.
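For a flavour of the kind of per-stage accounting such a branch might do (this is purely illustrative, not borg's actual instrumentation), one can wrap each pipeline stage in a timer; the overhead concern is visible here too, since every chunk costs two clock reads per stage:

```python
# Hypothetical sketch of per-stage bottleneck accounting; the stage
# names and the StageTimer class are inventions for illustration.
import time
from collections import defaultdict


class StageTimer:
    def __init__(self):
        self.totals = defaultdict(float)  # stage name -> seconds

    def timed(self, stage, func, *args):
        """Run func(*args), charging its wall time to `stage`."""
        start = time.perf_counter()
        result = func(*args)
        self.totals[stage] += time.perf_counter() - start
        return result

    def bottleneck(self):
        """Stage that accumulated the most wall time."""
        return max(self.totals, key=self.totals.get)


timer = StageTimer()
timer.timed("read", lambda: b"x" * 1024)   # stand-ins for the real
timer.timed("compress", lambda: b"x")      # read/compress/write stages
print(timer.bottleneck())
```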
Have you checked borgbackup docs, FAQ, and open GitHub issues?
Yes
Is this a BUG / ISSUE report or a QUESTION?
RFE
System information. For client/server mode post info for both machines.
Your borg version (borg -V).
1.1.17
Operating system (distribution) and version.
AlmaLinux 8.7
Hardware / network configuration, and filesystems used.
Local ext4 filesystem.
How much data is handled by borg?
N/A
Full borg commandline that led to the problem (leave out excludes and passwords)
N/A
Describe the problem you're observing.
It would be nice, for example when evaluating compression algorithms, to know how much disk throughput borg is achieving, if, say, one wanted to tune compression to roughly the same speed as the disk.
But even compression evaluation aside, it's still nice to see on a backup progress report what kind of throughput is being achieved.
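The tuning idea above amounts to a simple bottleneck calculation: a compressor feeding a disk is limited by whichever of the two is slower, once the compression ratio is accounted for. A back-of-the-envelope sketch, with invented numbers rather than benchmarks of real algorithms:

```python
# Back-of-the-envelope sketch of matching compression speed to disk
# speed. All speeds and ratios below are invented for illustration.

def pipeline_mb_s(compress_mb_s: float, ratio: float,
                  disk_write_mb_s: float) -> float:
    """Effective input throughput of a compressor feeding a disk.

    `ratio` is compressed/original size: the disk sees data at
    input_rate * ratio, so the disk caps input at disk / ratio.
    """
    return min(compress_mb_s, disk_write_mb_s / ratio)


disk = 200.0  # MB/s, assumed disk write speed
candidates = {"fast": (900.0, 0.7), "medium": (300.0, 0.5),
              "slow": (60.0, 0.4)}  # (compressor MB/s, ratio), made up
for name, (speed, ratio) in candidates.items():
    print(f"{name}: {pipeline_mb_s(speed, ratio, disk):.0f} MB/s effective")
```

With these invented numbers the "medium" setting wins: "fast" is capped by the disk, and "slow" is capped by the compressor, which is exactly the trade-off a throughput display would make visible.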
Can you reproduce the problem? If so, describe how. If not, describe troubleshooting steps you took before opening the issue.
N/A
Include any warning/errors/backtraces from the system logs
N/A