zstdmt --adapt is MUCH SLOWER (for me) than pzstd #97
Comments
Can someone else confirm/try this? @bagbag?
After some research I still don't get what the difference between the two is. But yes, I can confirm this issue. And GitHub knows about it too: facebook/zstd#2200.
But it should still be better on lower-bandwidth links (like 1 Gbit/s), where the network is the limiting factor. I think we should simply add a warning/hint not to use --adapt on faster links.
But @xrobau reported only 20-30 MB/s with 16 cores, which seems way too low. Maybe he can also run the same test you did on his machine. Also, testing compression with zeros isn't a very realistic test. Maybe create a binary file from /dev/urandom output and test with that (/dev/urandom itself is too slow to use in real time).
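The suggested benchmark could look something like this, sketched as a shell session. The file sizes, path names, and the zeros/random mix are arbitrary choices, not from the thread; it assumes `zstd` and `pzstd` are installed:

```shell
# Pre-generate the test file once, since /dev/urandom is too slow to
# pipe through in real time. Pure random data is incompressible, so mix
# in some zeros to get a partially compressible file, closer to real data.
head -c 512M /dev/urandom > testfile
head -c 512M /dev/zero  >> testfile

# Run both compressors over the same file, discarding output so only
# compression throughput is measured (-T0 = use all cores):
time zstd -T0 --adapt -c testfile > /dev/null
time pzstd -c testfile > /dev/null
```

Comparing the two `time` results on the same file isolates the compressor-side difference from any network or disk effects.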
Fair point. I tested that again with an 8 GB HTML dump from Wikipedia. Well. You should set the default back to zstd-fast (but keep zstd-adapt for future improvements in zstd).
Yes, I'll change it back then.
I noticed that the default --compress option is now zstdmt, so I thought I would give that a try.
Connectivity is store1 <--> store2 via bonded 10 Gb links, 9100-byte MTU.
Using zstdmt --adapt, the sender was using 16 cores and zfs recv was reporting 20-30MB/s transfers.
Using pzstd, transfer speeds are reported as 300-400MB/s, with 200-300% cpu usage.
This may be just because the data I'm transferring is not that compressible (vmware vmdks), but I think that an order of magnitude difference is pretty significant, and --adapt is MEANT to handle that sensibly.
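For context, the transfer described above has roughly this shape (a sketch only: the pool/dataset names, snapshot name, and ssh transport are placeholders, and `pzstd` must be installed on both ends):

```shell
# Compress the replication stream on the sender, decompress on the
# receiver. Swapping "pzstd -c" for "zstd -T0 --adapt -c" (and "pzstd -dc"
# for "zstd -dc") reproduces the slow case reported in this issue.
zfs send tank/vms@snap \
  | pzstd -c \
  | ssh store2 'pzstd -dc | zfs recv -F tank/vms'
```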
OS is Ubuntu 21.04, version reported is