Copy in serial? #34
Comments
I think it really depends on what you copy. My theory is that lots of small files would be faster to copy serially, especially with sync …
We can start by replacing …
@YurySolovyov Yup. We should do some benchmarking on this, but I think …
@sindresorhus I'm just not sure where the bottleneck is. If it's CPU threads, yes, this is reasonable; if it's the number of file descriptors, then it should be something much higher, like 64 or 128.
Hence why we need benchmarks.
@sindresorhus Do you have experience writing good ones? (I would like to learn if so.)
@YurySolovyov, you could use https://github.com/logicalparadox/matcha for example. |
@YurySolovyov If you want to read some code: … I don't fully trust the results of … Maybe @jamestalmage had similar concerns when he handcrafted the AVA benchmarks (great work, btw). However, I would prefer some library, even though it's …
There is https://github.com/bestiejs/benchmark.js too. |
I think the easiest solution here is to just add …
It seems really unlikely to me that copying a bunch of files in parallel would speed anything up. Indeed, I would not be surprised to find it slowed things down. Increasing the number of files read simultaneously seems likely to cause lots of seek delays on HDDs, and SSDs should be able to saturate their bandwidth regardless of how many files are being copied simultaneously.
It's probably worth running some benchmarks to prove this, but I think you would have seen operating-system copy commands/GUIs doing parallel copies if there were any advantage to it.
I guess technically, you could be copying from multiple slow network mounted drives - in which case parallelization might make sense. But I think that's a very small (possibly non-existent) percentage of users.