Avoid blocking on CloseHandle #1850
On Windows closing a file involves CloseHandle, which can be quite slow: filesystem filters such as Windows Defender can do work on the close path before it returns.
This does run a risk of resource exhaustion, but as we only have on the order of 20k files to close, it should be manageable.
My benchmark system went from 21/22s to 11s with this change, with both Defender on and off.
My Surface running the released rustup takes 44s for rust-docs, and 22s with this branch.
Thank you :).
I think it's worth doing the experiment. It may be a bit awkward, as the downloader is in futures and tar-rs is sync code, but perhaps just slamming them together as a learning test would answer the question about impact, and we can make it nice if it has a big benefit.
From what you've said, though, you have a slow package server (or a slow link to it), and you want to do the unpacking while the download progresses, because the download is slow enough to mask all the local IO overheads during the download period.
I think that will definitely work for you, and I don't think it will be a negative for anyone, though it will have an impact on folk who depend on the partial download recovery/retry mechanism today unless care is taken to preserve that.
We don't have any stats on how many folk have download times that are roughly the same or greater than the unpacking time (and thus would benefit as you do with making the unpacking concurrent).
Even if they're the same speed, downloading and extracting files in parallel could almost halve the installation time.
That impedance mismatch can be solved by using https://docs.rs/futures/0.1.27/futures/stream/struct.Wait.html, which adapts a futures `Stream` into a blocking `Iterator`.
Maybe! Let's find out.
I don't think the in-process retry patch has merged yet, so you can ignore that. However, partial downloads are saved to a .partial file, and that can be resumed if the download fails by running rustup again. I think that is a useful capability, so a mergeable version of what you want to achieve would want to still stream the archive to disk.
In terms of joining the two bits together, a few possibilities:
Back on the design though, one more consideration is validation: right now we make some assumptions about the safety of the archive we unpack based on validating its hash (and, in future, its GPG signature); mirrors of the packages may make trusting partly downloaded content more complex.
I'm working on sane reporting of the time after the tar is fully read.
(That's with 4 threads, Defender on.)
4 threads, defender off gets this output:
64 threads, defender off:
64 threads, defender on:
-> so running with threads equal to the core count is much better.
kinnison left a comment
There are a couple of typos, but honestly I don't want to hold things up for those and we can fix them later if we re-touch the code. This looks excellent. Thank you so much for working through all of this with Alex.