Don't download any artifacts if JULIA_MPI_BINARY=system #483
I guess we could make them lazy, and download the appropriate ones at package build time?
There are some new features in 1.6 for handling lazy artifacts; I haven't looked into them, though. If someone wants to try it, I'd certainly be interested.
MKL_jll seems to be lazy, see the discussion over at JuliaLang/Pkg.jl#2664. However, I don't know what that laziness means exactly. Does it also allow an optional download without an Overrides.toml?
We don't support Overrides.toml, as it isn't flexible enough for our needs (e.g. Overrides.toml requires the library version number to be the same, which means we can't support different MPI library versions, let alone different implementations). We should also look into using Preferences.jl instead of our current homebrew approach. I don't have time to tackle this at the moment, but am happy to review PRs.
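A rough sketch of what the Preferences.jl route could look like; the `"libmpi_path"` key and its usage are assumptions for illustration, not MPI.jl's actual interface at the time:

```julia
# Hedged sketch: configure a library path via Preferences.jl.
using Preferences, MPI

# Writes the preference into the active project's LocalPreferences.toml:
set_preferences!(MPI, "libmpi_path" => "/usr/lib/x86_64-linux-gnu/libmpi.so"; force = true)

# Inside the package, the value would be read back at compile time with:
#   const libmpi = @load_preference("libmpi_path", "libmpi")
```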
From https://pkgdocs.julialang.org/v1/artifacts/#Using-Artifacts
If you want to make use of laziness to prevent downloading artifacts greedily, e.g. for the MPICH and OpenMPI packages, you have to go to the packages that define the artifacts for them, i.e. MPICH_jll and OpenMPI_jll.
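For reference, a minimal sketch of how a lazy artifact behaves on Julia ≥ 1.6, assuming the JLL's Artifacts.toml marks the artifact with `lazy = true` and that the artifact is named `MPICH`:

```julia
# Lazy artifacts are only fetched on first access, and resolving them
# requires the LazyArtifacts stdlib to be loaded in the calling module.
using Artifacts, LazyArtifacts

path = artifact"MPICH"   # triggers the download the first time it runs
```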
Thus, going back to the OP, I think this is the right place to discuss whether it's useful and necessary to make the MPI artifacts all lazy.
I am sympathetic to not wanting to download unnecessary binaries, but making them lazy has its own problems; I shudder thinking of a thousand processes hitting the network and racing to download an artifact.
We should still be able to avoid this by forcing the eager download of the artifact at build time if necessary, right? We don't really want to use the laziness "dynamically" (i.e. at runtime after MPI.jl is built). Instead, can't we just explicitly make sure at build time (e.g. in `deps/build.jl`) that the artifact we actually need is installed?
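A sketch of what that could look like in a `deps/build.jl`, assuming the build script can reach an Artifacts.toml declaring the artifact (in reality the artifacts live in MPICH_jll/OpenMPI_jll, so the real change would be more involved):

```julia
# deps/build.jl sketch: eagerly install the otherwise-lazy artifact at
# Pkg.build time, unless the user asked for the system binary.
using Pkg.Artifacts

if get(ENV, "JULIA_MPI_BINARY", "") != "system"
    # Assumption: Artifacts.toml sits at the package root, one level up.
    artifacts_toml = joinpath(dirname(@__DIR__), "Artifacts.toml")
    ensure_artifact_installed("MPICH", artifacts_toml)
end
```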
Yes, I think that should work (from my limited experience, though).
BTW, if no one disagrees with the described approach, I'm willing to test-implement this in a PR (and a corresponding PR to Yggdrasil). Actually, I might try the same strategy over at MKL.jl first. There the artifact is much bigger (~1.5 GB) and, fortunately, is already marked as lazy (see JuliaLinearAlgebra/MKL.jl#82).
Certainly, please feel free. We could try to trigger the downloading at Pkg.build time (though I guess an upgrade may not trigger that).
Works nicely for MKL.jl, see my PR here. MKL.jl is simpler, though, and I could avoid …
Nice. Once you're happy with that, any chance you can work your magic here as well?
Yes, I will give it a shot next week.
I agree with Valentin that making these artifacts lazy isn't a good idea, also because the loading time of JLL packages with lazy artifacts is larger than for non-lazy packages. I think the solution should be resolving JuliaLang/Pkg.jl#2664; coming up with custom solutions for each package is neither useful nor scalable. BTW, my experience with Pkg is that if you really want something to be implemented, you should do it yourself.
But we can't use Overrides.toml, because it assumes the same library ABI.
JLLWrappers switching to Preferences fixed that. Now you can provide the path directly without having to specify a directory.
I think that with JLLWrappers switching to Preferences, that is solved now.
Jinx.
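A sketch of the preference-based override both comments refer to; the `"libmpi_path"` key follows JLLWrappers' `<product>_path` naming convention, but treat the exact key as an assumption:

```julia
# Point a JLL product at a system library via Preferences instead of
# Overrides.toml; no version or vendor match is required.
using Preferences, MPICH_jll

set_preferences!(MPICH_jll, "libmpi_path" => "/usr/lib/libmpi.so"; force = true)
# After a Julia restart, MPICH_jll resolves libmpi to the given path.
```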
@giordano: I agree with the scalable part, but it's certainly useful and, in fact, used in practice; see packages like MPI.jl and CUDA.jl. Of course, a general solution is much desired, but implementing/changing something in Pkg can be much more time-consuming and complex than implementing a "custom" solution for, say, the top 3 HPC packages (MKL.jl, MPI.jl, CUDA.jl).
I'm not sure I understand what the overall strategy / flow of information would be with this new preference feature. In particular, how would it interact with the Overrides.toml (if I specify the path to the MPI lib by setting a "libmpi_path" preference in MPI.jl, why would I still need Overrides.toml)? Can you elaborate a bit more? Side comment: IMHO, from a user's perspective, specifying an environment variable seems simpler than setting a preference.
It is meant as a successor/replacement to Overrides.toml.
Up to the moment where they forget to set the environment variable. Environment variables are also problematic due to the question of caching, e.g. the precompilation cache isn't invalidated when an environment variable changes.
On the other hand, this would integrate nicely with the ubiquitous modules system widely used on HPC systems: you type in, e.g., `module load openmpi`, and the right environment variables are set for you.
Ah, that makes sense (and explains my confusion) 😄
I think for maximum compatibility, we're just going to have to listen to both environment variables and Preferences, with a well-defined precedence between the two.
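One possible resolution order, sketched as it might look inside MPI.jl itself (where `@load_preference` can resolve the package's UUID); the key name and default are assumptions:

```julia
using Preferences

function mpi_binary()
    # An explicitly set environment variable wins; otherwise fall back to
    # the stored preference, defaulting to the bundled binary.
    haskey(ENV, "JULIA_MPI_BINARY") && return ENV["JULIA_MPI_BINARY"]
    return @load_preference("binary", "MPICH_jll")
end
```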
There's a PR to MKL.jl open right now that implements this, but it has one big caveat: if the value of the environment variable is read at compile time, it won't be re-read in the future, so you won't be able to force recompilation by just changing the environment variable. You must change the value through Preferences instead. There are alternative things we can do (such as re-reading the environment variable at `__init__()` time), …
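To illustrate the caveat: in a precompiled package, a constant like `const BINARY = get(ENV, "JULIA_MPI_BINARY", "MPICH_jll")` is evaluated once, at precompile time, so later changes to the environment variable are silently ignored. Writing a preference instead updates `LocalPreferences.toml`, which Julia tracks, so the package recompiles on the next `using` (the `"binary"` key below is hypothetical):

```julia
using Preferences, MPI

# Changing a compile-time preference invalidates MPI.jl's precompile cache,
# forcing recompilation with the new value, unlike mutating ENV.
set_preferences!(MPI, "binary" => "system"; force = true)
```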
I see the issue but don't think it's such a big caveat. The fact that environment variables might only affect the …
As this is not used by MPI.jl by default, it doesn't make sense to download it. See JuliaParallel/MPI.jl#483
MPI.jl seems to always download the MPICH and OpenMPI artifacts despite having set `JULIA_MPI_BINARY=system`. It is using the correct system MPI eventually, but ideally it shouldn't download the unused artifacts in the first place. I guess/fear that this is a limitation of Pkg and the lack of optional dependencies, but I wanted to voice and archive this point somewhere and thought here would be the right place.
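For reference, the workflow the issue describes looks like this; selecting the system MPI works, but the artifacts are fetched anyway when the package is installed:

```julia
# Configure MPI.jl (of this era) to use the system MPI library, then rebuild.
ENV["JULIA_MPI_BINARY"] = "system"

using Pkg
Pkg.build("MPI"; verbose = true)
```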