Specify that cuda provides cublas #19269
Conversation
I'm not super familiar with how the Spack providers work, but
Edit: maybe it should be
@Rombur may want to review

There's no need for a virtual provider if no other package provides
I am not sure I understand what the point of doing this is. Are there cases where you want to have a dependency on
It cannot. The names of the functions are different (they are preceded by
Thank you for your PR! 🎉 We can document the bundled dependencies in CUDA, but I am not 100% sure we will be able to maintain this sufficiently. Here is a list of what is bundled in the CUDA Toolkit today:

CUDA is going to release at an increased pace over the next years, with rolling ~bi-monthly (?) NVIDIA HPC Software Development Kit (SDK) releases that bundle all software. Some libraries in that stack, like thrust, cub, and libcu++ among others, are open source and contributable and might be worth shipping separately - but I am not sure we want to mirror all of them here and do bookkeeping for each CUDA release. Unfortunately, I am also not 100% sure what use case exposing cuBLAS separately would address - if something is bugged or needs another version, there will soon be plenty of NVIDIA HPC SDK releases we can move to. As you already indicated, my personal impression is also that we should maybe refrain from adding this specific bookkeeping feature.
The motivation for me wasn't really a technical one; it was more that I think it would help clarify the purpose of a dependency inside a package specification. If you have a package that can be built with some optional cuBLAS support, right now you would just have

But you're right that on the 'technical' side this is a non-issue, as (with what I proposed here) a dependency on

Also, I think that these kinds of questions will probably come up more often as Spack keeps growing, so finding a way to handle toolkit packages which bundle together multiple bits of software internally would be nice.
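The contrast above can be sketched as a downstream recipe. This is a hypothetical example, not from the PR: `Foo` and its `cublas` variant are made-up names, while `variant`, `depends_on`, and `Package` are standard Spack directives (this is Spack's package DSL, not standalone Python):

```python
from spack.package import *  # Spack package DSL imports


class Foo(Package):
    """Hypothetical package with optional cuBLAS support."""

    variant("cublas", default=False, description="Build with cuBLAS support")

    # Status quo: pull in the whole toolkit just to get one library.
    # depends_on("cuda", when="+cublas")

    # With this PR: state the actual requirement; cuda would satisfy it
    # through its provides("cublas") declaration.
    depends_on("cublas", when="+cublas")
```

The recipe then records *why* the dependency exists, even though today only `cuda` can satisfy it.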
No problem! I wasn't very sure that this was a good idea to begin with 😛
Yeah, displaying all of the bundled dependencies would get pretty messy pretty quickly, and I also don't think that's the best approach, but on the other hand I think something should be done for this as well.
Thanks, so in light of the discussion above: should we go with
I wouldn't bother with
Oh, I am afraid to report that all packaged versions in CUDA 11.0+ are rather random in minor and patch level... But I also think that 1. alone is good enough for now, mainly because it would be a lot of work to document all of these, and I think in practice people will document and depend dominantly on a bundled CUDA release when communicating their requirements. Let us also evaluate this in light of #19365. If we have new volunteers to maintain and copy the above table for each release, I am 👍 Maybe in a separate file? Let's do all or nothing, I would say.
@RobertRosca Revisiting this PR as I work on #29155, I think it is a good idea to track the cuBLAS version numbers, but prolly not as a virtual package since that's kinda overkill IMHO. I'm thinking something along the lines of a dict in the package.py file. Permission to adapt this PR there?
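A minimal sketch of the "dict in package.py" idea. The mapping below is illustrative only: the cuBLAS version numbers are placeholders, not values taken from NVIDIA's release notes, and `cublas_version` is a hypothetical helper name.

```python
# Placeholder mapping of CUDA release -> bundled cuBLAS version.
# These values are ILLUSTRATIVE, not taken from NVIDIA's release notes.
_bundled_cublas = {
    "10.2.89": "10.2.2",  # placeholder
    "11.0.2": "11.1.0",   # placeholder
}


def cublas_version(cuda_version):
    """Return the cuBLAS version bundled with a given CUDA release,
    or None if the mapping has not been recorded."""
    return _bundled_cublas.get(cuda_version)
```

Keeping the table in one dict keeps the bookkeeping in a single place, and packages that care about the exact cuBLAS version can look it up instead of hard-coding it.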
Sure @wyphan feel free to take over! Thanks 👍 Would you like me to close this PR so that you can work on it in your issue? |
No need to close it yet. I'll let you know once the PR I'm currently working on #29155 gets merged. |
I think we decided in community PRs that it is not worth turning each CUDA library into a virtual (see the discussion on cudatoolkit vs. nvhpc), so closing this. Feel free to reopen if you think otherwise.
As mentioned on the NVIDIA documentation page:
Since CUDA provides cuBLAS, I think it might be worth explicitly adding this to the package file so that it can be used as a virtual dependency. The versions of cuBLAS provided by CUDA are taken from the 'Release Notes' section of the archived documentation here: https://docs.nvidia.com/cuda/archive/
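Concretely, the change would add `provides()` directives to CUDA's package.py (Spack's directive for declaring a virtual package). The version pairs below are placeholders for illustration, not values from the release notes:

```python
class Cuda(Package):
    # ... existing version() directives ...

    # Declare that this package satisfies the virtual "cublas" spec.
    # Version pairs shown here are PLACEHOLDERS; the real mapping comes
    # from the archived 'Release Notes' pages linked above.
    provides("cublas@10.2", when="@10.2.89")  # placeholder
    provides("cublas@11.1", when="@11.0.2")   # placeholder
```

With these in place, `spack providers cublas` would list `cuda`, and a spec like `foo +cublas ^cublas` would concretize to the matching CUDA release.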
To be honest I'm not sure if this PR is 'correct'. The documentation mentions virtual dependencies in the context of multiple packages providing the same dependency (e.g. MPI), but in this case it's one package providing multiple dependencies (since CUDA contains a lot of individual components).
Actually this raises a broader question: in situations like this, where the package you're defining is a toolkit comprising a lot of individual components, should the package declare itself as a provider for each of the contained tools?