Interaction between the accelerator back-end (CUDA and ROCm) support flags #268919
Comments
For packages that do support building with both backends: perhaps we ought to treat it like a list?
Lists sound good! There's a natural designation for the CPU-only builds, and a representation for "enable both backends". RE: this actually leads us back to the ...
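As a rough illustration of the list idea (the option name `gpuBackends` and the backend strings here are hypothetical, not an existing nixpkgs option):

```nix
# Hypothetical list-valued setting; names are illustrative only.
import <nixpkgs> {
  config = {
    gpuBackends = [ ];                 # CPU-only build: the empty list
    # gpuBackends = [ "cuda" ];        # NVIDIA back-end only
    # gpuBackends = [ "cuda" "rocm" ]; # "enable both backends"
  };
}
```

The empty list gives an unambiguous CPU-only default, and membership checks such as `builtins.elem "cuda" gpuBackends` would replace the two independent booleans.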
To be really honest, that seems a bit useless because 99.999999999% of users aren't going to have both AMD and NVIDIA GPUs on their system.
Hmm, I'm not sure I understand the priority thing. What's the use case for having priority flags on build options? Shouldn't the build always execute with the configuration it's given?
Issue description
At this point we independently expose the global `config.cudaSupport` and `config.rocmSupport` options. We also have a number of package expressions of a form similar to:

`{ config, gpuBackend ? if config.cudaSupport then ... else if ... else ... }: ...`

There are many variations, both in the signature and in the way incompatible combinations are handled: `assert (builtins.elem gpuBackend [ ... ])` instead of `broken = ...`. We might want to find a more consistent approach.
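For concreteness, a minimal sketch of the kind of expression described above; the package name, the accepted backend strings, and the exact defaults are illustrative, not copied from a real nixpkgs expression:

```nix
# Illustrative only: one of the many shapes such expressions currently take.
{ lib
, config
, stdenv
, gpuBackend ?
    if config.cudaSupport then "cuda"
    else if config.rocmSupport then "rocm"
    else "none"
}:

# Some expressions reject unsupported combinations eagerly with an assert;
# others mark them as broken in meta instead (see the proposal below).
assert builtins.elem gpuBackend [ "none" "cuda" "rocm" ];

stdenv.mkDerivation {
  pname = "example-gpu-package";
  version = "0.0.0";
  # ... per-backend buildInputs, cmakeFlags, etc. would go here ...
}
```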
Preliminary proposal

- Decide on consistent semantics for the `config.xSupport` combinations.
- Converge on a single `{ ..., gpuBackend }: ...` signature.
- Deprecate the `{cuda,rocm}Support` and `{with,enable}{Cuda,Rocm}` arguments. Set defaults to `null`. Whenever they aren't `null`, display a deprecation warning and set `gpuBackend` appropriately.
- Where we currently `assert`, use `broken` instead. This way it's always the end-user's decision as to what builds to attempt (maybe the users extend `patches` and manage to relax the interaction rules). A sketch of a migrated expression follows below.

CC @NixOS/cuda-maintainers @NixOS/rocm-maintainers
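A rough sketch of how a migrated expression could look under this proposal; `lib.warn` is an existing nixpkgs helper, but the argument names, backend strings, and deprecation messages here are all hypothetical:

```nix
# Hypothetical migration of a package expression to the proposed scheme.
{ lib
, config
, stdenv
  # New unified argument:
, gpuBackend ?
    if config.cudaSupport then "cuda"
    else if config.rocmSupport then "rocm"
    else "none"
  # Deprecated per-backend flags: default to null so "unset" is distinguishable.
, cudaSupport ? null
, rocmSupport ? null
}:

let
  # Whenever a legacy flag isn't null, warn and map it onto gpuBackend.
  effectiveBackend =
    if cudaSupport != null then
      lib.warn "cudaSupport is deprecated; pass gpuBackend = \"cuda\" instead"
        (if cudaSupport then "cuda" else "none")
    else if rocmSupport != null then
      lib.warn "rocmSupport is deprecated; pass gpuBackend = \"rocm\" instead"
        (if rocmSupport then "rocm" else "none")
    else
      gpuBackend;
in
stdenv.mkDerivation {
  pname = "example-gpu-package";
  version = "0.0.0";

  meta = {
    # broken instead of assert: the decision stays with the end-user, who can
    # set allowBroken (and perhaps extend patches to relax the interaction
    # rules) and still attempt the build.
    broken = !(builtins.elem effectiveBackend [ "none" "cuda" "rocm" ]);
  };
}
```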