In its current state (since version 0.14, when Mitsuba was introduced), Sionna cannot be used on AMD GPUs or with tensorflow-metal (GPU acceleration for Apple M1/M2/M3/M4 chips).
I assume this is an oversight, but feel free to state otherwise; it would be understandable given that this is an NVIDIA repository.
As you can see below, the Mitsuba variant is chosen based only on whether TensorFlow sees a GPU (from sionna/rt/__init__.py):
```python
###########################################
# Configuring Mitsuba variant
###########################################
import tensorflow as tf
import mitsuba as mi

# If at least one GPU is detected, the CUDA variant is used.
# Otherwise, the LLVM variant is used.
# Note: LLVM is required for execution on CPU.
# Note: If multiple GPUs are visible, the first one is used.
gpus = tf.config.list_physical_devices('GPU')
if len(gpus) > 0:
    mi.set_variant('cuda_ad_rgb')
else:
    mi.set_variant('llvm_ad_rgb')
```
If at least one GPU is seen, as is the case with tensorflow-rocm (AMD) or tensorflow-metal (Mac), Mitsuba tries to set the CUDA variant. The CUDA variant is not compatible with non-NVIDIA architectures, which results in an ImportError.
This implies that, even if you don't plan to use Mitsuba, you can't use Sionna on your non-CUDA GPU.
A fix could be to check TensorFlow's build info or the device name, but I couldn't find a way to cleanly check from Python whether the GPU supports CUDA.
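Since the failure mode is an ImportError at variant-selection time, one defensive option (a sketch, not current Sionna code) is to attempt the CUDA variant and fall back to LLVM. The helper below is hypothetical; it takes the variant-setting function as a parameter so the selection logic works regardless of which Mitsuba build is installed:

```python
def set_first_available_variant(set_variant, candidates):
    """Try each Mitsuba variant in order and return the first that loads.

    set_variant is expected to behave like mitsuba.set_variant, i.e.
    raise ImportError when the requested variant is unavailable.
    """
    for variant in candidates:
        try:
            set_variant(variant)
            return variant
        except ImportError:
            continue
    raise ImportError("none of the requested Mitsuba variants is available")

# Hypothetical use in sionna/rt/__init__.py:
#   import mitsuba as mi
#   set_first_available_variant(mi.set_variant,
#                               ['cuda_ad_rgb', 'llvm_ad_rgb'])
```

On an AMD or Apple-silicon machine the CUDA attempt would fail and the helper would silently select the CPU-capable LLVM variant instead.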
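The build-info idea can be sketched as follows: recent TensorFlow versions expose tf.sysconfig.get_build_info(), a dict with flags such as 'is_cuda_build' and 'is_rocm_build'. The helper below is a hypothetical sketch that keeps the decision logic separate from TensorFlow itself:

```python
def select_mitsuba_variant(build_info, gpus):
    """Choose a Mitsuba variant from TensorFlow build info and visible GPUs.

    build_info: the dict returned by tf.sysconfig.get_build_info().
    gpus: the list returned by tf.config.list_physical_devices('GPU').
    Only pick the CUDA variant when TensorFlow itself was built with CUDA,
    so ROCm and Metal builds fall through to the CPU-capable LLVM variant.
    """
    if len(gpus) > 0 and build_info.get("is_cuda_build", False):
        return "cuda_ad_rgb"
    return "llvm_ad_rgb"

# Hypothetical use in sionna/rt/__init__.py:
#   import tensorflow as tf
#   import mitsuba as mi
#   mi.set_variant(select_mitsuba_variant(
#       tf.sysconfig.get_build_info(),
#       tf.config.list_physical_devices('GPU')))
```

This checks whether TensorFlow was built against CUDA rather than whether the GPU hardware itself supports CUDA, but in practice a ROCm or Metal build implies non-NVIDIA hardware.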
- NVIDIA + CUDA:
- AMD + ROCM:
- Mac M3 + Metal:
Thanks in advance!
Thank you for reporting this issue. It will be fixed in the next version of Sionna.
As a workaround, you could modify rt/__init__.py so that it loads Mitsuba's LLVM backend.
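Concretely, the workaround amounts to replacing the GPU check in sionna/rt/__init__.py with an unconditional LLVM selection; a minimal sketch of the patched lines:

```python
import mitsuba as mi

# Workaround: always use the LLVM variant so that importing Sionna
# no longer fails on non-CUDA GPUs (ROCm, Metal).
# Note: ray tracing will then execute on the CPU.
mi.set_variant('llvm_ad_rgb')
```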
Remark: please note that tensorflow-metal currently does not support tf.complex64 dtypes (see Troubleshooting).
I agree on the workaround; it is the only way currently.
Regarding tensorflow-metal, yes, this is the sad reality. Hopefully this will be supported in a future version. As far as I could test, operations involving complex64 seem to be swapped to the CPU, resulting in inefficient computation.