How to install tiny-cuda-nn as PyTorch extension for multiple compute capabilities? #341

Closed
amitabe opened this issue Jul 20, 2023 · 1 comment

Comments

amitabe commented Jul 20, 2023

When running on a specific GPU, I can install tiny-cuda-nn in my conda environment. But when I switch to another GPU with a different compute capability (as defined by NVIDIA), I get the following error:

----> 1 import tinycudann as tcnn

File ~/micromamba/envs/20/lib/python3.10/site-packages/tinycudann/__init__.py:9
      1 # Copyright (c) 2020-2021, NVIDIA CORPORATION. All rights reserved.
      2 #
      3 # NVIDIA CORPORATION and its licensors retain all intellectual property
   (...)
      6 # distribution of this software and related documentation without an express
      7 # license agreement from NVIDIA CORPORATION is strictly prohibited.
----> 9 from tinycudann.modules import free_temporary_memory, NetworkWithInputEncoding, Network, Encoding
     11 __all__ = ["free_temporary_memory", "NetworkWithInputEncoding", "Network", "Encoding"]

File ~/micromamba/envs/20/lib/python3.10/site-packages/tinycudann/modules.py:59
     56                 pass
     58 if _C is None:
---> 59         raise EnvironmentError(f"Could not find compatible tinycudann extension for compute capability {system_compute_capability}.")
     61 # Pipe tcnn warnings and errors into Python
     62 # def _log(severity, msg):
     63 #       if severity == _C.LogSeverity.Warning:
   (...)
     67 
     68 # _C.set_log_callback(_log)
     69 def _torch_precision(tcnn_precision):

OSError: Could not find compatible tinycudann extension for compute capability 70.

I tried to install it again on this GPU, but I keep getting this error.

The error only goes away when I uninstall the package and re-install it on the new GPU (the one with compute capability 70).
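
(As an aside, here is a minimal sketch of how the GPU's compute capability can be checked with PyTorch; it assumes torch is installed in the same environment:)

import torch

# Compute capability of the default CUDA device, e.g. (7, 0) for the "70" in the error above
major, minor = torch.cuda.get_device_capability(0)
print(f"Compute capability: {major}{minor}")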

So my question is: how can I install this package so that it supports multiple compute capabilities at once? I work with several GPUs of different compute capabilities and want to run the same code on all of them.

Thanks


amitabe commented Jul 26, 2023

I think I found a solution to this issue, following this answer. If you want to install this package for compute capabilities 70, 75, 80, and 86, you can run the following:

export CUDA_ARCHITECTURES="70;75;80;86"
export CMAKE_CUDA_ARCHITECTURES=${CUDA_ARCHITECTURES}
export TCNN_CUDA_ARCHITECTURES=${CUDA_ARCHITECTURES}
export TORCH_CUDA_ARCH_LIST="7.0 7.5 8.0 8.6"
export FORCE_CUDA="1"

pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

The build takes much longer (since everything is compiled four times, once per architecture), so I recommend installing with verbose output (i.e. pip install -v) to follow the progress.
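
As a quick sanity check after installation (a sketch assuming torch and the freshly built tinycudann are in the active environment), the import should now succeed on any GPU whose architecture was listed above:

import torch
import tinycudann as tcnn  # should no longer raise the EnvironmentError from above

# Report which compute capability this machine exposes, e.g. 70, 75, 80 or 86
major, minor = torch.cuda.get_device_capability(0)
print(f"Running on compute capability {major}{minor}")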
