I have a project that builds CUDA code into a shared library. This shared library is then loaded with `ctypes` and used from pure-Python code, passing buffers from CuPy arrays.
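For context, the usage pattern on the Python side looks roughly like this (library name, function name, and signature are illustrative, not the actual project's API):

```python
import ctypes
import cupy as cp

# Load the CUDA shared library built by this project (name is illustrative).
lib = ctypes.CDLL("./libmykernels.so")
lib.scale.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_float]
lib.scale.restype = None

# Pass the device pointer of a CuPy array straight to the C entry point.
x = cp.arange(1024, dtype=cp.float32)
lib.scale(ctypes.c_void_p(x.data.ptr), x.size, ctypes.c_float(2.0))
```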
My `pyproject.toml` uses the CMake-based `scikit-build-core` build system, configured via the now widespread PEP 517 mechanism.
Ideally, I would like to have something minimal like:
```toml
[build-system]
build-backend = "scikit_build_core.build"
requires = [
    "cuda-toolkit[cccl,nvcc,cudart]==13.*",
    "scikit-build-core",
]
```
to configure the build-system dependencies and a companion `CMakeLists.txt` for CMake to handle the build of the shared library.
Unfortunately, this does not work out of the box. The reason is simple: CMake knows nothing about the install layout of the various NVIDIA/CUDA packages within a virtual environment, so it fails to find the `nvcc` compiler located at `<venv>/lib/pythonX.Y/site-packages/nvidia/cuZZ/bin/nvcc`.
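For illustration, the failure boils down to a plain executable lookup on `$PATH`, which is essentially what CMake does when locating the CUDA compiler:

```python
import shutil

# Inside the build environment, nvcc sits on disk under site-packages,
# but its bin directory is not on PATH, so the lookup fails:
print(shutil.which("nvcc"))  # -> None
```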
I managed to find a workaround that could become a sensible feature addition. All that would be needed is to install a `*.pth` file in the virtual environment. This file would look for `nvcc` and then prepend its `<bindir>` to the process's `$PATH`.
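For those unfamiliar with the trick: `site.py` executes any line of a `*.pth` file that starts with `import` at interpreter startup, and that is the hook this workaround relies on. A minimal sketch (module and function names are hypothetical):

```python
# _cuda_activate.pth, dropped into site-packages; this single
# import line is exec()'d by site.py at interpreter startup and
# triggers the PATH setup:
import cuda_activate; cuda_activate.activate()
```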
A proof of concept is available here: https://github.com/dalcinl/skbuild-cuda-demo/
Please note this line: https://github.com/dalcinl/skbuild-cuda-demo/blob/master/pyproject.toml#L18. There, I add a local project `cuda-activate` as an additional build dependency. This project is the one that eventually installs the `*.pth` file.
After cloning the git repo, you can check that everything works smoothly using `uv` (with other tools, do `export PROJECT_ROOT=$PWD` first):

```console
$ uv run --group test pytest
```
If you comment out that line in https://github.com/dalcinl/skbuild-cuda-demo/blob/master/pyproject.toml#L18, things will no longer work: CMake will not find `nvcc`.
Long story short: I would like to propose an opt-in, backward-compatible mechanism (via a new, small package) such that PEP 517 build backends find NVIDIA tools automatically. One possible API for this opt-in mechanism could be:
```toml
[build-system]
requires = [
    "cuda-toolkit[build-system]",
    ...
]
```
where the extra `build-system` (or any name you see fit) would bring in an extra package that installs a `*.pth` file in charge of extending `$PATH` (i.e., `os.environ["PATH"]`).
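For illustration, the module behind such a `*.pth` file could look roughly like this. This is a sketch assuming the `nvidia/*/bin` wheel layout mentioned above, not the actual code from my demo:

```python
# cuda_activate.py: hypothetical helper imported by the *.pth file.
import os
import site
from pathlib import Path

def activate() -> None:
    # Scan site-packages for an nvcc shipped by the NVIDIA wheels,
    # assuming the <sitedir>/nvidia/cuZZ/bin/nvcc layout from above.
    for sitedir in site.getsitepackages():
        for nvcc in Path(sitedir).glob("nvidia/*/bin/nvcc*"):
            bindir = str(nvcc.parent)
            path = os.environ.get("PATH", "")
            # Prepend the bin directory once, so child processes
            # (CMake, the compiler driver) inherit it.
            if bindir not in path.split(os.pathsep):
                os.environ["PATH"] = bindir + os.pathsep + path
            return
```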
A prototype for this new package is what I wrote for my demo: https://github.com/dalcinl/skbuild-cuda-demo/tree/master/src/cuda-activate. It could of course be renamed, e.g. `cuda-build-system`, for consistency with the extra name I proposed above.
However, there are a few things I do not like in the code, such as depending on finding `nvcc` rather than possibly other tools. I do not know the exact layout and contents of all the new packages in the cuda-python ecosystem, so I would need some advice here.
Thoughts?