From 5fc709d8813d431599159bef6688778d0565b9bc Mon Sep 17 00:00:00 2001
From: Vyas Ramasubramani
Date: Tue, 4 Jun 2024 22:10:59 +0000
Subject: [PATCH 1/6] Add doc

---
 recipe/README.md                     |  19 ++++-
 recipe/doc/end_user_compile_guide.md |  27 ++++++
 recipe/doc/end_user_run_guide.md     | 121 +++++++++++++++++++++++++++
 recipe/doc/maintainer_guide.md       |  15 ++++
 recipe/doc/recipe_guide.md           |  87 +++++++++++++++++++
 5 files changed, 266 insertions(+), 3 deletions(-)
 create mode 100644 recipe/doc/end_user_compile_guide.md
 create mode 100644 recipe/doc/end_user_run_guide.md
 create mode 100644 recipe/doc/maintainer_guide.md
 create mode 100644 recipe/doc/recipe_guide.md

diff --git a/recipe/README.md b/recipe/README.md
index c6b67d4..0649849 100644
--- a/recipe/README.md
+++ b/recipe/README.md
@@ -1,16 +1,29 @@
-# CUDA Metapackage Versioning
+# CUDA metapackage
+
+This metapackage corresponds to installing all packages in a CUDA release.
+It is suitable for use by both developers aiming to build CUDA applications and end-users running CUDA.
+More information for different classes of users is available in the guides below:
+
+- [Guide for End-Users Running CUDA Code](./doc/end_user_run_guide.md)
+- [Guide for End-Users Compiling CUDA Code](./doc/end_user_compile_guide.md)
+- [Guide for Maintainers of Recipes That Use CUDA](./doc/recipe_guide.md)
+- [Guide for Maintainers of CUDA recipes](./doc/maintainer_guide.md)
+
+## Versioning
+
+### CUDA Metapackage Versioning
 
 The version of a CUDA Toolkit metapackage corresponds to the CUDA release
 label. For example, the release label of CUDA 12.0 Update 1 is 12.0.1. This
 does not include the `cuda-version` metapackage which is versioned only by the
 MAJOR.MINOR of a release label.
 
-# Metapackage dependency versions
+### Metapackage dependency versions
 
 Installing a metapackage at a specific version should install all dependent
 packages at the exact version from that CUDA release.
 
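The exact-version behavior described above can be sketched as follows; the release label shown is the one from the example in this section, and the resulting command is only illustrative:

```shell
# Sketch: a full release label (MAJOR.MINOR.PATCH) selects one exact CUDA
# release; the solver is then expected to pin all dependent packages to the
# component versions shipped in that release.
RELEASE_LABEL="12.0.1"  # CUDA 12.0 Update 1, per the section above
CMD="conda install -c conda-forge cuda=${RELEASE_LABEL}"
echo "$CMD"
```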
-# Metapackage dependencies on cuda-version
+### Metapackage dependencies on cuda-version
 
 Metapackages do not directly constrain to a specific `cuda-version` as their
 version is more precise. Dependent packages will still install an appropriate
diff --git a/recipe/doc/end_user_compile_guide.md b/recipe/doc/end_user_compile_guide.md
new file mode 100644
index 0000000..f90f79c
--- /dev/null
+++ b/recipe/doc/end_user_compile_guide.md
@@ -0,0 +1,27 @@
+# Guide for End-Users Compiling CUDA Code
+
+This guide is for people who wish to use conda environments to compile CUDA code.
+Most of the sections of the guide for [running CUDA code](./end_user_run_guide.md) also apply here.
+The main difference for users only compiling libraries is that for compilation neither the CUDA driver nor a GPU is required.
+Therefore, for building CUDA code the conda packages alone are sufficient with no additional requirements on the user's system.
+There are a few other important points that users compiling CUDA code in conda environments should be aware of.
+
+## Package Naming Conventions
+
+If you plan to install and build against CUDA packages, you will need to be aware of how libraries are split into packages.
+Packages containing libraries (as opposed to compilers or header-only components) follow specific naming conventions.
+Typically library components of the CTK are split into three pieces: the base package, a `*-dev` package, and a `*-static` package.
+Using [the cuBLAS library](https://github.com/conda-forge/libcublas-feedstock) as an example, we have three different packages:
+The base `libcublas` package, which installs the libcublas.so library and is sufficient for use if you are simply installing other packages that require cuBLAS at runtime.
+The `libcublas-dev` package, which installs additional files like cuBLAS headers and CMake files.
+This package should be installed if you wish to compile your own code against cuBLAS within a conda environment.
+The `libcublas-static` package, which installs the static cuBLAS library.
+This library should be installed if you wish to compile your own code against a static cuBLAS within a conda environment.
+Typically the `*-static` packages will require the `*-dev` packages to be installed in order to provide the necessary packaging (CMake, pkg-config) files to discover the library, but this is not currently enforced by the packages themselves.
+
+## Development Metapackages
+
+The above discussion of naming also applies to metapackages.
+For instance, the `cuda-libraries` package contains all the runtime libraries, while `cuda-libraries-dev` also includes dependencies on the corresponding `*-dev` packages.
+In addition, for the purposes of development there are a few other key metapackages:
+- `cuda-compiler`: All packages required to compile a minimal CUDA program (one that does not require e.g. extra math libraries like cuBLAS or cuSPARSE).
diff --git a/recipe/doc/end_user_run_guide.md b/recipe/doc/end_user_run_guide.md
new file mode 100644
index 0000000..ccc3fa5
--- /dev/null
+++ b/recipe/doc/end_user_run_guide.md
@@ -0,0 +1,121 @@
+# Guide for End-Users Running CUDA Code
+
+This guide is for people who wish to use conda environments to run CUDA code.
+
+## Prerequisites
+
+To run CUDA code, you must have an NVIDIA GPU on your machine and you must install the CUDA drivers.
+Note that CUDA drivers _cannot_ be installed with conda and must be installed on your system using an appropriate installation method.
+See the [CUDA documentation](https://docs.nvidia.com/cuda/) for instructions on how to install ([Linux](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html), [Windows](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html)).
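The driver prerequisite above can be checked up front. This is a hedged sketch: it assumes only that `nvidia-smi`, which ships with the driver, is on `PATH` whenever a driver is installed:

```shell
# Sketch: detect whether a CUDA driver is present before trying to run
# CUDA code. nvidia-smi is installed alongside the driver, so its absence
# is a strong hint that the driver install step was skipped.
if command -v nvidia-smi >/dev/null 2>&1; then
  driver_status="driver found: $(nvidia-smi --query-gpu=driver_version --format=csv,noheader)"
else
  driver_status="no NVIDIA driver detected"
fi
echo "$driver_status"
```

On a machine without a GPU this prints the "no NVIDIA driver detected" message, which is expected; as noted in the FAQ below, the conda packages themselves still install fine in that case.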
+
+## Installing CUDA
+
+### Basic Installation
+
+The easiest one-step solution to install the full CUDA Toolkit is to install the [cuda metapackage](https://github.com/conda-forge/cuda-feedstock/) with this command:
+
+```
+conda install -c conda-forge cuda cuda-version=12.4
+```
+
+Let's break down this command.
+We are requesting the installation of two metapackages here, `cuda` and `cuda-version`.
+The `cuda` metapackage pulls in all the components of the CUDA Toolkit (CTK) and is roughly equivalent to installing the CUDA Toolkit with traditional OS package managers like apt or yum on Linux.
+Similarly to such package managers, the separate components may also be installed independently.
+The [`cuda-version` metapackage](https://github.com/conda-forge/cuda-version-feedstock/blob/main/recipe/README.md) is used to select the version of CUDA to install.
+This metapackage is important because individual components of the CTK are typically versioned independently (the current versions may be found in the [release notes](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html)).
+The `cuda-version` metapackage provides a standard way to install the version of a specific CUDA component corresponding to a given version of the CTK.
+This way, you never have to specify a particular version of any package; you just specify the `cuda-version` that you want, then list packages you want installed and conda will take care of finding the right versions for you.
+The above command will install all components of CUDA from the latest patch release of CUDA 12.4.
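The convenience of `cuda-version` can be sketched as follows; the package names are real conda-forge packages, but the per-component version numbers in the comment are illustrative assumptions, not exact:

```shell
# Sketch: one cuda-version pin lets the solver pick matching builds of every
# component, instead of pinning each component's own version by hand.
CUDA_VERSION="12.4"
PKGS="libcublas libcusolver libcufft"
CMD="conda install -c conda-forge ${PKGS} cuda-version=${CUDA_VERSION}"
echo "$CMD"
# Without cuda-version you would have to track each component's independent
# versioning yourself, e.g. (illustrative numbers only):
#   conda install -c conda-forge libcublas=12.4.* libcusolver=11.6.* libcufft=11.2.*
```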
+
+_Warning: there are contents of OS CUDA Toolkit installs that are not available in the conda-forge packages, including_:
+- Driver libraries, such as (but not limited to)
+  - `libcuda`
+  - `libnvidia-ml` library
+- Older versions of CTK contained these:
+  - Documentation
+  - Samples (used to be included)
+  - Demo suite (limited subset of samples)
+- GDS
+  - Missing file system components
+- fabricmanager
+- `libnvidia_nscq`
+- Imex
+- Nsight-systems
+
+### Installing Subsets of the CTK
+
+Rather than installing all of CUDA at once, users may instead install just the packages that they need.
+For example, to install just libcublas or libcusparse one may run:
+```
+conda install -c conda-forge libcublas cuda-version=<x.y>
+```
+The best way to get a current listing of all the packages the `cuda` metapackage installs is to run
+```
+conda install --dry-run -c conda-forge cuda cuda-version=<x.y>
+```
+Original build order: https://github.com/conda-forge/staged-recipes/issues/21382.
+
+### Metapackages
+
+Existing conda documentation: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/#conda-installation
+
+For convenience, a number of additional metapackages are available:
+- `cuda-runtime`: All CUDA runtime libraries needed to run a CUDA application
+- `cuda-libraries`: All libraries required to run a CUDA application requiring libraries beyond the CUDA runtime (such as the CUDA math libraries) as well as packages needed to perform JIT compilation.
+- `cuda-visual-tools`: GUIs for visualizing and profiling such as Nsight Systems and Nsight Compute
+- `cuda-command-line-tools`: Command line tools for analyzing and profiling such as cupti, cuda-gdb, and Compute Sanitizer.
+- `cuda-tools`: All tools for analyzing and profiling, both GUI (includes cuda-visual-tools) and CLI (includes cuda-command-line-tools)
+
+### CUDA C++ Core Libraries (CCCL)
+
+CCCL is a special case among CUDA packages.
+Because CCCL is 1) header-only, 2) fast-moving, and 3) independently evolving, consumers may want a different (newer) version of CCCL than the one corresponding to their CTK version.
+Instructions on how to install a suitable CCCL package from conda can be found [in the CCCL README](https://github.com/NVIDIA/cccl/?tab=readme-ov-file#conda). (See [this issue](https://github.com/conda-forge/cuda-cccl-impl-feedstock/issues/2) for more information on the history of these packages).
+
+## conda-forge vs nvidia channel
+
+Understanding the difference between the CUDA packages on the conda-forge and nvidia channels requires a bit of history because of how the relationship has evolved over time.
+In particular, how these channels may or may not coexist will depend on the versions of CUDA that you need support for.
+
+### Pre-CUDA 12
+
+Prior to CUDA 12, the only package available on conda-forge was the `cudatoolkit` package, a community-maintained, monolithic package containing the entire repackaged CTK.
+During the CUDA 11 release cycle, NVIDIA began maintaining a set of CUDA Toolkit packages in the nvidia channel.
+Unlike the monolithic conda-forge package, the nvidia channel distributed the CTK split into components such that each library was given its own package.
+This package organization made it possible to install separate components independently and better aligned the conda packaging ecosystem with other package managers, such as those for Linux distributions.
+However, this organization introduced a number of changes that were at times confusing -- such as the introduction of a `cuda-toolkit` (note the hyphen) metapackage that installs a partially overlapping set of components to the original `cudatoolkit` -- and at other times breaking, particularly in conda environments configured to pull packages from both conda-forge and the nvidia channel.
+Therefore, in a CUDA 11 world the conda-forge and nvidia channels were difficult to use in the same environment without some care. + +### CUDA 12.0-12.4 + +With the CUDA 12 release, NVIDIA contributed the new packaging structure to conda-forge, introducing the same set of packages that existed on the nvidia channel as a replacement for the old `cudatoolkit` package on conda-forge.This was done starting with CUDA 12.0 to indicate the breaking nature of these changes compared to the prior CUDA 11.x packaging in conda-forge. +These packages became the standard mechanism for delivering CUDA conda packages. +Due to the scale of the reorganization, the CUDA 12.0, 12.1, and 12.2 releases also involved numerous additional fixes to the packaging structure to better integrate them in the Conda ecosystem. +Due to the number of such changes that were required and the focus on improving the quality of these installations, during this time period no corresponding updates were provided for packages on the nvidia channel. +While the conda-forge and nvidia channel package lists were the same (i.e. the same packages existed in both places with the same core contents like libraries and headers), the nvidia channel did not include many of the incremental fixes made on conda-forge to improve things like symlinks, static library handling, proper package constraints, etc. +As a result, nvidia and conda-forge CUDA packages remained incompatible from CUDA 12.0-12.4. + +### CUDA 12.5+ + +With CUDA 12.5, the nvidia channel was fully aligned with conda-forge. +Packages on both channels are identical, ensuring safe coexistence of the two channels within the same conda environment. + +Going forward, the packages on the two channels should be expected to remain compatible. +However, due to its smaller ecosystem footprint the nvidia channel may be a bit more nimble + +## FAQ + +### What if I see an error saying `__cuda` is too old? 
+ +You will need to update your driver +- [Linux instructions](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#driver-installation) +- [Windows instructions](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html#installing-cuda-development-tools) + +If conda has incorrectly identified the CUDA driver, you can override by setting the `CONDA_OVERRIDE_CUDA` environment variable. + +### Can I install CUDA conda packages in a CPU-only environment (such as free-tier CIs)? + +Yes! All of the CUDA packages can be installed in an environment without the presence of a physical GPU or CUDA driver. +The inter-package dependency is established properly so that this use case is covered. +If you want to test package installation assuming a certain driver version is installed, use the `CONDA_OVERRIDE_CUDA` environment variable mentioned above. diff --git a/recipe/doc/maintainer_guide.md b/recipe/doc/maintainer_guide.md new file mode 100644 index 0000000..a79cafb --- /dev/null +++ b/recipe/doc/maintainer_guide.md @@ -0,0 +1,15 @@ +# Guide for Maintainers of CUDA recipes + +This guide is intended for maintainers of the CUDA packages themselves. + +## Rationale for split packages + +In addition to the standardized dev/static division of libraries, some packages are also divided into multiple pieces for more specialized reasons. + +### nvcc split + +The `nvcc` compiler natively supports cross-compilation, i.e. a single host binary can produce binaries compiled for any target platform it supports without requiring a completely separate binutils installation for each target. +However, target-specific headers are still necessary in order to compile suitable code for the given target. +To support this, the `nvcc` compiler is split into a couple of feedstocks, [`cuda-nvcc`](https://github.com/conda-forge/cuda-nvcc-feedstock/) and [`cuda-nvcc-impl`](https://github.com/conda-forge/cuda-nvcc-impl-feedstock/). 
+These packages split the files so that the compiler package depends only on the platform on which it runs, while the `cuda-nvcc-impl` package depends only on the cross-compilation target and includes the headers (and other files) required for compilation to succeed.
+This way, the two packages may be updated or changed in parallel and will interoperate properly in cross-compilation environments.
diff --git a/recipe/doc/recipe_guide.md b/recipe/doc/recipe_guide.md
new file mode 100644
index 0000000..b562ace
--- /dev/null
+++ b/recipe/doc/recipe_guide.md
@@ -0,0 +1,87 @@
+# Guide for Maintainers of Recipes That Use CUDA
+
+This guide is intended for maintainers of other recipes that depend on CUDA.
+It assumes familiarity with the user guides for both [running CUDA code](./end_user_run_guide.md) and [compiling CUDA code](./end_user_compile_guide.md) with conda-forge packages.
+
+## Best Practices
+
+Recipe maintainers are encouraged not to use the metapackages for specifying dependencies, but to instead specify only the minimal subset of CUDA components required for usage of their libraries.
+The `*-dev` variants of the packages all have [`run_exports`](https://docs.conda.io/projects/conda-build/en/latest/resources/define-metadata.html#export-runtime-requirements) set up such that if your package requires a certain CUDA library at build time, specifying the corresponding dev package as a host requirement will result in your package exporting the corresponding non-dev packages as a runtime dependency.
+
+## CUDA Enhanced Compatibility
+
+Since CUDA 11, [CUDA has offered a number of increased compatibility guarantees](https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html#cuda-compatibility-developer-s-guide).
+The most up to date documentation for these may be found in [the CUDA best practices documentation](https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html#cuda-toolkit-versioning).
+Of particular interest from a conda packaging perspective is the binary backwards compatibility made by the CTK: packages built with newer versions of the CTK will also run with older minor versions (within the same major family) of the CTK installed, assuming that you have properly guarded any usage of APIs introduced in newer versions with suitable checks. +This guide assumes that you already know how to write and compile code that supports such behavior. +If your library is properly configured as such, you will need to do a bit of extra work to ensure that your conda package supports this as well: + +You must [ignore the run exports](https://docs.conda.io/projects/conda-build/en/latest/resources/define-metadata.html#export-runtime-requirements) of any CUDA packages that your package depends on. +Otherwise, the `run_exports` would require the runtime libraries to have versions equal to or greater than the versions used to build the package. +You must add explicit runtime dependencies (or `run_constrained` for soft/optional dependencies) that specify the desired version range for the dependencies whose run exports have been ignored. + +As an example, consider that you have built a package that requires libcublas: + +``` +requirements: + build: + - compiler(‘cuda’) + host: + - libcublas + - cuda-version=12.4 +``` + +By default, at runtime your library will require having the libcublas version corresponding to CUDA 12.4 or newer. 
+To make this compatible with all CUDA 12 minor versions 12.0+, you must add the following: +``` +build: + # Ignore run exports in your build section + ignore_run_exports_from: + - {{ compiler('cuda') }} + - libcurand-dev + +requirements: + run: + # Since we’ve ignored the run export, pin manually, but set the min to just “x” since we support any libcublas within the same major release, including older versions + - pin_compatible(libcublas, min_pin=”x”, max_pin=”x”) +``` + + +## Cross-compilation + +The CUDA recipes are designed to support cross-compilation. +As such, a number of CUDA components on conda-forge are split into `noarch: generic` component packages that are named according to the supported architecture, rather than being architecture-specific packages. +The canonical example is [the cuda-nvcc package](https://github.com/conda-forge/cuda-nvcc-feedstock/blob/main/recipe/meta.yaml) that contains the CUDA `nvcc` compiler. +This package is split into the `cuda-nvcc` package – which is architecture specific and must be installed on the appropriate target platform (e.g. +x86-64 Linux) – and the `cuda-nvcc-${TARGET_PLATFORM}` packages – each of which is architecture-independent and may be installed on any target, but are only suitable for use in compiling code for the specified target platform. +This approach allows using host machines with a single platform to compile code for multiple platforms. + + +## Directory structure + +### Linux + +The conda-forge CUDA packages aim to satisfy two sets of constraints. +On one hand, the packages aim to retain as similar a structure as possible to the CUDA packages that may be installed via system package manager (e.g. +apt). +On the other hand, the packages aim to provide a seamless experience at both build-time and runtime within conda environments. +To satisfy the first requirement, all files in CUDA conda packages are installed into the `$PREFIX/targets` directory. 
+This includes libraries, headers, and packaging files, along with other miscellaneous files that may be present in any given package.
+To satisfy the second requirement, we symlink a number of these files into standard sysroot locations so that they can be found by standard tooling (e.g.
+CMake, compilers, ld, etc).
+Specifically, we apply the following conventions:
+Shared libraries are symlinked into `$PREFIX/lib`.
+This includes the bare name (`libcublas.so`), the SONAME, and the full name.
+Pkgconfig files are installed directly into `$PREFIX/lib/pkgconfig`.
+These are not symlinked from `$PREFIX/targets`, but are directly moved to this location.
+The reason is that pkgconfig files contain relative paths to libraries/headers/etc and the paths cannot be accurate relative to both the `targets` directory and the `lib/pkgconfig` directory.
+Since the latter is what `pkgconfig` will use, we choose to install the files into `lib/pkgconfig` and reroot the paths accordingly.
+Static libraries and header files are not symlinked into the sysroot directories.
+Instead, conda installations of `nvcc` know how to search for these packages in the correct directories.
+
+### Windows
+
+The package structure on Windows differs from the Linux layout described above.
+Unlike a system CUDA install on Windows, the conda packages do not use an `x64` subdirectory.
+
+Library structure: on Windows, `.lib` import libraries used during the build are installed into `%LIBRARY_LIB%`, and `.dll` files used at both build time and run time are installed into `%LIBRARY_BIN%`.

From b80dff0958a7578e4de5bbf5af363a97109e26bd Mon Sep 17 00:00:00 2001
From: Vyas Ramasubramani
Date: Fri, 14 Jun 2024 09:18:16 -0700
Subject: [PATCH 2/6] Apply suggestions from code review

Co-authored-by: Bradley Dice
Co-authored-by: Leo Fang
---
 recipe/doc/end_user_compile_guide.md |  4 +--
 recipe/doc/end_user_run_guide.md     | 32 ++++++++++---------
 recipe/doc/recipe_guide.md           | 46 +++++++++++++---------------
 3 files changed, 41 insertions(+), 41 deletions(-)

diff --git a/recipe/doc/end_user_compile_guide.md b/recipe/doc/end_user_compile_guide.md
index f90f79c..4ead1dd 100644
--- a/recipe/doc/end_user_compile_guide.md
+++ b/recipe/doc/end_user_compile_guide.md
@@ -10,9 +10,9 @@ There are a few other important points that users compiling CUDA e
 
 If you plan to install and build against CUDA packages, you will need to be aware of how libraries are split into packages.
 Packages containing libraries (as opposed to compilers or header-only components) follow specific naming conventions.
-Typically library components of the CTK are split into three pieces: the base package, a `*-dev` package, and a `*-static` package.
+Typically library components of the CUDA Toolkit (CTK) are split into three pieces: the base package, a `*-dev` package, and a `*-static` package.
 Using [the cuBLAS library](https://github.com/conda-forge/libcublas-feedstock) as an example, we have three different packages:
-The base `libcublas` package, which installs the libcublas.so library and is sufficient for use if you are simply installing other packages that require cuBLAS at runtime.
+The base `libcublas` package, which installs the `libcublas.so` library and is sufficient for use if you are simply installing other packages that require cuBLAS at runtime.
 The `libcublas-dev` package, which installs additional files like cuBLAS headers and CMake files.
 This package should be installed if you wish to compile your own code against cuBLAS within a conda environment.
 The `libcublas-static` package, which installs the static cuBLAS library.
diff --git a/recipe/doc/end_user_run_guide.md b/recipe/doc/end_user_run_guide.md
index ccc3fa5..aa227f5 100644
--- a/recipe/doc/end_user_run_guide.md
+++ b/recipe/doc/end_user_run_guide.md
@@ -8,25 +8,27 @@ To run CUDA code, you must have an NVIDIA GPU on your machine and you must insta
 Note that CUDA drivers _cannot_ be installed with conda and must be installed on your system using an appropriate installation method.
 See the [CUDA documentation](https://docs.nvidia.com/cuda/) for instructions on how to install ([Linux](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html), [Windows](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html)).
 
+Note that the CUDA packages are designed to be installable in a *CPU-only* environment (i.e. no CUDA driver or GPU installed); see the FAQ at the end of this page.
+
 ## Installing CUDA
 
 ### Basic Installation
 
-The easiest one-step solution to install the full CUDA Toolkit is to install the [cuda metapackage](https://github.com/conda-forge/cuda-feedstock/) with this command:
+The easiest one-step solution to install the full CUDA Toolkit is to install the [`cuda` metapackage](https://github.com/conda-forge/cuda-feedstock/) with this command:
 
 ```
-conda install -c conda-forge cuda cuda-version=12.4
+conda install -c conda-forge cuda cuda-version=12.5
 ```
 
 Let's break down this command.
 We are requesting the installation of two metapackages here, `cuda` and `cuda-version`.
-The `cuda` metapackage pulls in all the components of the CUDA Toolkit (CTK) and is roughly equivalent to installing the CUDA Toolkit with traditional OS package managers like apt or yum on Linux.
+The `cuda` metapackage pulls in all the components of the CUDA Toolkit (CTK) and is roughly equivalent to installing the CUDA Toolkit with traditional system package managers like `apt` or `yum` on Linux.
 Similarly to such package managers, the separate components may also be installed independently.
 The [`cuda-version` metapackage](https://github.com/conda-forge/cuda-version-feedstock/blob/main/recipe/README.md) is used to select the version of CUDA to install.
 This metapackage is important because individual components of the CTK are typically versioned independently (the current versions may be found in the [release notes](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html)).
 The `cuda-version` metapackage provides a standard way to install the version of a specific CUDA component corresponding to a given version of the CTK.
 This way, you never have to specify a particular version of any package; you just specify the `cuda-version` that you want, then list packages you want installed and conda will take care of finding the right versions for you.
-The above command will install all components of CUDA from the latest patch release of CUDA 12.4.
+The above command will install all components of CUDA from the latest minor release of CUDA 12.5.
 
 _Warning: there are contents of OS CUDA Toolkit installs that are not available in the conda-forge packages, including_:
 - Driver libraries, such as (but not limited to)
@@ -36,19 +38,19 @@ _Warning: there are contents of OS CUDA Toolkit installs that are not available
   - Documentation
   - Samples (used to be included)
   - Demo suite (limited subset of samples)
-- GDS
+- GPUDirect Storage (GDS)
   - Missing file system components
 - fabricmanager
 - `libnvidia_nscq`
-- Imex
-- Nsight-systems
+- IMEX
+- Nsight Systems
 
-### Installing Subsets of the CTK
+### Installing Subsets of the CUDA Toolkit
 
 Rather than installing all of CUDA at once, users may instead install just the packages that they need.
-For example, to install just libcublas or libcusparse one may run:
+For example, to install just `libcublas` and `libcusparse` one may run:
 ```
-conda install -c conda-forge libcublas cuda-version=<x.y>
+conda install -c conda-forge libcublas libcusparse cuda-version=<x.y>
 ```
 The best way to get a current listing of all the packages the `cuda` metapackage installs is to run
 ```
@@ -89,7 +91,7 @@ Therefore, in a CUDA 11 world the conda-forge and nvidia channels were difficult
 
 ### CUDA 12.0-12.4
 
-With the CUDA 12 release, NVIDIA contributed the new packaging structure to conda-forge, introducing the same set of packages that existed on the nvidia channel as a replacement for the old `cudatoolkit` package on conda-forge.This was done starting with CUDA 12.0 to indicate the breaking nature of these changes compared to the prior CUDA 11.x packaging in conda-forge.
-However, due to its smaller ecosystem footprint the nvidia channel may be a bit more nimble ## FAQ ### What if I see an error saying `__cuda` is too old? -You will need to update your driver +The `__cuda` virtual package is used by `conda` to represent the maximum CUDA version fully supported by the display driver. See the [conda docs on virtual packages](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-virtual.html) for more information. + +To update the `__cuda` virtual package, you must install a newer driver: - [Linux instructions](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#driver-installation) - [Windows instructions](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html#installing-cuda-development-tools) -If conda has incorrectly identified the CUDA driver, you can override by setting the `CONDA_OVERRIDE_CUDA` environment variable. +If conda has incorrectly identified the CUDA driver, you can override by setting the `CONDA_OVERRIDE_CUDA` environment variable to a version number like `"12.5"` or `""` to indicate that no CUDA driver is detected. ### Can I install CUDA conda packages in a CPU-only environment (such as free-tier CIs)? Yes! All of the CUDA packages can be installed in an environment without the presence of a physical GPU or CUDA driver. The inter-package dependency is established properly so that this use case is covered. If you want to test package installation assuming a certain driver version is installed, use the `CONDA_OVERRIDE_CUDA` environment variable mentioned above. +Even if the package requires CUDA to run, this allows the packaging and dependency resolution to be tested in a CPU-only environment. 
diff --git a/recipe/doc/recipe_guide.md b/recipe/doc/recipe_guide.md index b562ace..00200ae 100644 --- a/recipe/doc/recipe_guide.md +++ b/recipe/doc/recipe_guide.md @@ -6,46 +6,48 @@ It assumes familiarity with the user guides for both [running CUDA code](./end_u ## Best Practices Recipe maintainers are encouraged not to use the metapackages for specifying dependencies, but to instead specify only the minimal subset of CUDA components required for usage of their libraries. -The `*-dev` variants of the packages all have [`run_exports`](https://docs.conda.io/projects/conda-build/en/latest/resources/define-metadata.html#export-runtime-requirements) set up such that if your package requires a certain CUDA library at build time, specifying the corresponding dev package as a host requirement will result in your package exporting the corresponding non-dev packages as a runtime dependency. +The `*-dev` variants of the packages all have [`run_exports`](https://docs.conda.io/projects/conda-build/en/latest/resources/define-metadata.html#export-runtime-requirements) set up such that if your package requires a certain CUDA library at run time, specifying the corresponding `dev` package as a host requirement will result in your package exporting the corresponding non-dev package as a runtime dependency. ## CUDA Enhanced Compatibility Since CUDA 11, [CUDA has offered a number of increased compatibility guarantees](https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html#cuda-compatibility-developer-s-guide). The most up to date documentation for these may be found in [the CUDA best practices documentation](https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html#cuda-toolkit-versioning). 
-Of particular interest from a conda packaging perspective is the binary backwards compatibility made by the CTK: packages built with newer versions of the CTK will also run with older minor versions (within the same major family) of the CTK installed, assuming that you have properly guarded any usage of APIs introduced in newer versions with suitable checks.
+Of particular interest from a conda packaging perspective is the binary backwards compatibility provided by the CUDA Toolkit (CTK): packages built with newer versions of the CTK will also run with older minor versions (within the same major family) of the CTK installed, assuming that your package has properly guarded any usage of APIs introduced in newer versions with suitable checks.
 This guide assumes that you already know how to write and compile code that supports such behavior.
 If your library is properly configured as such, you will need to do a bit of extra work to ensure that your conda package supports this as well:
-You must [ignore the run exports](https://docs.conda.io/projects/conda-build/en/latest/resources/define-metadata.html#export-runtime-requirements) of any CUDA packages that your package depends on.
-Otherwise, the `run_exports` would require the runtime libraries to have versions equal to or greater than the versions used to build the package.
-You must add explicit runtime dependencies (or `run_constrained` for soft/optional dependencies) that specify the desired version range for the dependencies whose run exports have been ignored.
+- You must [ignore the `run_exports`](https://docs.conda.io/projects/conda-build/en/latest/resources/define-metadata.html#export-runtime-requirements) of any CUDA packages that your package depends on. Otherwise, the `run_exports` would require the runtime libraries to have versions equal to or greater than the versions used to build the package.
+- You must add explicit runtime dependencies (or `run_constrained` for soft/optional dependencies) that specify the desired version range for the dependencies whose `run_exports` have been ignored.
 
-As an example, consider that you have built a package that requires libcublas:
+As an example, consider that you have built a package that requires `libcublas`:
 
-```
+
+```yaml
 requirements:
   build:
-    - compiler(‘cuda’)
+    - compiler('cuda')
   host:
     - libcublas
     - cuda-version=12.4
 ```
 
-By default, at runtime your library will require having the libcublas version corresponding to CUDA 12.4 or newer.
+By default, at runtime your library will require having the `libcublas` version corresponding to CUDA 12.4 or newer.
 To make this compatible with all CUDA 12 minor versions 12.0+, you must add the following:
 
-```
+```yaml
 build:
   # Ignore run exports in your build section
   ignore_run_exports_from:
     - {{ compiler('cuda') }}
-    - libcurand-dev
+    - libcublas-dev
 
 requirements:
   run:
-    # Since we’ve ignored the run export, pin manually, but set the min to just “x” since we support any libcublas within the same major release, including older versions
-    - pin_compatible(libcublas, min_pin=”x”, max_pin=”x”)
+    # Since we’ve ignored the run export, pin manually, but set the min to just "x" since we support any libcublas within the same major release, including older versions
+    - pin_compatible("libcublas", min_pin="x", max_pin="x")
 ```
+
+For packages that need to support both CUDA major versions 11 & 12, you will need to use selectors and/or Jinja tricks to separate out the requirements for CUDA 11 and CUDA 12. [cupy-feedstock](https://github.com/conda-forge/cupy-feedstock) offers a good example.
+
 
 ## Cross-compilation
 
@@ -53,7 +55,7 @@ The CUDA recipes are designed to support cross-compilation.
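The CUDA 11 / CUDA 12 split mentioned above might be sketched with conda-build selectors along these lines. This is a hypothetical fragment, not taken from the cupy-feedstock; the `cuda_compiler_version` variant variable is the usual conda-forge mechanism, but the exact package names and conditions here are assumptions:

```yaml
# Hypothetical meta.yaml fragment: branch the host requirements on the
# CUDA major version exposed via the cuda_compiler_version variant.
requirements:
  host:
    # CUDA 11: the monolithic cudatoolkit package provides the libraries
    - cudatoolkit {{ cuda_compiler_version }}   # [(cuda_compiler_version or "").startswith("11")]
    # CUDA 12: depend on the individual split packages instead
    - libcublas-dev                             # [(cuda_compiler_version or "").startswith("12")]
    - cuda-version {{ cuda_compiler_version }}  # [(cuda_compiler_version or "").startswith("12")]
```

Each selector comment keeps a line in the recipe only for the matching CUDA variant, so a single recipe can be rendered once per CUDA major version.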
As such, a number of CUDA components on conda-forge are split into `noarch: generic` component packages that are named according to the supported architecture, rather than being architecture-specific packages. The canonical example is [the cuda-nvcc package](https://github.com/conda-forge/cuda-nvcc-feedstock/blob/main/recipe/meta.yaml) that contains the CUDA `nvcc` compiler. This package is split into the `cuda-nvcc` package – which is architecture specific and must be installed on the appropriate target platform (e.g. -x86-64 Linux) – and the `cuda-nvcc-${TARGET_PLATFORM}` packages – each of which is architecture-independent and may be installed on any target, but are only suitable for use in compiling code for the specified target platform. +x86-64 Linux) – and the `cuda-nvcc_${TARGET_PLATFORM}` packages – each of which is architecture-independent and may be installed on any target, but are only suitable for use in compiling code for the specified target platform. This approach allows using host machines with a single platform to compile code for multiple platforms. @@ -62,22 +64,16 @@ This approach allows using host machines with a single platform to compile code ### Linux The conda-forge CUDA packages aim to satisfy two sets of constraints. -On one hand, the packages aim to retain as similar a structure as possible to the CUDA packages that may be installed via system package manager (e.g. -apt). -On the other hand, the packages aim to provide a seamless experience at both build-time and runtime within conda environments. +On one hand, the packages aim to retain as similar a structure as possible to the CUDA packages that may be installed via system package manager (e.g. `apt` and `yum`) while supporting cross-compilation. +On the other hand, the packages aim to provide a seamless experience at both build time and run time within conda environments. 
To satisfy the first requirement, all files in CUDA conda packages are installed into the `$PREFIX/targets` directory. This includes libraries, headers, and packaging files, along with other miscellaneous files that may be present in any given package. To satisfy the second requirement, we symlink a number of these files into standard sysroot locations so that they can be found by standard tooling (e.g. CMake, compilers, ld, etc). Specifically, we apply the following conventions: -Shared libraries are symlinked into `$PREFIX/lib`. -This includes the bare name (`libcublas.so`), the SONAME, and the full name. -Pkgconfig files are installed directly into `$PREFIX/lib/pkgconfig`. -These are not symlinked from `$PREFIX/targets`, but are directly moved to this location. -The reason is that pkgconfig files contain relative paths to libraries/headers/etc and the paths cannot be accurate relative to both the `targets` directory and the `lib/pkgconfig` directory. -Since the latter is what `pkgconfig` will use, we choose to install the files into `lib/pkgconfig` and reroot the paths accordingly. -Static libraries and header files are not symlinked into the sysroot directories. -Instead, conda installations of `nvcc` know how to search for these packages in the correct directories. +- Shared libraries are symlinked into `$PREFIX/lib`. This includes the bare name (`libcublas.so`), the SONAME, and the full name. +- Pkgconfig files are installed directly into `$PREFIX/lib/pkgconfig`. These are not symlinked from `$PREFIX/targets`, but are directly moved to this location. The reason is that pkgconfig files contain relative paths to libraries/headers/etc and the paths cannot be accurate relative to both the `targets` directory and the `lib/pkgconfig` directory. Since the latter is what `pkgconfig` will use, we choose to install the files into `lib/pkgconfig` and reroot the paths accordingly. +- Static libraries and header files are not symlinked into the sysroot directories. 
Instead, conda installations of `nvcc` know how to search for these packages in the correct directories.
 
 ### Windows
 
From 3e92600ff6671c85a194f7075a719d32cb892f0d Mon Sep 17 00:00:00 2001
From: Vyas Ramasubramani
Date: Fri, 14 Jun 2024 17:59:35 +0000
Subject: [PATCH 3/6] Address PR feedback

---
 recipe/doc/end_user_run_guide.md | 32 ++++++++++++++++----------------
 recipe/doc/recipe_guide.md       | 13 +++----------
 2 files changed, 19 insertions(+), 26 deletions(-)

diff --git a/recipe/doc/end_user_run_guide.md b/recipe/doc/end_user_run_guide.md
index aa227f5..1406f49 100644
--- a/recipe/doc/end_user_run_guide.md
+++ b/recipe/doc/end_user_run_guide.md
@@ -30,14 +30,13 @@ The `cuda-version` metapackage provides a standard way to install the version of
 This way, you never have to specify a particular version of any package; you just specify the `cuda-version` that you want, then list packages you want installed and conda will take care of finding the right versions for you.
 The above command will install all components of CUDA from the latest minor release of CUDA 12.5.
 
-_Warning: there are contents of OS CUDA Toolkit installs that are not available in the conda-forge packages, including_:
+_Warning: there are contents of OS CUDA Toolkit installs that are not available in the `conda-forge` packages, including_:
 - Driver libraries, such as (but not limited to)
   - `libcuda`
   - `libnvidia-ml` library
   - Older versions of CTK contained these
 - Documentation
 - Samples (used to be)
-  - Demo suite (limited subset of samples)
 - GPUDirect Storage (GDS)
   - Missing file system components
 - fabricmanager
@@ -56,7 +55,7 @@ The best way to get a current listing is to run
 ```
 conda install --dry-run -c conda-forge cuda cuda-version=
 ```
-Original build order: https://github.com/conda-forge/staged-recipes/issues/21382.
+For a complete listing of the packages that were originally created, see [this issue](https://github.com/conda-forge/staged-recipes/issues/21382) ### Metapackages @@ -65,7 +64,7 @@ Existing conda documentation: https://docs.nvidia.com/cuda/cuda-installation-gui For convenience, a number of additional metapackages are available: - `cuda-runtime`: All CUDA runtime libraries needed to run a CUDA application - `cuda-libraries`: All libraries required to run a CUDA application requiring libraries beyond the CUDA runtime (such as the CUDA math libraries) as well as packages needed to perform JIT compilation. -- `cuda-visual-tools`: GUIs for visualizing and profiling such as Nsight Systems and Nsight Compute +- `cuda-visual-tools`: GUIs for visualizing and profiling such as Nsight Compute - `cuda-command-line-tools`: Command line tools for analyzing and profiling such as cupti, cuda-gdb, and Compute Sanitizer. - `cuda-tools`: All tools for analyzing and profiling, both GUI (includes cuda-visual-tools) and CLI (includes cuda-command-line-tools) @@ -75,32 +74,33 @@ CCCL is a special case among CUDA packages. Due to 1) being header-only, 2) fast-moving, and 3) independently-evolving, consumers may want a different (newer) version of CCCL than the one corresponding to their CTK version. Instructions on how to install a suitable CCCL package from conda can be found [in the CCCL README](https://github.com/NVIDIA/cccl/?tab=readme-ov-file#conda). (See [this issue](https://github.com/conda-forge/cuda-cccl-impl-feedstock/issues/2) for more information on the history of these packages). -## conda-forge vs nvidia channel +## `conda-forge` vs `nvidia` channel -Understanding the difference between the CUDA packages on the conda-forge and nvidia channels requires a bit of history because of how the relationship has evolved over time. 
+Understanding the difference between the CUDA packages on the `conda-forge` and `nvidia` channels requires a bit of history because of how the relationship has evolved over time. In particular, how these channels may or may not coexist will depend on the versions of CUDA that you need support for. ### Pre-CUDA 12: -Prior to CUDA 12, the only package available on conda-forge was the `cudatoolkit` package, a community-maintained, monolithic package containing the entire repackaged CTK. -During the CUDA 11 release cycle, NVIDIA began maintaining a set of CUDA Toolkit packages in the nvidia channel. -Unlike the monolithic conda-forge package, the nvidia channel distributed the CTK split into components such that each library was given its own package. +Prior to CUDA 12, the only package available on `conda-forge` was the `cudatoolkit` package, a community-maintained, monolithic package containing the entire repackaged CTK. +During the CUDA 11 release cycle, NVIDIA began maintaining a set of CUDA Toolkit packages in the `nvidia` channel. +Unlike the monolithic `conda-forge` package, the `nvidia` channel distributed the CTK split into components such that each library was given its own package. This package organization made it possible to install separate components independently and better aligned the conda packaging ecosystem with other package managers, such as those for Linux distributions. -However, this organization introduced a number of changes that were at times confusing -- such as the introduction of a `cuda-toolkit` (note the hyphen) metapackage that installs a partially overlapping set of components to the original `cudatoolkit` -- and at other times breaking, particularly in conda environments configured to pull packages from both conda-forge and the nvidia channel. -Therefore, in a CUDA 11 world the conda-forge and nvidia channels were difficult to use in the same environment without some care. 
+However, this organization introduced a number of changes that were at times confusing -- such as the introduction of a `cuda-toolkit` (note the hyphen) metapackage that installs a partially overlapping set of components to the original `cudatoolkit` -- and at other times breaking, particularly in conda environments configured to pull packages from both `conda-forge` and the `nvidia` channel. +Therefore, in a CUDA 11 world the `conda-forge` and `nvidia` channels were difficult to use in the same environment without some care. ### CUDA 12.0-12.4 -With the CUDA 12 release, NVIDIA contributed the new packaging structure to conda-forge, introducing the same set of packages that existed on the nvidia channel as a replacement for the old `cudatoolkit` package on conda-forge. This was done starting with CUDA 12.0 to indicate the breaking nature of these changes compared to the prior CUDA 11.x packaging in conda-forge. +With the CUDA 12 release, NVIDIA contributed the new packaging structure to `conda-forge`, introducing the same set of packages that existed on the `nvidia` channel as a replacement for the old `cudatoolkit` package on `conda-forge`. +This was done starting with CUDA 12.0 to indicate the breaking nature of these changes compared to the prior CUDA 11.x packaging in `conda-forge`. These packages became the standard mechanism for delivering CUDA conda packages. Due to the scale of the reorganization, the CUDA 12.0, 12.1, and 12.2 releases also involved numerous additional fixes to the packaging structure to better integrate them in the Conda ecosystem. -Due to the number of such changes that were required and the focus on improving the quality of these installations, during this time period no corresponding updates were provided for packages on the nvidia channel. -While the conda-forge and nvidia channel package lists were the same (i.e. 
the same packages existed in both places with the same core contents like libraries and headers), the nvidia channel did not include many of the incremental fixes made on conda-forge to improve things like symlinks, static library handling, proper package constraints, etc. -As a result, nvidia and conda-forge CUDA packages remained incompatible from CUDA 12.0-12.4. +Due to the number of such changes that were required and the focus on improving the quality of these installations, during this time period no corresponding updates were provided for packages on the `nvidia` channel. +While the `conda-forge` and `nvidia` channel package lists were the same (i.e. the same packages existed in both places with the same core contents like libraries and headers), the `nvidia` channel did not include many of the incremental fixes made on `conda-forge` to improve things like symlinks, static library handling, proper package constraints, etc. +As a result, `nvidia` and `conda-forge` CUDA packages remained incompatible from CUDA 12.0-12.4. ### CUDA 12.5+ -With CUDA 12.5, the nvidia channel was fully aligned with conda-forge. +With CUDA 12.5, the `nvidia` channel was fully aligned with `conda-forge`. Packages on both channels are identical, ensuring safe coexistence of the two channels within the same conda environment. Going forward, the packages on the two channels should be expected to remain compatible. diff --git a/recipe/doc/recipe_guide.md b/recipe/doc/recipe_guide.md index 00200ae..023d914 100644 --- a/recipe/doc/recipe_guide.md +++ b/recipe/doc/recipe_guide.md @@ -1,7 +1,7 @@ # Guide for Maintainers of Recipes That Use CUDA This guide is intended for maintainers of other recipes that depend on CUDA. -It assumes familiarity with the user guides for both [running CUDA code](./end_user_run_guide.md) and [compiling CUDA code](./end_user_compile_guide.md) with conda-forge packages. 
+It assumes familiarity with the user guides for both [running CUDA code](./end_user_run_guide.md) and [compiling CUDA code](./end_user_compile_guide.md) with `conda-forge` packages. ## Best Practices @@ -52,7 +52,7 @@ For packages that need to support both CUDA major versions 11 & 12, you will nee ## Cross-compilation The CUDA recipes are designed to support cross-compilation. -As such, a number of CUDA components on conda-forge are split into `noarch: generic` component packages that are named according to the supported architecture, rather than being architecture-specific packages. +As such, a number of CUDA components on `conda-forge` are split into `noarch: generic` component packages that are named according to the supported architecture, rather than being architecture-specific packages. The canonical example is [the cuda-nvcc package](https://github.com/conda-forge/cuda-nvcc-feedstock/blob/main/recipe/meta.yaml) that contains the CUDA `nvcc` compiler. This package is split into the `cuda-nvcc` package – which is architecture specific and must be installed on the appropriate target platform (e.g. x86-64 Linux) – and the `cuda-nvcc_${TARGET_PLATFORM}` packages – each of which is architecture-independent and may be installed on any target, but are only suitable for use in compiling code for the specified target platform. @@ -63,7 +63,7 @@ This approach allows using host machines with a single platform to compile code ### Linux -The conda-forge CUDA packages aim to satisfy two sets of constraints. +The `conda-forge` CUDA packages aim to satisfy two sets of constraints. On one hand, the packages aim to retain as similar a structure as possible to the CUDA packages that may be installed via system package manager (e.g. `apt` and `yum`) while supporting cross-compilation. On the other hand, the packages aim to provide a seamless experience at both build time and run time within conda environments. 
To satisfy the first requirement, all files in CUDA conda packages are installed into the `$PREFIX/targets` directory. @@ -74,10 +74,3 @@ Specifically, we apply the following conventions: - Shared libraries are symlinked into `$PREFIX/lib`. This includes the bare name (`libcublas.so`), the SONAME, and the full name. - Pkgconfig files are installed directly into `$PREFIX/lib/pkgconfig`. These are not symlinked from `$PREFIX/targets`, but are directly moved to this location. The reason is that pkgconfig files contain relative paths to libraries/headers/etc and the paths cannot be accurate relative to both the `targets` directory and the `lib/pkgconfig` directory. Since the latter is what `pkgconfig` will use, we choose to install the files into `lib/pkgconfig` and reroot the paths accordingly. - Static libraries and header files are not symlinked into the sysroot directories. Instead, conda installations of `nvcc` know how to search for these packages in the correct directories. - -### Windows - -Package structure on Windows. -Doesn’t have `x64` directory. - -Library structure, on Windows this would be `%LIBRARY_LIB%` for `.lib` files used during the build `%LIBRARY_BIN%` and `.dll` files used at build time and run time From 685b0e995dcf053ab8dc1228b9867e67bdc21cd9 Mon Sep 17 00:00:00 2001 From: Vyas Ramasubramani Date: Tue, 18 Jun 2024 11:16:17 -0700 Subject: [PATCH 4/6] Apply suggestions from code review Co-authored-by: Leo Fang --- recipe/doc/end_user_compile_guide.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/recipe/doc/end_user_compile_guide.md b/recipe/doc/end_user_compile_guide.md index 4ead1dd..4d1c723 100644 --- a/recipe/doc/end_user_compile_guide.md +++ b/recipe/doc/end_user_compile_guide.md @@ -12,11 +12,11 @@ If you plan to install and build against CUDA packages, you will need to be awar Packages containing libraries (as opposed to compilers or header-only components) follow specific naming conventions. 
Typically library components of the CUDA Toolkit (CTK) are split into three pieces: the base package, a `*-dev` package, and a `*-static` package. Using [the cuBLAS library](https://github.com/conda-forge/libcublas-feedstock) as an example, we have three different packages: -The base `libcublas` package, which installs the `libcublas.so` library and is sufficient for use if you are simply installing other packages that require cuBLAS at runtime. -The `libcublas-dev` package, which installs additional files like cuBLAS headers and CMake files. -This package should be installed if you wish to compile your own code against cuBlas within a conda environment. -The `libcublas-static` package, which installs the static cuBLAS library. -This library should be installed if you wish to compile your own code against a static cuBLAS within a conda environment. +The base `libcublas` package, which installs the `libcublas.so.X` symlink and the `libcublas.so.X.Y` shared library, is sufficient for use if you are simply installing other packages that require cuBLAS at runtime. +The `libcublas-dev` package installs additional files like cuBLAS headers, the `libcublas.so` symlink, and CMake files. +This package should be installed if you wish to compile your own code dynamically linking against cuBLAS within a conda environment. +The `libcublas-static` package installs the static `libcublas_static.a` library. +This library should be installed if you wish to compile your own code linking against a static cuBLAS within a conda environment. Typically the `*-static` packages will require the `*-dev` packages to be installed in order to provide the necessary packaging (CMake, pkg-config) files to discover the library, but this is not currently enforced by the packages themselves. ## Development Metapackages @@ -24,4 +24,4 @@ Typically the `*-static` packages will require the `*-dev` packages to be instal The above discussion of naming also applies to metapackages. 
 For instance, the `cuda-libraries` package contains all the runtime libraries, while `cuda-libraries-dev` also includes dependencies on the corresponding `*-dev` packages.
 In addition, for the purposes of development there are a few additional key metapackages:
-- `cuda-compiler`: All packages required to compile a minimal CUDA program (one that does not require e.g. extra math libraries like cuBLAS or cuSparse).
+- `cuda-compiler`: All packages required to compile a minimal CUDA program (one that does not require e.g. extra math libraries like cuBLAS or cuSPARSE).

From 10a93989ce5cd9ea8cb7e5e8e820c85ad57ccb81 Mon Sep 17 00:00:00 2001
From: Vyas Ramasubramani
Date: Tue, 18 Jun 2024 18:23:26 +0000
Subject: [PATCH 5/6] Minor edit

---
 recipe/doc/maintainer_guide.md | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/recipe/doc/maintainer_guide.md b/recipe/doc/maintainer_guide.md
index a79cafb..5a17112 100644
--- a/recipe/doc/maintainer_guide.md
+++ b/recipe/doc/maintainer_guide.md
@@ -8,8 +8,7 @@ In addition to the standardized dev/static division of libraries, some packages
 
 ### nvcc split
 
-The `nvcc` compiler natively supports cross-compilation, i.e. a single host binary can produce binaries compiled for any target platform it supports without requiring a completely separate binutils installation for each target.
-However, target-specific headers are still necessary in order to compile suitable code for the given target.
+While the `nvcc` compiler natively supports cross-compilation, target-specific headers are still needed to compile suitable code for a given target.
 To support this, the `nvcc` compiler is split into a couple of feedstocks, [`cuda-nvcc`](https://github.com/conda-forge/cuda-nvcc-feedstock/) and [`cuda-nvcc-impl`](https://github.com/conda-forge/cuda-nvcc-impl-feedstock/).
These packages split the files such that we can have the compiler package be dependent exclusively on the platform for which it is compiled while the `cuda-nvcc-impl` package is dependent only on the cross-compilation target and includes the required headers (and other files) such that compilation will succeed. This way, the two packages may be updated or changed in parallel and will interoperate properly in cross-compilation environments. From 47b3164511271f11b51b3e4925e6868ff3d58b91 Mon Sep 17 00:00:00 2001 From: Bradley Dice Date: Tue, 18 Jun 2024 13:45:36 -0500 Subject: [PATCH 6/6] Apply suggestions from code review --- recipe/doc/end_user_run_guide.md | 10 +++++----- recipe/doc/recipe_guide.md | 12 ++++++------ 2 files changed, 11 insertions(+), 11 deletions(-) diff --git a/recipe/doc/end_user_run_guide.md b/recipe/doc/end_user_run_guide.md index 1406f49..bd3027e 100644 --- a/recipe/doc/end_user_run_guide.md +++ b/recipe/doc/end_user_run_guide.md @@ -51,11 +51,11 @@ For example, to install just `libcublas` and `libcusparse` one may run: ``` conda install -c conda-forge libcublas libcusparse cuda-version= ``` -The best way to get a current listing is to run +The best way to get a current listing is to run: ``` conda install --dry-run -c conda-forge cuda cuda-version= ``` -For a complete listing of the packages that were originally created, see [this issue](https://github.com/conda-forge/staged-recipes/issues/21382) +For a complete listing of the packages that were originally created, see [this issue](https://github.com/conda-forge/staged-recipes/issues/21382). 
### Metapackages @@ -63,9 +63,9 @@ Existing conda documentation: https://docs.nvidia.com/cuda/cuda-installation-gui For convenience, a number of additional metapackages are available: - `cuda-runtime`: All CUDA runtime libraries needed to run a CUDA application -- `cuda-libraries`: All libraries required to run a CUDA application requiring libraries beyond the CUDA runtime (such as the CUDA math libraries) as well as packages needed to perform JIT compilation. +- `cuda-libraries`: All libraries required to run a CUDA application requiring libraries beyond the CUDA runtime (such as the CUDA math libraries) as well as packages needed to perform JIT compilation - `cuda-visual-tools`: GUIs for visualizing and profiling such as Nsight Compute -- `cuda-command-line-tools`: Command line tools for analyzing and profiling such as cupti, cuda-gdb, and Compute Sanitizer. +- `cuda-command-line-tools`: Command line tools for analyzing and profiling such as cupti, cuda-gdb, and Compute Sanitizer - `cuda-tools`: All tools for analyzing and profiling, both GUI (includes cuda-visual-tools) and CLI (includes cuda-command-line-tools) ### CUDA C++ Core Libraries (CCCL) @@ -103,7 +103,7 @@ As a result, `nvidia` and `conda-forge` CUDA packages remained incompatible from With CUDA 12.5, the `nvidia` channel was fully aligned with `conda-forge`. Packages on both channels are identical, ensuring safe coexistence of the two channels within the same conda environment. -Going forward, the packages on the two channels should be expected to remain compatible. +Going forward, CUDA packages on the `conda-forge` and `nvidia` channels should be expected to remain compatible. 
## FAQ diff --git a/recipe/doc/recipe_guide.md b/recipe/doc/recipe_guide.md index 023d914..330494d 100644 --- a/recipe/doc/recipe_guide.md +++ b/recipe/doc/recipe_guide.md @@ -25,9 +25,9 @@ As an example, consider that you have built a package that requires `libcublas`: ```yaml requirements: build: - - compiler('cuda') + - {{ compiler('cuda') }} host: - - libcublas + - libcublas-dev - cuda-version=12.4 ``` @@ -42,8 +42,8 @@ build: requirements: run: - # Since we’ve ignored the run export, pin manually, but set the min to just "x" since we support any libcublas within the same major release, including older versions - - pin_compatible("libcublas", min_pin="x", max_pin="x") + # Since we've ignored the run export, we pin manually, but set the min to "x" since we support any libcublas within the same major release, including older versions + - {{ pin_compatible("libcublas", min_pin="x", max_pin="x") }} ``` For packages that need to support both CUDA major versions 11 & 12, you will need to use selectors and/or Jinja tricks to separate out the requirements for CUDA 11 and CUDA 12. [cupy-feedstock](https://github.com/conda-forge/cupy-feedstock) offers a good example. @@ -54,8 +54,8 @@ For packages that need to support both CUDA major versions 11 & 12, you will nee The CUDA recipes are designed to support cross-compilation. As such, a number of CUDA components on `conda-forge` are split into `noarch: generic` component packages that are named according to the supported architecture, rather than being architecture-specific packages. The canonical example is [the cuda-nvcc package](https://github.com/conda-forge/cuda-nvcc-feedstock/blob/main/recipe/meta.yaml) that contains the CUDA `nvcc` compiler. -This package is split into the `cuda-nvcc` package – which is architecture specific and must be installed on the appropriate target platform (e.g. 
-x86-64 Linux) – and the `cuda-nvcc_${TARGET_PLATFORM}` packages – each of which is architecture-independent and may be installed on any target, but are only suitable for use in compiling code for the specified target platform. +This package is split into the `cuda-nvcc` package -- which is architecture specific and must be installed on the appropriate target platform (e.g. +x86-64 Linux) -- and the `cuda-nvcc_${TARGET_PLATFORM}` packages -- each of which is architecture-independent and may be installed on any target, but are only suitable for use in compiling code for the specified target platform. This approach allows using host machines with a single platform to compile code for multiple platforms.
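As a hedged sketch of the cross-compilation setup described above: the target-specific package name follows the `cuda-nvcc_${TARGET_PLATFORM}` split discussed in the text, but the environment name, version, and the exact set of additional packages a real cross build would need are assumptions:

```shell
# On a linux-64 host machine, create an environment whose CUDA compiler
# targets linux-aarch64. The cuda-nvcc_linux-aarch64 package is
# noarch: generic, so it can be installed on the linux-64 host while
# providing the aarch64 target files needed by nvcc.
conda create -n cuda-cross -c conda-forge \
    cuda-nvcc_linux-aarch64 cuda-version=12.5
```

This is what allows a single-platform build farm to produce binaries for multiple target platforms.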