Adding pyflamegpu #24483
Conversation
Hi! This is the friendly automated conda-forge-linting service. I wanted to let you know that I linted all conda-recipes in your PR. Here's what I've got...
For recipes/pyflamegpu:
For recipes/pyflamegpu-no_seatbelts:
For recipes/pyflamegpu-visualisation:
For recipes/pyflamegpu-visualisation-no_seatbelts:
Not yet addressed the sha, as development is still in progress.
@conda-forge-admin, please ping conda-forge/staged-recipes. My questions primarily relate to CUDA and build variants/metapackages; this is a Python/C++ hybrid package that uses CUDA. I've been working through creating recipes and have run into a few questions (although some are more general). I don't believe any of these are answered in the knowledge base.
Our package relies on CUDA, and hence the tests specified in the recipe cannot run on machines without a GPU.
Hi! This is the friendly automated conda-forge-webservice. I was asked to ping @conda-forge/staged-recipes and so here I am doing that.
Before this change, I tested that the package fully builds on Linux with the tests commented out.
I agree to be a maintainer.
You can optionally add keywords to the build string of each build variant in order to make it easier to see which variant is installed in an environment, e.g. a `visualisation` keyword for the visualisation variant.
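For illustration, a hedged sketch of what such a build-string keyword might look like in a meta.yaml (the `visualisation` suffix and the hash/number layout are assumptions following common conda-forge patterns, not part of this PR):

```yaml
build:
  # Prefix the default hash/number string with a keyword naming the variant,
  # so the installed package shows up as e.g. "visualisation_h1a2b3c_0".
  string: visualisation_h{{ PKG_HASH }}_{{ PKG_BUILDNUM }}  # hypothetical keyword
```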
Our infrastructure creates separate build jobs for each CUDA version. We have a tool called conda-smithy which parses recipes and decides which build variants to automatically insert into the build matrix. The same process is used for building against multiple Python versions.
We do not offer build runners with GPUs, so you need to skip any tests that require loading the CUDA driver.
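In a recipe, that advice might look like the following sketch: an import-only smoke test, assuming the import itself does not initialise the CUDA driver (the exact test contents are an assumption, not taken from this PR):

```yaml
test:
  imports:
    # Import-only check; anything that creates a device context or launches
    # a kernel would fail on the GPU-less CI runners and must be skipped.
    - pyflamegpu
```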
The build runners do not have any GPUs. Ensure that you are using our CMAKE_ARGS environment variable and the compilers that are set in the environment variables CC, CXX, etc. Also, only the Linux runner is set up for CUDA here in staged-recipes. There is no runner for Windows + CUDA until your recipe is given its own feedstock.
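A minimal sketch of a build script honouring those variables (the specific CMake invocation is illustrative, not taken from this PR):

```yaml
build:
  script:
    # CMAKE_ARGS carries conda-forge's toolchain settings (prefix, sysroot,
    # cross-compilation flags); CC/CXX point at the activated compilers.
    - cmake ${CMAKE_ARGS} -S . -B build -DCMAKE_C_COMPILER=$CC -DCMAKE_CXX_COMPILER=$CXX  # [linux]
    - cmake --build build --parallel ${CPU_COUNT}  # [linux]
```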
Here in staged-recipes, the package will only be built against CUDA 11.2 on Linux. Once in the feedstock, it will be re-rendered and then built against 11.2, 11.8, and 12.x. Here are some examples that are similar to your recipes: #23415 -> https://github.com/conda-forge/carterbox-torch-radon-feedstock
# https://docs.conda.io/projects/conda-build/en/3.21.x/resources/compiler-tools.html#using-your-customized-compiler-package-with-conda-build-3
# https://github.com/rapidsai/cuml/blob/branch-23.12/conda/recipes/libcuml/conda_build_config.yaml
# https://github.com/conda-forge/conda-forge-pinning-feedstock/blob/main/recipe/conda_build_config.yaml
cuda_compiler:
  - cuda-nvcc
#  - nvcc  # requires 'conda build -c nvidia conda' on Windows
cuda_compiler_version:
#  - 11.2  # [(linux or win64) and os.environ.get("CF_CUDA_ENABLED", "False") == "True"]
  - 12.0
cuda_compiler_version_min:
#  - 11.2  # [linux or win64]
  - 12.0
c_compiler:    # [win]
  - vs2022     # [win]
cxx_compiler:  # [win]
  - vs2022     # [win]
Delete this file. CUDA and compiler variants are managed by our infrastructure.
missing_dso_whitelist:
  - $RPATH/libcuda.so.1       # [linux] Ignore as this comes from the CUDA driver
  - $RPATH/libnvrtc.so.12     # [linux] Conda-build can't find this in run reqs cuda-nvrtc
  - $RPATH/libcudart.so.12    # [linux] Conda-build can't find this in run reqs cuda-cudart
  - $RPATH/nvrtc64_120_0.dll  # [win64] Conda-build can't find this in run reqs cuda-nvrtc
  - $RPATH/cudart64_12.dll    # [win64] Conda-build can't find this in run reqs cuda-cudart
These should not be needed because in staged-recipes there is only one CUDA variant available: CUDA 11.2.
build:
  number: 0
  script_env:
Suggested change:
build:
  number: 0
  skip: true  # [cuda_compiler_version == "None"]
  skip: true  # [py < 310]
  script_env:
Skip building the default non-CUDA variant permanently. Skip the lower Python versions for faster debugging for now, then unskip them later.
ignore_run_exports:
  - python        # [linux] Not clear why conda thinks this isn't used
  - astpretty     # [linux, win64] Not clear why conda thinks this isn't used
  - vc14_runtime  # [win64] Not clear why conda thinks this isn't used (we aren't statically linking)
  - ucrt          # [win64] Not clear why conda thinks this isn't used (we aren't statically linking)
This is not the purpose of ignore_run_exports. You should only add host packages here that you know are NOT being used at runtime, e.g. a C library of which you only use the header-only APIs.
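A hedged sketch of the legitimate use case described above (the package name `headeronlylib` is hypothetical):

```yaml
build:
  ignore_run_exports:
    # Hypothetical: only headeronlylib's header-only APIs are used at build
    # time, so the runtime dependency injected by its run_export is noise
    # and can safely be dropped from the run requirements.
    - headeronlylib
requirements:
  host:
    - headeronlylib
```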
- cuda-cudart-dev  # [linux]
- cuda-nvrtc-dev   # [linux]
- libcurand-dev    # [linux]
Suggested change:
{% if cuda_major == 12 %}
- cuda-cudart-dev  # [linux]
- cuda-nvrtc-dev   # [linux]
- libcurand-dev    # [linux]
{% endif %}
These packages are only for CUDA >= 12. Our package structure changed for CUDA 12. Before that, everything was shipped as one monolithic cudatoolkit package.
Also, these are libraries, so they belong in the HOST requirements section.
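Putting those two points together, the host section might be sketched as follows (the guard reuses the `cuda_major` variable from the suggestion above; the surrounding entries like `pip` are illustrative, not taken from this PR):

```yaml
requirements:
  host:
    - python
    - pip
    {% if cuda_major == 12 %}
    # Split CUDA 12 packages; under CUDA 11 these ship inside the
    # monolithic cudatoolkit package instead.
    - cuda-cudart-dev  # [linux]
    - cuda-nvrtc-dev   # [linux]
    - libcurand-dev    # [linux]
    {% endif %}
```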
- wheel
- python-build
run:
  - cuda-version >=12.0
Suggested change: remove the `- cuda-version >=12.0` run requirement.
The cuda version is run_exported by our cuda compiler package.
license_family: MIT
license_file: LICENSE.md
doc_url: https://docs.flamegpu.com/
dev_url: https://github.com/FLAMEGPU/FLAMEGPU2
Looks like there are separate CXX and Python APIs? Can you build the CXX libraries first, then link the Python modules to them dynamically?
Checklist
- A url (e.g. `url`) rather than a repo (e.g. `git_url`) is used in your recipe (see here for more details).