
CUDA-compatible installation without explicit use of --global-option #1

Closed
fuhuifang opened this issue Sep 29, 2021 · 3 comments

@fuhuifang

We are trying to install a CUDA-compatible build of pytorch-extension in a conda/docker environment, using a conda environment file like this:

dependencies:
  - python=3.7
  - pip=20.2.4
  - pip:
    - numpy==1.21.2 
    - torch==1.7.1+cu110 
    - typing-extensions==3.10.0.2 
    - pytorch-extension==0.2 --global-option "cpp_ext" --global-option "cuda_ext"

However, pip emitted a UserWarning saying it was disabling all use of wheels due to the use of --global-option, and then fell back to .tar.gz sdists for every dependency. Many packages (including numpy and the CUDA-compatible pytorch builds) do not publish sdists, so we got "no matching distribution found" errors for those packages.

My question: is there a way to build CUDA-compatible pytorch-extension without explicitly passing --global-option "cpp_ext" --global-option "cuda_ext"?
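For reference, the pytorch-extension line in the environment file above is equivalent to running this install command by hand (version pin copied from the file; the warning behavior is what we observed via the env file, not a fresh transcript):

```shell
pip install pytorch-extension==0.2 \
    --global-option="cpp_ext" --global-option="cuda_ext"
# pip then warns that it is disabling all use of wheels
# because --global-option is present, and that disabling
# applies to every package in the same resolution, not just this one.
```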

@artitw
Owner

artitw commented Oct 2, 2021

Thanks for posting this issue. Have you gotten the setup working by installing the dependencies manually in a local environment? If so, the problem is likely a limitation of conda, and I would recommend using docker instead.

@fuhuifang
Author

Update: we’re unblocked. We build pytorch-extension in the docker image first, then install the remaining packages in a conda environment on top of it.
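A minimal Dockerfile sketch of that layering. The base image tag and the environment file name are assumptions for illustration, not details from this thread; the point is that --global-option is confined to the one RUN step that builds pytorch-extension, so wheels stay enabled for everything installed afterwards:

```dockerfile
# Sketch only: base image tag and env file name are assumed.
FROM pytorch/pytorch:1.7.1-cuda11.0-cudnn8-devel

# Build pytorch-extension here, where --global-option affects
# only this single pip invocation.
RUN pip install pytorch-extension==0.2 \
        --global-option="cpp_ext" --global-option="cuda_ext"

# Install the remaining dependencies from the conda environment
# file, with wheel support intact.
COPY environment.yml /opt/environment.yml
RUN conda env update -n base -f /opt/environment.yml
```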

@artitw
Owner

artitw commented Nov 10, 2021

What a splendid solution for isolating the conda issues from pytorch-extension. Thanks for sharing!
