
Package installation throws errors #9

Closed

n-gao opened this issue May 11, 2022 · 3 comments

Comments

@n-gao
Contributor

n-gao commented May 11, 2022

There were several issues preventing the installation of gemnet as a package:

  • The setup.py file referred to a gemnet_pytorch package even though the folder name is gemnet
  • python, cudatoolkit, pytorch, and torch_geometric are not valid pip package names.

Most of these should be fixed by 1922774

@gasteigerjo
Contributor

Thank you!

Rather than deleting the pytorch and torch_geometric dependencies, we should specify the correct pip names. The version numbers in particular can be important.

Similarly for the Python version, which should say something like python_version>=3.8.
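
A sketch of how that could look in requirements.txt with PEP 508 environment markers (the pip names are torch and torch-geometric; the version pins here are only illustrative):

torch>=1.10; python_version >= "3.8"
torch-geometric>=2.0; python_version >= "3.8"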

@n-gao
Contributor Author

n-gao commented May 30, 2022

Good point! If one wants to install torch and torch_geometric via pip install -r requirements.txt, the CUDA version must be fixed, which may not be desired.
For instance, for CUDA 10.2 one could add

torch==1.10

However, for CUDA 11.3 it must be

--extra-index-url https://download.pytorch.org/whl/cu113
torch==1.10

For torch_geometric it gets even more complicated, since the correct wheels depend on both the CUDA version and the torch version.
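
For example, following the PyG installation instructions, the requirements for torch 1.10 with CUDA 11.3 could look roughly like this (the find-links URL and the choice of companion packages are illustrative):

--find-links https://data.pyg.org/whl/torch-1.10.0+cu113.html
torch-scatter
torch-sparse
torch-geometric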

I am not sure whether PEP 508 is the right solution for the Python version. An environment marker only makes a package's dependencies conditional on the Python version; if you want to constrain the Python version itself, you should require a minimum (and possibly a maximum) in your setup.py via python_requires.
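
A minimal setup.py sketch of that approach (the package name and version bound are only an example):

from setuptools import setup, find_packages

setup(
    name="gemnet",
    packages=find_packages(),
    python_requires=">=3.8",
)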

@gasteigerjo
Contributor

gasteigerjo commented May 30, 2022

Fixed in 0110dce.

I don't think we need to specify CUDA at all. Imho, the user is responsible for this if they want to use CUDA.
