
ModuleNotFoundError: No module named 'torch_persistent_homology.persistent_homology_cpu' #13

Closed
BeyondStars opened this issue Nov 22, 2022 · 29 comments


@BeyondStars

Hi, thank you for your codebase. While setting up the environment for torch-persistent-homology on a Win10 system, I ran into many problems, so I switched to an Ubuntu server. I read the issues in https://github.com/ExpectationMax/torch_persistent_homology/issues/1, so I used Anaconda3 to build a Python 3.8 virtual environment and installed torch==1.12.1, torch-geometric==2.1.0, and scipy==1.7.3. Then I installed torch-persistent-homology with the command pip install ., which went well, and I checked my environment with conda list: the package is listed there. But when I tried to test it with
>>> from torch_persistent_homology.persistent_homology_cpu import compute_persistence_homology_batched_mt
the result is an error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'torch_persistent_homology.persistent_homology_cpu'
I don't know why I can't use it, so I would like to ask for your help. Thank you.

@Pseudomanifold
Collaborator

Thanks for reaching out! This looks like a problem with the current environment. Can you check whether the module shows up with pip install?

@BeyondStars
Author

You mean pip install ., with a . at the end?

@Pseudomanifold
Collaborator

I actually meant pip list, sorry. When you say that it shows up with conda list, do you mean that the environment shows up? Does pip install raise an error?

@BeyondStars
Author

BeyondStars commented Nov 22, 2022

When I use pip install . to install the module, it goes very well. It tells me

Successfully built torch-persistent-homology
Installing collected packages: torch-persistent-homology
Successfully installed torch-persistent-homology-0.1.0

and I checked with both conda list and pip list; it shows up in both. In conda list:

torch-persistent-homology 0.1.0                    pypi_0    pypi

In pip list:

torch-persistent-homology 0.1.0

@Pseudomanifold
Collaborator

Can you import the parent module, i.e. does import torch_persistent_homology work?

@BeyondStars
Author

It works.

(TOGLpy38) fjj@ubuntu:~/TOGL/repos/torch_persistent_homology$ python
Python 3.8.0 (default, Nov  6 2019, 21:49:08) 
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch_persistent_homology
>>> from torch_persistent_homology.persistent_homology_cpu import compute_persistence_homology_batched_mt
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'torch_persistent_homology.persistent_homology_cpu'
>>> import torch_persistent_homology.persistent_homology_cpu
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'torch_persistent_homology.persistent_homology_cpu'

@Pseudomanifold
Collaborator

What does dir(torch_persistent_homology) do? I have a hunch that this was not installed correctly after all.

@BeyondStars
Author

I'm not sure what you mean. Do you mean using it in the Python console?

>>> import torch_persistent_homology
>>> dir(torch_persistent_homology)
['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__']

@Pseudomanifold
Collaborator

Yes, of course. You can see that the module was not installed correctly: additional submodules are missing. Please repeat the installation of the module, make sure that no cache is being used, and/or try to build the module from source. Sorry for the hassle.

@BeyondStars
Author

OK. When I installed the module on Win10 I ran into many problems with poetry, so on the Ubuntu server I didn't use poetry at first and just installed the necessary modules separately before installing torch-persistent-homology. I'll try it this way, thank you for your help.

@Pseudomanifold
Collaborator

You can use the module without poetry. poetry is just a way to ensure that everyone gets the same dependencies. In fact, an ordinary pip install . in the torch-persistent-homology repository should work (potentially without cache dir etc.).

(I would not try to install anything under Windows, though—I have no clue how torch-geometric and other dependencies work there)

I'll close this issue for now, please reopen one in the other repo.

@uchukwu

uchukwu commented Feb 26, 2024

Hi, I was introduced to this repo through a collaborator. I've cloned TOGL and the submodule (torch-persistent-homology) with no problem. I'm working within a virtual environment on macOS, and executed poetry install from the top-level directory of TOGL, followed by a pip install of torch-sparse==0.6.12, torch-cluster==1.5.9, torch-spline-conv==1.2.1, and torch-scatter==2.0.8, which are all compatible with torch==1.8.1. Following that, I attempted to install torch-persistent-homology via pip install -v .. It shows up in pip list. However, when entering a Python shell and executing from torch_persistent_homology.persistent_homology_cpu import compute_persistence_homology_batched_mt, I get the same error as shown in this issue: ModuleNotFoundError: No module named 'torch_persistent_homology.persistent_homology_cpu'.

Additional note: dir(torch_persistent_homology) shows ['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__']. I tried reinstalling with no cache, to no avail. I would very much appreciate it if you could provide guidance on this installation; I've been working on it for about three days now.

@Pseudomanifold
Collaborator

Has the submodule been cloned correctly? This looks like nothing was installed. I have started a new implementation in pytorch-topological (https://github.com/aidos-lab/pytorch-topological/blob/main/torch_topological/nn/graphs.py) that might also be of interest.

@uchukwu

uchukwu commented Feb 26, 2024

Thanks for the quick response. Here are the commands I executed after changing directory into the empty torch_persistent_homology directory:

git submodule init, followed by git submodule update

The last command outputs:

Cloning into '~/Documents/Quantopo/repos/TOGL/repos/torch_persistent_homology'... Submodule path './': checked out 'eea88014ba6acd72f5003680afd7a26b4057ce23'

Following that, I executed pip install -v . from the cloned directory and it ended with Successfully installed torch-persistent-homology-0.1.0. There was a lot of verbose output during that install, but it ended with that successful status. However, I don't see any shared object (.so) files for the compiled C++ file after the install. Is there supposed to be a generated shared object file following the install, or is something else missing?

Also, (1) is pytorch-topological an alternative to TOGL and (2) does it work with GNNs as done in TOGL? Much appreciated.

@uchukwu

uchukwu commented Feb 26, 2024

Just saw the code you linked. So the TOGL class in graphs.py can be an alternative to the TOGL GitHub repo, right?

@Pseudomanifold
Collaborator

So, .so files are placed in your local Python library, i.e. wherever pip installs things for you. Can you verify that the module still does not exist?
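
For example, a quick check along these lines (assuming the parent package imports at all) will show whether a compiled extension ended up next to the package's __init__.py; the exact file name is a guess, since it varies by platform:

# List the files installed next to the package's __init__.py. A working build
# should contain a compiled extension such as
# persistent_homology_cpu.cpython-38-<platform>.so.
import pathlib
import torch_persistent_homology

pkg_dir = pathlib.Path(torch_persistent_homology.__file__).parent
for entry in sorted(pkg_dir.iterdir()):
    print(entry.name)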

pytorch-topological is more than just TOGL, but the implementation in graphs.py is not a complete replacement of TOGL yet—help is welcome!

@uchukwu

uchukwu commented Feb 26, 2024

I did a pip uninstall torch-persistent-homology and then pip install --no-cache-dir . under the torch_persistent_homology directory. I get the following output:

Processing ~/Documents/Quantopo/repos/TOGL/repos/torch_persistent_homology
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
    Preparing wheel metadata ... done
Requirement already satisfied: torch<2.0.0,>=1.7.1 in ~/.pyenv/versions/3.8.6/envs/TOGL_venv/lib/python3.8/site-packages (from torch-persistent-homology==0.1.0) (1.8.1)
Requirement already satisfied: numpy in ~/.pyenv/versions/3.8.6/envs/TOGL_venv/lib/python3.8/site-packages (from torch<2.0.0,>=1.7.1->torch-persistent-homology==0.1.0) (1.21.6)
Requirement already satisfied: typing-extensions in ~/.pyenv/versions/3.8.6/envs/TOGL_venv/lib/python3.8/site-packages (from torch<2.0.0,>=1.7.1->torch-persistent-homology==0.1.0) (4.2.0)
Building wheels for collected packages: torch-persistent-homology
  Building wheel for torch-persistent-homology (PEP 517) ... done
  Created wheel for torch-persistent-homology: filename=torch_persistent_homology-0.1.0-cp38-cp38-macosx_14_0_x86_64.whl size=5988 sha256=e5ef4122357c08b6ffc030fa2fb331cc6d9734c80dbfe61da99c9ea343ebf8fb
  Stored in directory: /private/var/folders/s5/nw5tmhys4dxbjvwyls5lg2hm0000gn/T/pip-ephem-wheel-cache-y8fvkif0/wheels/e6/97/7f/f0d25e6b959124443fd3d1a44966477a0eed6983ac5ae09a2d
Successfully built torch-persistent-homology
Installing collected packages: torch-persistent-homology
Successfully installed torch-persistent-homology-0.1.0

Following that, I enter a Python shell and attempt from torch_persistent_homology.persistent_homology_cpu import compute_persistence_homology_batched_mt, and get the same error: ModuleNotFoundError: No module named 'torch_persistent_homology.persistent_homology_cpu'.

I'm struggling to pinpoint the issue. I'm also not seeing the shared object file in my virtual environment's Python library under ~/.pyenv/versions/3.8.6/envs/TOGL_venv/lib/python3.8/site-packages, which is where I believe it should be.

@Pseudomanifold
Collaborator

And where's the file torch_persistent_homology-0.1.0-cp38-cp38-macosx_14_0_x86_64.whl? This is a super weird issue. Can you try cloning the torch_persistent_homology repository and installing it stand-alone?
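
As another sanity check, you can ask pip what it recorded for the installation; a small sketch using only the standard library (importlib.metadata ships with Python 3.8):

# List every file pip recorded for the installed distribution; a correct
# install should include a persistent_homology_cpu*.so entry.
from importlib.metadata import files

for path in files("torch-persistent-homology") or []:
    print(path)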

@uchukwu

uchukwu commented Feb 27, 2024

I don't see a wheel file produced anywhere. I've also tried cloning the torch_persistent_homology repo and installing with poetry install, to no avail. Is there supposed to be a setup.py file (in the torch_persistent_homology submodule) that assists with building the wheel file and the C++ extension/.so file (following compilation of the cpp file)? I see a build.py, shown below; it has a build function with setup logic in it, but I'm not sure whether that function or file is ever invoked during the install process:

from setuptools import setup, Extension
from torch.utils import cpp_extension

torch_library_paths = cpp_extension.library_paths(cuda=False)


def build(setup_kwargs):
    """
    This function is mandatory in order to build the extensions.
    """
    setup_kwargs.update({
        'ext_modules': [
            cpp_extension.CppExtension(
                'torch_persistent_homology.persistent_homology_cpu',
                ['torch_persistent_homology/perisistent_homology_cpu.cpp'],
                extra_link_args=[
                    '-Wl,-rpath,' + library_path
                    for library_path in torch_library_paths]
            )
        ],
        'cmdclass': {
            'build_ext': cpp_extension.BuildExtension
        }
    })

As you can see, setup is never used and build is never called within the file. I'm not sure whether this may be part of the issue.

@uchukwu

uchukwu commented Feb 27, 2024

To my understanding, pip install should invoke the setuptools library to process a setup.py file. But I don't see one in this repo, just the build.py file, and I don't see how it gets used, if it's supposed to be used during installation.

@Pseudomanifold
Collaborator

New versions of pip can make use of the pyproject.toml stuff, which references build.py, so that part is okay. Nevertheless, you should be able to run poetry build in the torch_persistent_homology repo. I have no Python 3.8 machine at the moment to test this—maybe @ExpectationMax can comment further?
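
Roughly speaking, poetry-core turns build.py into a generated setup.py that passes its keyword arguments through build() before calling setup(). A simplified sketch, illustrative only and not the literal generated file:

# Simplified sketch of the setup.py that poetry-core generates from build.py;
# the metadata values here are illustrative.
from setuptools import setup

from build import build  # the build.py quoted above

setup_kwargs = {
    "name": "torch-persistent-homology",
    "version": "0.1.0",
    "packages": ["torch_persistent_homology"],
}

build(setup_kwargs)    # injects ext_modules / cmdclass for the C++ extension
setup(**setup_kwargs)  # setuptools then compiles and bundles the .so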

@uchukwu

uchukwu commented Feb 27, 2024

This was the screen output when executing pip install, which shows a wheel file was created, but it doesn't exist when I try to go to that location:

Processing ~/Documents/Quantopo/repos/TOGL/repos/torch_persistent_homology
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: torch<2.0.0,>=1.7.1 in ~/.pyenv/versions/3.8.6/envs/TOGL_venv/lib/python3.8/site-packages (from torch-persistent-homology==0.1.0) (1.8.1)
Requirement already satisfied: typing-extensions in ~/.pyenv/versions/3.8.6/envs/TOGL_venv/lib/python3.8/site-packages (from torch<2.0.0,>=1.7.1->torch-persistent-homology==0.1.0) (4.2.0)
Requirement already satisfied: numpy in ~/.pyenv/versions/3.8.6/envs/TOGL_venv/lib/python3.8/site-packages (from torch<2.0.0,>=1.7.1->torch-persistent-homology==0.1.0) (1.21.6)
Building wheels for collected packages: torch-persistent-homology
  Building wheel for torch-persistent-homology (pyproject.toml) ... done
  Created wheel for torch-persistent-homology: filename=torch_persistent_homology-0.1.0-cp38-cp38-macosx_14_0_x86_64.whl size=5988 sha256=e5ef4122357c08b6ffc030fa2fb331cc6d9734c80dbfe61da99c9ea343ebf8fb
  Stored in directory: /private/var/folders/s5/nw5tmhys4dxbjvwyls5lg2hm0000gn/T/pip-ephem-wheel-cache-5sbvjo9d/wheels/e6/97/7f/f0d25e6b959124443fd3d1a44966477a0eed6983ac5ae09a2d
Successfully built torch-persistent-homology
Installing collected packages: torch-persistent-homology
Successfully installed torch-persistent-homology-0.1.0

However, I followed up by executing poetry build. It creates a dist directory under the torch_persistent_homology top-level directory, and within it are the wheel file and a tar.gz file. Am I supposed to do something with those? FYI, poetry build printed the following to the screen, but it looks like it completed successfully:

Preparing build environment with build-system requirements poetry-core>=1.0.0, setuptools, torch
Building torch-persistent-homology (0.1.0)
/var/folders/s5/nw5tmhys4dxbjvwyls5lg2hm0000gn/T/tmpf8gr4tkl/.venv/lib/python3.8/site-packages/torch/nn/modules/transformer.py:20: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/utils/tensor_numpy.cpp:84.)
  device: torch.device = torch.device(torch._C._get_default_device()),  # torch.device('cpu'),

I've tried the import again and still get the same error:

>>> from torch_persistent_homology.persistent_homology_cpu import compute_persistence_homology_batched_mt
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'torch_persistent_homology.persistent_homology_cpu'
>>> 

@Pseudomanifold
Collaborator

OK! I think the build is not running in this case... let's see what @ExpectationMax says, or move to pytorch-topological.
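
In the meantime, one way to check whether the compiled extension ever makes it into the wheel that poetry build produces; a sketch, with the wheel filename assumed from your dist/ output:

# A wheel is just a zip archive, so listing its members shows whether a
# persistent_homology_cpu*.so file was bundled (path assumed from dist/).
import glob
import zipfile

for wheel in glob.glob("dist/torch_persistent_homology-*.whl"):
    print(wheel)
    with zipfile.ZipFile(wheel) as zf:
        for name in zf.namelist():
            print("  ", name)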

@uchukwu

uchukwu commented Feb 28, 2024

OK, thanks. I'll wait for his response. My collaborators and I are interested in pytorch-topological as well; we'll start looking into that project soon.

@uchukwu

uchukwu commented Mar 4, 2024

Hi @Pseudomanifold, a few questions about pytorch-topological:

  1. Does the code employ a family of k vertex filtrations as shown in Figure 2 of the paper Topological Graph Neural Networks? If so, does this implementation include the node map to k views, as shown in Figure 2b-c?
  2. Is the node mapping and filtration in Figure 2b-e where the C++ code (persistent_homology_cpu) in the TOGL GitHub project is used?
  3. What is needed in pytorch-topological to make it work fully as in the paper? Is it listed here?

@Pseudomanifold
Collaborator

We should move this conversation to pytorch-topological for simplicity.

ad 1: Yes.
ad 2: No, I am not using the persistent_homology_cpu package for this.
ad 3: The features that are described in the issue are not needed to run TOGL; in the paper, we were also restricted to the same setting. The issue refers to the fact that an improved representation should ideally handle all these aspects.

@BeyondStars
Author

Hi, since my research plan has changed, I am reactivating this project now. This time I followed the README strictly to install the environment on a Linux server, but I still hit the same problem.

ModuleNotFoundError: No module named 'torch_persistent_homology.persistent_homology_cpu'

First I create a conda virtual environment named PoetryEnv with python=3.8, then I install pipx and poetry using the commands

pip install pipx
pipx install poetry

then I activate this environment, enter the TOGL folder, and run

poetry install
poetry run install_deps_cu11

That goes well, but when I run train_model.py, it reports errors.

Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 843, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/lh/TOGL/topognn/data_utils.py", line 17, in <module>
    from torch_geometric.data import Data
  File "/home/lh/miniconda3/envs/PoetryEnv/lib/python3.8/site-packages/torch_geometric/__init__.py", line 2, in <module>
    import torch_geometric.nn
  File "/home/lh/miniconda3/envs/PoetryEnv/lib/python3.8/site-packages/torch_geometric/nn/__init__.py", line 2, in <module>
    from .data_parallel import DataParallel
  File "/home/lh/miniconda3/envs/PoetryEnv/lib/python3.8/site-packages/torch_geometric/nn/data_parallel.py", line 5, in <module>
    from torch_geometric.data import Batch
  File "/home/lh/miniconda3/envs/PoetryEnv/lib/python3.8/site-packages/torch_geometric/data/__init__.py", line 1, in <module>
    from .data import Data
  File "/home/lh/miniconda3/envs/PoetryEnv/lib/python3.8/site-packages/torch_geometric/data/data.py", line 8, in <module>
    from torch_sparse import coalesce, SparseTensor
  File "/home/lh/miniconda3/envs/PoetryEnv/lib/python3.8/site-packages/torch_sparse/__init__.py", line 12, in <module>
    torch.ops.load_library(importlib.machinery.PathFinder().find_spec(
  File "/home/lh/miniconda3/envs/PoetryEnv/lib/python3.8/site-packages/torch/_ops.py", line 104, in load_library
    ctypes.CDLL(path)
  File "/home/lh/miniconda3/envs/PoetryEnv/lib/python3.8/ctypes/__init__.py", line 373, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libcudart.so.11.0: cannot open shared object file: No such file or directory

It seems that the poetry environment doesn't include CUDA. At the same time I found that the poetry dependencies require torch==1.8.1 and install it automatically:

torch = [
    {file = "torch-1.8.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:f23eeb1a48cc39209d986c418ad7e02227eee973da45c0c42d36b1aec72f4940"},
    {file = "torch-1.8.1-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:4ace9c5bb94d5a7b9582cd089993201658466e9c59ff88bd4e9e08f6f072d1cf"},
    {file = "torch-1.8.1-cp36-cp36m-win_amd64.whl", hash = "sha256:6ffa1e7ae079c7cb828712cb0cdaae5cc4fb87c16a607e6d14526b62c20bcc17"},
    {file = "torch-1.8.1-cp36-none-macosx_10_9_x86_64.whl", hash = "sha256:16f2630d9604c4ee28ea7d6e388e2264cd7bc6031c6ecd796bae3f56b5efa9a3"},
    {file = "torch-1.8.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:95b7bbbacc3f28fe438f418392ceeae146a01adc03b29d44917d55214ac234c9"},
    {file = "torch-1.8.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:55137feb2f5a0dc7aced5bba690dcdb7652054ad3452b09a2bbb59f02a11e9ff"},
    {file = "torch-1.8.1-cp37-cp37m-win_amd64.whl", hash = "sha256:8ad2252bf09833dcf46a536a78544e349b8256a370e03a98627ebfb118d9555b"},
    {file = "torch-1.8.1-cp37-none-macosx_10_9_x86_64.whl", hash = "sha256:1388b30fbd262c1a053d6c9ace73bb0bd8f5871b4892b6f3e02d1d7bc9768563"},
    {file = "torch-1.8.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:e7ad1649adb7dc2a450e70a3e51240b84fa4746c69c8f98989ce0c254f9fba3a"},
    {file = "torch-1.8.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:3e4190c04dfd89c59bad06d5fe451446643a65e6d2607cc989eb1001ee76e12f"},
    {file = "torch-1.8.1-cp38-cp38-win_amd64.whl", hash = "sha256:5c2e9a33d44cdb93ebd739b127ffd7da786bf5f740539539195195b186a05f6c"},
    {file = "torch-1.8.1-cp38-none-macosx_10_9_x86_64.whl", hash = "sha256:c6ede2ae4dcd8214b63e047efabafa92493605205a947574cf358216ca4e440a"},
    {file = "torch-1.8.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:ce7d435426f3dd14f95710d779aa46e9cd5e077d512488e813f7589fdc024f78"},
    {file = "torch-1.8.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:a50ea8ed900927fb30cadb63aa7a32fdd59c7d7abe5012348dfbe35a8355c083"},
    {file = "torch-1.8.1-cp39-cp39-win_amd64.whl", hash = "sha256:dac4d10494e74f7e553c92d7263e19ea501742c4825ddd26c4decfa27be95981"},
    {file = "torch-1.8.1-cp39-none-macosx_10_9_x86_64.whl", hash = "sha256:225ee4238c019b28369c71977327deeeb2bd1c6b8557e6fcf631b8866bdc5447"},
]

but deps.py requires wheels built for torch==1.7.0:

WHEELS = [
    "https://pytorch-geometric.com/whl/torch-1.7.0/torch_cluster-latest+{cuda}-{python}-{platform}.whl",
    "https://pytorch-geometric.com/whl/torch-1.7.0/torch_scatter-latest+{cuda}-{python}-{platform}.whl",
    "https://pytorch-geometric.com/whl/torch-1.7.0/torch_sparse-latest+{cuda}-{python}-{platform}.whl",
    "https://pytorch-geometric.com/whl/torch-1.7.0/torch_spline_conv-latest+{cuda}-{python}-{platform}.whl",
]

so I downgraded torch and installed CUDA with

conda install pytorch==1.7.0 -c pytorch
conda install cudatoolkit==11.0.*
conda install cudnn

Then there was a version conflict between numpy and scipy, so I downgraded numpy to 1.21.6, and then I got the same problem.

Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 843, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/lh/TOGL/topognn/models.py", line 15, in <module>
    from topognn.layers import GCNLayer, GINLayer, GATLayer, SimpleSetTopoLayer, fake_persistence_computation#, EdgeDropout
  File "/home/lh/TOGL/topognn/layers.py", line 8, in <module>
    from torch_persistent_homology.persistent_homology_cpu import compute_persistence_homology_batched_mt
ModuleNotFoundError: No module named 'torch_persistent_homology.persistent_homology_cpu'

I tried two ways to reinstall this package, but neither works:

poetry add /home/lh/TOGL/repos/torch_persistent_homology
 lh@server:~/TOGL/repos/torch_persistent_homology$ pip install .

So I can only ask for your help, thank you.

@Pseudomanifold
Collaborator

Hi,

I think it's easiest to use the new implementation that I described in this comment: #13 (comment)

@BeyondStars
Author

OK, that was also my alternative option. Thank you again.
