
Alternative (faster) approach for constructing SimplexTree in TOGL example code. #29

Closed
zexhuang opened this issue Apr 26, 2023 · 3 comments


@zexhuang

Dear Bastian,

Thank you so much for maintaining and actively improving this library; your previous works, TOGL and GFL, have given me so much inspiration!

I have been using the TOGL example code to build a topological model for 3D point cloud analysis. Although gudhi's SimplexTree (line 204) allows for constructing general simplices, this method is computationally prohibitive when handling large-scale point clouds (N > 100k).

I also checked the library from one of your colleagues, but there is no suitable method for constructing abstract simplicial complexes like the SimplexTree.

Could you recommend any other computationally feasible way to compute persistent homology based on the graph filtration?

Regards,
Zexian.

@Pseudomanifold
Contributor

Dear Zexian,

Thanks for your kind words! Unfortunately, there are no quick routines available for building such graphs/simplicial complexes at the moment. What you could try is using something like nglpy to obtain 1-skeletons faster, then expand them yourself.
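As a rough sketch of that route (a minimal example assuming a k-nearest-neighbour 1-skeleton built with SciPy rather than nglpy; `knn_one_skeleton` is a hypothetical helper, not part of any library):

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_one_skeleton(points, k=8):
    """Build a k-NN 1-skeleton as a deduplicated edge list.

    Returns edges as (i, j) pairs with i < j, plus their lengths.
    These could then be inserted into e.g. a gudhi.SimplexTree
    instead of constructing the full complex from scratch.
    """
    tree = cKDTree(points)
    # query k + 1 neighbours because the nearest neighbour of a
    # point is the point itself
    dists, idx = tree.query(points, k=k + 1)
    edges = {}
    for i in range(len(points)):
        for d, j in zip(dists[i, 1:], idx[i, 1:]):
            e = (min(i, j), max(i, j))
            edges.setdefault(e, d)   # keep one copy per undirected edge
    pairs = np.array(sorted(edges))
    lengths = np.array([edges[tuple(e)] for e in pairs])
    return pairs, lengths

rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))
pairs, lengths = knn_one_skeleton(pts, k=5)
```

This avoids inserting all pairwise edges, which is what makes the full construction infeasible for N > 100k.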

What you are mentioning is indeed a relevant use case, and I have thought about additional measures. One thing I would like to eventually integrate into pytorch-topological is this approach on distributed topology. Maybe you can find some inspiration in there. I am very happy to assist you in integrating this into the repository here!

Hope that helps!

@zexhuang
Author

zexhuang commented May 3, 2023

Hi Bastian,

Thank you so much for your response and suggestions; I will definitely look into those resources.

Since I am working with n-dimensional point clouds, projecting the edge features from persistence diagrams (PDs) back onto the edges themselves (unlike the ring-graph examples in your TOGL and GFL papers) is not necessary.

If the above statement is true, then I only care about the 0-dimensional features (connected components) of point clouds in PDs. If I were to apply a Vietoris–Rips complex (which is much faster than the SimplexTree construction, computationally speaking) to the learned filtrations of point clouds (via any set- or graph-based learning layer), would the gradients still exist?

I believe that the purpose of the SimplexTree construction in the TOGL sample code is to have an injective mapping from PD features back to the corresponding nodes, so that gradients exist for the learnable filtration functions. But I doubt that the above statement holds true for Vietoris–Rips complexes, as they mainly consider some distance metric (say, a simple Euclidean metric) on the input points.

Regards,
Zexian.

@Pseudomanifold
Contributor

Dear Zexian,

Yes, if you are only interested in 0D features anyway, I'd go for Vietoris--Rips complexes, with a maximum dimension of 1. You will still get gradients with that approach (in fact, this follows our Topological Autoencoders framework nicely; the framework is also implemented in pytorch-topological now).

This is still differentiable because the distances themselves are differentiable, in the sense that gradients can propagate back to the point positions.

Hope that helps!
