
Inference #30

Closed
bensch98 opened this issue Sep 9, 2022 · 1 comment
bensch98 commented Sep 9, 2022

I'm currently looking at where I can speed up my whole inference pipeline.
Is the computation of the operators at the end of dataset.py necessary for pure inference, or is it only useful for caching during training?

@nmwsharp (Owner)

Hi!

Pre-computing the spectral basis is also necessary for inference. It does not have to be cached and fetched, though: it can be computed on the fly right before evaluating the network, and caching can be disabled by passing op_cache_dir=None to the get_operators()/get_all_operators() functions. Either way, the basis still needs to be computed.
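For intuition, the precomputation boils down to solving a small generalized eigenproblem for the lowest-frequency eigenpairs. A minimal SciPy sketch (not the library's code: it uses a path-graph Laplacian and an identity mass matrix as stand-ins for the cotangent Laplacian and lumped mass matrix the real pipeline builds):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def spectral_basis(L, M, k_eig):
    """Smallest k_eig generalized eigenpairs of L phi = lambda M phi."""
    # Shift-invert around a tiny sigma keeps eigsh stable near the zero eigenvalue
    evals, evecs = spla.eigsh(L, k=k_eig, M=M, sigma=1e-8)
    order = np.argsort(evals)
    return evals[order], evecs[:, order]

# Tiny stand-in: path-graph Laplacian on 10 vertices, identity mass matrix
n = 10
main = np.full(n, 2.0)
main[0] = main[-1] = 1.0
L = sp.diags([np.full(n - 1, -1.0), main, np.full(n - 1, -1.0)], [-1, 0, 1]).tocsc()
M = sp.identity(n, format="csc")

evals, evecs = spectral_basis(L, M, k_eig=4)
# evals[0] is ~0 (the constant mode); evecs columns are the basis functions
```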

If the precomputation cost is a problem for you, there are a few possible workarounds:

  • If your inputs are deformations of some template mesh, just do the pre-computation once on the template mesh
  • If your data are small (<2k vertices), consider setting method='implicit_dense' in the diffusion layer. This switches to an O(N^3) dense solver, but it avoids the need to precompute a spectral basis.
  • Consider decreasing the number of eigenvectors used (k_eig). The default is 128, but for many applications you can go down to ~32 without much loss of performance.
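The dense-vs-spectral tradeoff can be illustrated with a toy single-channel diffusion step (a sketch only, not the library's implementation: the real layer learns per-channel diffusion times, while this uses one fixed t and the implicit-Euler transfer function 1/(1 + t*lambda) for both paths):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, t = 30, 10, 0.1  # vertices, truncated basis size (k_eig analogue), diffusion time

# Path-graph Laplacian as a stand-in for the mesh's cotangent Laplacian
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0

x0 = rng.standard_normal(n)

# (a) "implicit_dense" style: one O(N^3) dense solve, no eigenbasis needed
x_dense = np.linalg.solve(np.eye(n) + t * L, x0)

# (b) spectral style: project onto k precomputed eigenvectors, scale, project back
evals, evecs = np.linalg.eigh(L)

def spectral_diffuse(k):
    phi = evecs[:, :k]
    return phi @ ((phi.T @ x0) / (1.0 + t * evals[:k]))

x_spec = spectral_diffuse(k)   # truncated basis: cheap per step, approximate
x_exact = spectral_diffuse(n)  # full basis: recovers the dense solve exactly
```

Shrinking k only discards the highest-frequency modes, which diffusion damps most anyway; that is why dropping k_eig from 128 to ~32 usually costs little accuracy.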
