I'm currently looking for ways to speed up my whole inference pipeline.
Is the computation of the operators at the end of dataset.py necessary for pure inference, or is it only useful for caching during training?
Pre-computing the spectral basis is also necessary for inference. It doesn't necessarily need to be cached and fetched (it could be computed on the fly right before evaluating the network); caching can be disabled by passing op_cache_dir=None to the get_operator()/get_all_operators() functions. But either way, the basis still needs to be computed.
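For a sense of what that precomputation involves: the dominant cost is a sparse eigensolve for the first k_eig eigenpairs of the mesh Laplacian. Here's a minimal, self-contained sketch with SciPy on a toy chain-graph Laplacian standing in for the cotangent Laplacian (the function name and the toy operator are illustrative, not the library's own get_operators call, which additionally builds mass and gradient matrices):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as sla

def toy_spectral_basis(n=100, k_eig=32):
    # Build a chain-graph Laplacian as a stand-in for the cotangent
    # Laplacian that DiffusionNet precomputes per mesh.
    main = 2.0 * np.ones(n)
    main[0] = main[-1] = 1.0
    off = -np.ones(n - 1)
    L = sp.diags([off, main, off], [-1, 0, 1], format="csc")
    # Shift-invert around a small negative sigma robustly recovers the
    # k_eig smallest eigenvalues, i.e. the low-frequency spectral basis.
    evals, evecs = sla.eigsh(L, k=k_eig, sigma=-0.01, which="LM")
    return evals, evecs

evals, evecs = toy_spectral_basis()
print(evals.shape, evecs.shape)  # (32,) (100, 32)
```

The cost scales with both mesh size and k_eig, which is why the workarounds below (smaller k_eig, or skipping the basis entirely) help.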
If the precomputation cost is a problem for you, there are a few possible workarounds:
If your inputs are deformations of a single template mesh, do the pre-computation once on that template.
If your data are small (<2k vertices), consider setting method='implicit_dense' in the diffusion layer. This switches to an $O(N^3)$ dense solver, but it avoids the need to precompute a spectral basis.
Consider decreasing the number of eigenvectors used (k_eig). The default is 128, but for many applications you can go down to ~32 without much loss of performance.
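To make the trade-off behind the last two options concrete, here is a hedged sketch (my own illustration, not the library's code) of the two routes for a single heat-diffusion step: a dense implicit solve, which needs no eigendecomposition, versus a spectral expansion, whose cost and accuracy scale with the number of eigenvectors kept:

```python
import numpy as np

def implicit_diffusion_dense(L, M, u0, t):
    # One backward-Euler diffusion step: solve (M + t L) u = M u0.
    # A dense O(N^3) solve -- the idea behind method='implicit_dense':
    # no spectral basis required, but only practical for small meshes.
    return np.linalg.solve(M + t * L, M @ u0)

def spectral_diffusion(evals, evecs, M, u0, t):
    # Spectral route: project u0 onto the (M-orthonormal) eigenvectors,
    # scale each coefficient by exp(-t * lambda), and reconstruct.
    # Truncating to fewer eigenvectors (smaller k_eig) drops only the
    # high-frequency terms, which is why k_eig ~ 32 often suffices.
    coefs = evecs.T @ (M @ u0)
    return evecs @ (np.exp(-t * evals) * coefs)
```

With a full basis and a small timestep the two routes agree closely; the spectral one amortizes the eigensolve across many diffusion evaluations, while the dense one pays per call but skips precomputation entirely.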