I am working on a project where I sample a set of n-dimensional points from a Gaussian distribution with learnt parameters, then evaluate those points with a loss function and update the model parameters via gradient descent.
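For concreteness, the setup described above can be sketched with the reparameterization trick, which keeps sampling differentiable with respect to the Gaussian's learnt parameters. This is a minimal numpy sketch with hypothetical parameter names (`mu`, `log_sigma`) and toy sizes standing in for the real 100K/5K:

```python
import numpy as np

rng = np.random.default_rng(0)

dim = 8    # stand-in for the real 100K dimensions
batch = 4  # stand-in for the real 5K batch size

# Learnable parameters of a diagonal Gaussian (hypothetical names).
mu = np.zeros(dim)
log_sigma = np.zeros(dim)

# Reparameterization: z = mu + sigma * eps, so gradients can flow
# through mu and log_sigma when done in an autograd framework.
eps = rng.standard_normal((batch, dim))
z = mu + np.exp(log_sigma) * eps
print(z.shape)  # (4, 8)
```

In a real training loop `mu` and `log_sigma` would be framework tensors updated by the optimizer; the structure of the sampling step is the same.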
I would like to transform the Gaussian distribution so that the points are effectively sampled from a more complex learnt distribution. In other words, the model needs to learn how best to transform points drawn from the Gaussian.
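This is exactly the normalizing-flow idea: push base samples through an invertible map and account for the density change via the log-determinant of the Jacobian. A minimal sketch with a fixed elementwise affine transform (the learnable parameters `a`, `b` are hypothetical and frozen here for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
dim, batch = 8, 4

# Samples from the base standard Gaussian.
z = rng.standard_normal((batch, dim))

# Minimal invertible transform: x = z * exp(a) + b.
# a and b would normally be learnt; fixed here for illustration.
a = 0.5 * np.ones(dim)
b = np.ones(dim)
x = z * np.exp(a) + b

# Change of variables: log p_x(x) = log p_z(z) - log|det J|.
# For an elementwise affine map, log|det J| = sum(a).
log_pz = -0.5 * (z ** 2 + np.log(2 * np.pi)).sum(axis=1)
log_det = a.sum()
log_px = log_pz - log_det
print(x.shape, log_px.shape)
```

Any flow architecture (coupling, autoregressive, spline) follows this pattern; they differ only in how expressive the invertible map is and how cheaply its log-determinant can be computed.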
I would be glad if you could suggest the best normalizing-flows method (transform) to employ given the following scalability requirements, whether or not it is available in this repo. Thank you very much in advance for your suggestion.
I am sampling 100K-dimensional points with a batch size of 5K, so scalability is crucial.
The method should be memory-efficient and fast to train on an RTX-series desktop Nvidia GPU.
Ideally, it should not add a regularization term to my current loss function.
Expressiveness of the method is less important than scalability and robustness during training.
Hi @kayuksel. I would suggest the Neural Spline Flow line of work for a good trade-off between scalability and expressivity, though I'm not sure whether scaling to 100K dimensions is possible out of the box. This is not really related to torchdyn, but you might want to check out Pyro's implementation. Pyro is a robust and well-tested library for density estimation, generative modeling, and probabilistic programming in general.
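The scalability of spline flows comes from the coupling-layer structure they share with RealNVP-style flows: half the dimensions pass through unchanged and condition an elementwise transform of the other half, giving a triangular Jacobian whose log-determinant costs O(d). A minimal numpy sketch of an affine coupling layer (the spline variant replaces the affine map with a monotonic rational-quadratic spline; the linear "conditioner" `W` here is a hypothetical stand-in for a neural network):

```python
import numpy as np

rng = np.random.default_rng(2)
dim, batch = 8, 4
half = dim // 2

z = rng.standard_normal((batch, dim))

# Hypothetical conditioner; in practice this is a neural network.
W = 0.1 * rng.standard_normal((half, 2 * half))

def coupling_forward(z):
    z1, z2 = z[:, :half], z[:, half:]
    h = z1 @ W                         # conditioner acts on the pass-through half
    shift, log_scale = h[:, :half], h[:, half:]
    x2 = z2 * np.exp(log_scale) + shift
    x = np.concatenate([z1, x2], axis=1)
    log_det = log_scale.sum(axis=1)    # triangular Jacobian: O(d) log-det
    return x, log_det

def coupling_inverse(x):
    x1, x2 = x[:, :half], x[:, half:]
    h = x1 @ W                         # same conditioner output as forward
    shift, log_scale = h[:, :half], h[:, half:]
    z2 = (x2 - shift) * np.exp(-log_scale)
    return np.concatenate([x1, z2], axis=1)

x, log_det = coupling_forward(z)
z_rec = coupling_inverse(x)
print(np.allclose(z, z_rec))  # True
```

Note the inverse needs no iterative solve, which is part of why coupling-based flows train robustly; whether the per-layer conditioner network fits in memory at 100K dimensions is the real constraint to check.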