For score-based diffusion models in machine learning, we have a learned function that computes the gradient of the potential (the score). It would be really nice if HMC and NUTS could optionally accept a `score_fn` in addition to a `potential_fn`.

I imagine this could also be useful for very simple models, where the user may be able to write out the gradient by hand. I think this would be a fairly simple change (an extra option when instantiating the sampler objects). I might be able to do it myself, but I'm opening an issue first to see if someone has a smarter idea. Also, if there is opposition to such an option, I won't put in the work to make a PR.
Hi @ConnorStoneAstro, assuming your `nn.Module` parameters are frozen, you should be able to wrap them in a custom `torch.autograd.Function` that you can pass as the `potential_fn` to Pyro's HMC:
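A minimal sketch of that wrapper (not from the original thread). It assumes a toy standard-normal target so the score is `-x` and the potential is `0.5 * x @ x`; in a real setup `score_fn` would be your trained network and you would still need the potential *value* itself for HMC's accept/reject step, not just its gradient:

```python
import torch

# Hypothetical stand-in for a learned score network. For a standard
# normal, score(x) = grad log p(x) = -x.
def score_fn(x):
    return -x

# HMC also needs the potential value U(x) = -log p(x) (up to a constant)
# for the Metropolis correction; here U(x) = 0.5 * x @ x.
def potential_value(x):
    return 0.5 * (x * x).sum()

class PotentialWithCustomGrad(torch.autograd.Function):
    """Potential whose backward pass uses the provided score function
    instead of differentiating through the forward computation."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return potential_value(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # dU/dx = -score(x); scale by the incoming gradient.
        return grad_output * (-score_fn(x))

def potential_fn(params):
    # Pyro's HMC/NUTS call potential_fn with a dict {site_name: tensor}.
    return PotentialWithCustomGrad.apply(params["x"])

# Passing it to Pyro would look roughly like this (untested sketch):
# from pyro.infer.mcmc import HMC, MCMC
# kernel = HMC(potential_fn=potential_fn)
# mcmc = MCMC(kernel, num_samples=500, initial_params={"x": torch.zeros(2)})
# mcmc.run()
```

Note that when `potential_fn` is given directly, Pyro cannot infer the sample sites, so `initial_params` must be supplied to `MCMC` explicitly.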