Implement StaticSVI #1562
Conversation
```python
alpha_q, beta_q = torch.exp(alpha_q_log), torch.exp(beta_q_log)
pyro.sample("p_latent", dist.Beta(alpha_q, beta_q))
```

```python
adam = optim.Adam({"lr": .001})
```
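(For context outside the diff hunks: a minimal, hypothetical reconstruction of how the guide and optimizer above fit into a standard `SVI` loop. The Beta-Bernoulli model and the toy data are assumptions, since the diff only shows the guide and the optimizer.)

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam

data = torch.tensor([1., 1., 0., 1.])  # hypothetical observations

def model():
    # hypothetical prior; the actual model under test is not shown in the diff
    p = pyro.sample("p_latent", dist.Beta(10., 10.))
    with pyro.plate("data", len(data)):
        pyro.sample("obs", dist.Bernoulli(p), obs=data)

def guide():
    # the guide from the diff above
    alpha_q_log = pyro.param("alpha_q_log", torch.tensor(0.))
    beta_q_log = pyro.param("beta_q_log", torch.tensor(0.))
    alpha_q, beta_q = torch.exp(alpha_q_log), torch.exp(beta_q_log)
    pyro.sample("p_latent", dist.Beta(alpha_q, beta_q))

svi = SVI(model, guide, Adam({"lr": .001}), loss=Trace_ELBO())
for _ in range(1000):
    svi.step()
```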
Can you also add a test using LBFGS?
@fritzo I have added another test for it. Using LBFGS for this model is quite flaky.
I don't really think this is necessary (in fact, I think we should deprecate `SVI` and the optimizer wrappers entirely). Users could instead write the model/guide pair as an `nn.Module` and drive it with `torch.optim` directly:

```python
class MyModel(nn.Module):
    def __init__(self, ...):
        ...

    def model(self, batch):
        ...

    def guide(self, batch):
        ...

model = MyModel(...)
elbo = pyro.infer.Trace_ELBO()
optim = torch.optim.SGD(model.parameters(), lr=0.01)  # or LBFGS etc.
for batch in data:
    optim.zero_grad()
    elbo.loss_and_grads(model.model, model.guide, batch)
    optim.step()
```
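(For context, my reading of the sketch rather than anything stated explicitly in the thread: every parameter exists as soon as `MyModel` is constructed, so a closure-based optimizer like LBFGS drops in with no param-store bookkeeping. Continuing the sketch above for a single `batch`:)

```python
optim = torch.optim.LBFGS(model.parameters())

def closure():
    optim.zero_grad()
    return elbo.loss_and_grads(model.model, model.guide, batch)

optim.step(closure)
```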
@eb8680 Do you mean that we'll have a Pyro wrapper for this pattern?
@fehiepsi We shouldn't try to fix leaky abstractions by introducing more indirection; we should just deprecate/remove them and do our best to help users write code that's easier to understand.
@eb8680 It took me a while to get a feel for what you mean. :) Did you mean that we define all the parameters ahead of time, so we don't need `ParamStoreDict` or `PyroOptim`? If that is the case, then I am on board with the future deprecation of these wrappers (together with `SVI`, of course). I'll try to think about that idiom for the gp module, by the way. But please correct me if my understanding is incorrect. As for this PR, how about moving it to contrib for a while?
Yes, I mean that if all the parameters can be defined ahead of time, we should encourage users to write their models or model/guide pairs as `nn.Module`s. Here's a rewritten version of the example from your test in this PR to illustrate:

```python
# assumes: import torch; import torch.nn as nn; import pyro;
#          import pyro.distributions as dist; from pyro.infer import Trace_ELBO
x = 1 + torch.randn(10)

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.mu = nn.Parameter(torch.tensor(0.))
        self.sigma = nn.Parameter(torch.tensor(1.))

    def forward(self):
        with pyro.plate("plate", len(x)):
            return pyro.sample("x", dist.Normal(self.mu, torch.exp(self.sigma)), obs=x)

model = MyModel()
optim = torch.optim.LBFGS(model.parameters())

def closure():
    optim.zero_grad()
    return Trace_ELBO().loss_and_grads(model, lambda: None)  # empty guide

for _ in range(100):
    optim.step(closure)
```

Or version 2:

```python
x = 1 + torch.randn(10)

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.mu = nn.Parameter(torch.tensor(0.))
        self.sigma = nn.Parameter(torch.tensor(1.))

    def model(self):
        with pyro.plate("plate", len(x)):
            return pyro.sample("x", dist.Normal(self.mu, torch.exp(self.sigma)), obs=x)

    def guide(self):
        pass

model = MyModel()
optim = torch.optim.LBFGS(model.parameters())

def closure():
    optim.zero_grad()
    return Trace_ELBO().loss_and_grads(model.model, model.guide)

for _ in range(100):
    optim.step(closure)
```
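(A hypothetical sanity check, not from the thread: with an empty guide and every sample site observed, the ELBO loss reduces to the negative log likelihood, so either version performs a maximum-likelihood fit.)

```python
with torch.no_grad():
    print(model.mu.item())                # should approach x.mean()
    print(torch.exp(model.sigma).item())  # should approach x.std(unbiased=False)
```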
I'm not sure that's the right type of thing to add to `contrib`.
In that case, I vote to close this PR. I'll use that pattern in a GP tutorial instead. :)
This pull request implements StaticSVI, an SVI interface for model/guide pairs that do not create new parameters dynamically. As a result, LBFGS works with this inference (addresses #1519).
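For reference, a rough sketch of the pattern this description implies, assuming all parameters are created on one initial trace and then handed to `torch.optim.LBFGS` directly (an illustration under those assumptions, not the PR's actual StaticSVI API):

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import Trace_ELBO

data = 1 + torch.randn(10)

def model():
    loc = pyro.param("loc", torch.tensor(0.))
    with pyro.plate("data", len(data)):
        pyro.sample("x", dist.Normal(loc, 1.), obs=data)

def guide():
    pass

elbo = Trace_ELBO()
elbo.loss(model, guide)  # trace once so that every pyro.param site exists

# hand the now-static unconstrained parameters straight to torch.optim
params = [p for _, p in pyro.get_param_store().named_parameters()]
optim = torch.optim.LBFGS(params)

def closure():
    optim.zero_grad()
    return elbo.loss_and_grads(model, guide)

for _ in range(20):
    optim.step(closure)
```

The closure matters because LBFGS re-evaluates the loss several times per step; any parameter created only after the optimizer was constructed would never be optimized, which is why plain SVI's dynamic parameter creation breaks it.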