Inconsistent loss values with & without vectorization #414

Closed
hawkrobe opened this issue Oct 30, 2017 · 5 comments

Consider the vectorized and non-vectorized versions of the same hierarchical regression model. The main differences are that the vectorized version (see the sketch after this list):

  • uses iarange instead of irange
  • uses a single observe from a 1 x batch_size dimensional distribution instead of batch_size separate observes from 1 dimensional distributions
  • optimizes 1 x k dimensional params for a single intercepts sample site instead of k separate 1 dimensional params for k sample sites
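
To make the comparison concrete, here is a minimal sketch of the two styles. This is not the code from this issue: it assumes the pre-release Pyro primitives discussed in this thread (irange, iarange, observe, and Normal instances at sample sites), and the names num_subjects, b0, sigma_subj, sigma_obs, and data are all hypothetical.

    import pyro
    from pyro.distributions import Normal

    # Non-vectorized: k separate 1-dimensional sample sites and
    # batch_size separate observes (hypothetical names throughout).
    def model_unvectorized(data):
        for i in pyro.irange('subjects', num_subjects):
            intercept_i = pyro.sample('intercept_{}'.format(i),
                                      Normal(b0, sigma_subj))
            pyro.observe('obs_{}'.format(i),
                         Normal(intercept_i, sigma_obs),
                         data[i])

    # Vectorized: one sample site drawing all intercepts at once and a
    # single observe against the whole data tensor. The shape passed to
    # expand() here is exactly what the fix later in this thread changes.
    def model_vectorized(data):
        with pyro.iarange('subjects', num_subjects):
            intercepts = pyro.sample('intercepts',
                                     Normal(b0.expand(num_subjects),
                                            sigma_subj.expand(num_subjects)))
            pyro.observe('obs',
                         Normal(intercepts, sigma_obs.expand(num_subjects)),
                         data)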

Intuitively, I'd expect these to converge to the same loss; instead the vectorized version converges to ~1900 and the non-vectorized version converges to ~600. The same model written in webppl also converges to ~600, so this might indicate an issue with scaling in the vectorized version?

Incidentally, the mean-field guide sigmas also converge to different values in the vectorized version, although the mu point estimates are the same. The unvectorized version matches webppl, but the vectorized version has much higher certainty.

It's of course very plausible that there's a bug in my implementation of the vectorized model!

@fritzo fritzo self-assigned this Oct 30, 2017
@fritzo fritzo added the bug label Oct 30, 2017
fritzo (Member) commented Oct 30, 2017

Thanks for the report and the reproducible example! I'm looking into it.

@hawkrobe (Author)

@fritzo: thanks! @jpchen has also been working with these models a fair amount and may have thoughts as well.

jpchen (Member) commented Oct 30, 2017

Yep, working with @fritzo on this.

fritzo (Member) commented Oct 31, 2017

After some deep diving with @jpchen, we found that your model and guide had different parameter shapes at the pyro.sample('intercepts', Normal(...)) site. The fix is to change your model parameter shapes:

  subj_bias = pyro.sample('intercepts',
-                         Normal(b0.expand(num_subjects),
-                                sigma_subj.expand(num_subjects)))
+                         Normal(b0.expand(num_subjects, 1),
+                                sigma_subj.expand(num_subjects, 1)))

Sorry for such a difficult-to-diagnose error. We are adding an error message for this case so that debugging will be easier in the future (see #303). Let us know if this works for you.
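
For anyone debugging a similar discrepancy: my reading of this fix (an assumption on my part, not something spelled out in the thread) is that the model's (num_subjects,)-shaped parameters broadcast against the guide's (num_subjects, 1)-shaped parameters, so the site's log-probability terms silently expanded to a square matrix and the loss summed over the wrong number of terms. A minimal PyTorch sketch of the broadcasting rule at play:

    import torch

    k = 10
    model_mu = torch.zeros(k)      # shape (k,), like b0.expand(num_subjects)
    guide_mu = torch.zeros(k, 1)   # shape (k, 1), like the guide's parameter

    # Broadcasting aligns shapes from the right: (k,) becomes (1, k),
    # and a (k, 1) op (1, k) yields (k, k) -- k*k terms instead of k,
    # with no error raised.
    print((model_mu - guide_mu).shape)  # torch.Size([10, 10])

Once both sides are (num_subjects, 1), the elementwise terms stay (num_subjects, 1), and the vectorized and non-vectorized versions optimize the same objective.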

@fritzo fritzo closed this as completed Oct 31, 2017
hawkrobe (Author) commented Oct 31, 2017

@fritzo @jpchen : wow, that's super subtle (and a really surprising consequence -- I would've expected it to either throw an error or noticeably mess the whole thing up instead of just making the loss/uncertainty converge to different numbers!)

Thanks for taking the time to diagnose, and glad it's not a deeper issue!
