More dramatic gain from partial pooling #15

Open
armanboyaci opened this issue Apr 19, 2022 · 0 comments

Comments

armanboyaci commented Apr 19, 2022

First of all, thanks for the book and the video course. The motivation behind multilevel models is clear: partial pooling is an "adaptive compromise" between no pooling and complete pooling. In video lecture 12 (https://speakerdeck.com/rmcelreath/statistical-rethinking-2022-lecture-12?slide=40), the "gain" from partial pooling is shown using cross-validation, but the cross-validation score of partial pooling is very similar to that of complete pooling.

  1. For this particular example, are there other (more convincing?) arguments, beyond cross-validation, for using partial pooling rather than complete pooling?
  2. Is it possible to construct a "simple" example in which the U-shaped cross-validation line is more "dramatic"? (A rough sketch of the kind of setup I have in mind follows below.)
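
To make question 2 concrete, here is a minimal simulation sketch (mine, not from the book or lecture, and in Python/NumPy rather than the course's R code) of a toy setup where the three pooling strategies can be compared on held-out data. All parameter values (`n_groups`, `n_per_group`, `sigma_between`, `sigma_within`) are illustrative assumptions, and the simple empirical-Bayes shrinkage below is only a stand-in for a full multilevel model.

```python
# Minimal sketch: compare no pooling, complete pooling, and a simple
# partial-pooling (shrinkage) estimate on simulated grouped data.
# All parameter values are illustrative assumptions, not from the lecture.
import numpy as np

rng = np.random.default_rng(0)

n_groups = 60        # many groups
n_per_group = 5      # few observations per group
sigma_within = 1.0   # within-group noise
sigma_between = 2.0  # large between-group spread -> pooling choice matters

# True group means drawn around a grand mean of 0
true_means = rng.normal(0.0, sigma_between, n_groups)

# Training and held-out observations for each group
train = rng.normal(true_means[:, None], sigma_within, (n_groups, n_per_group))
test = rng.normal(true_means[:, None], sigma_within, (n_groups, n_per_group))

# No pooling: each group uses its own sample mean
no_pool = train.mean(axis=1)

# Complete pooling: every group uses the grand mean
complete_pool = np.full(n_groups, train.mean())

# Partial pooling via a simple empirical-Bayes shrinkage factor
# (a crude stand-in for the multilevel model's adaptive regularization)
var_within = sigma_within**2 / n_per_group
var_between = max(no_pool.var(ddof=1) - var_within, 1e-9)
shrink = var_between / (var_between + var_within)
partial_pool = complete_pool + shrink * (no_pool - complete_pool)

def test_mse(estimates):
    """Out-of-sample squared error on the held-out observations."""
    return np.mean((test - estimates[:, None]) ** 2)

for name, est in [("no pooling", no_pool),
                  ("complete pooling", complete_pool),
                  ("partial pooling", partial_pool)]:
    print(f"{name:17s} test MSE: {test_mse(est):.3f}")
```

The intuition this sketch is meant to probe: complete pooling's out-of-sample error grows with the between-group spread, while no pooling suffers when per-group samples are small, so a large `sigma_between` combined with a small `n_per_group` should produce a much more pronounced U shape, whereas shrinking `sigma_between` toward zero pushes the three estimates (and their scores) back together, which I suspect is closer to the lecture's example.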

Thanks in advance.
