Improve HSGP and ZeroInflated / Hurdle distributions docs #7189
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@ Coverage Diff @@
##             main    #7189      +/-   ##
==========================================
- Coverage   92.26%   91.84%   -0.43%
==========================================
  Files         100      100
  Lines       16880    16880
==========================================
- Hits        15574    15503      -71
- Misses       1306     1377      +71
Failing test seems completely unrelated. Should I just rerun it?
    # The centered approximation can be more efficient when
    # the GP is stronger than the noise
    # beta = pm.Normal("beta", sigma=sqrt_psd, size=gp._m_star)
    # f = pm.Deterministic("f", phi @ beta)
Out of curiosity: is this something empirical, or is there a general statement about it? :)
Empirical on my side, but I know Bill also told me that. I'm guessing this is similar to the fact that the centered Normal parametrization works better in a hierarchical model for groups that have a lot of data.
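The intuition in the thread can be illustrated outside of any sampler: a centered draw `beta ~ Normal(0, sigma)` and a non-centered draw `beta = sigma * z` with `z ~ Normal(0, 1)` define the same distribution; they differ only in the geometry the sampler has to explore. A minimal NumPy sketch (variable names here are illustrative, with `sigma` standing in for the `sqrt_psd` scale from the HSGP snippet above, not code from this PR):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0  # stands in for sqrt_psd, the spectral-density scale in the HSGP snippet
n = 200_000

# Centered parametrization: the scale appears directly in the prior on beta
beta_centered = rng.normal(0.0, sigma, size=n)

# Non-centered parametrization: draw a standard Normal, then scale deterministically
z = rng.standard_normal(n)
beta_noncentered = sigma * z

# Both target the same Normal(0, sigma**2) distribution
print(beta_centered.std(), beta_noncentered.std())
```

The two forms are statistically equivalent; the comments in the diff suggest preferring the centered form when the GP signal dominates the noise, mirroring the usual hierarchical-model advice that centering helps when the data are informative.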
If someone disagrees with the centered thing let us know so we can revert.
@AlexAndorra small nitpick: it would have been better to have two separate commits for the two unrelated changes (Mixture, HSGP)
True. Noted for next time @ricardoV94
Description
Just a small PR to improve and fix some typos in the doc pages of:
- HSGP (especially the `prior_linearized` method)
- ZeroInflated distributions (switched from "variates" to "draws", which is more common and much clearer when teaching)
- Hurdle distributions (same change as the previous point)

Ready for review and merge
Checklist
Type of change
📚 Documentation preview 📚: https://pymc--7189.org.readthedocs.build/en/7189/