Model Performance package #8
Note for the future: logo ideas could include (to keep consistency ^^) a roaring monster with either a small person measuring its height with a ruler, or the monster in front of a jury holding up panels with grades (e.g., 7 - 9 - 8), with a different colour scheme to both keep consistency in content and theme and underline the package's distinctiveness.
If we keep that theme, maybe the distribution in bayestestR's logo could be a ghost (inspired by this)... And report's logo could be another monster next to a black-and-white sketchy drawing of that same monster on a paper sheet (representing the "report" of an object on a sheet)... Alright, I think you have to stop me, I'm going too far haha
However, philosophically, this "monster" analogy makes sense: doing stats, using R, and fitting statistical models are often seen as frightening things that are too complex or hard to overcome, and the goal of easystats is to help users defeat these beasts and rule the unruly models 😅
I also like the "monster" analogy, or something similar. A ghost would indeed fit bayestestR. I also use the paranormal-distribution ghost in one of my lectures... :-)
Note for the future related to the performance package: someone recently dug out an old thread related to the R2 for Bayesian models. To my knowledge, no package implements an easy way of extracting different R2 metrics for Bayesian mixed models (and their posterior distributions). However, it seems to be feasible (hence my original question on sjstats 😅). I reckon that if we succeed in implementing such R2 computation for Bayesian mixed models, it would be a very useful star feature (that could attract new users).
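To illustrate the idea, here is a minimal sketch (all names and numbers simulated, not from any package) of how an R2 posterior distribution could be obtained: given a draws-by-observations matrix of posterior linear predictions (in practice it would come from something like `posterior_linpred()`), one can compute an R2 value per draw via a variance decomposition, yielding a full posterior distribution of R2 rather than a point estimate.

```r
set.seed(42)
n_draws <- 1000; n_obs <- 50
x <- rnorm(n_obs)
y <- 2 * x + rnorm(n_obs)

# fake posterior draws of the slope, standing in for real MCMC samples
beta <- rnorm(n_draws, mean = 2, sd = 0.1)
linpred <- outer(beta, x)  # n_draws x n_obs matrix of linear predictors

# per-draw R2: variance of the fit over total variance (fit + residual)
r2_posterior <- apply(linpred, 1, function(mu) {
  var_fit <- var(mu)
  var_res <- var(y - mu)
  var_fit / (var_fit + var_res)
})

# summarise the posterior distribution of R2
median(r2_posterior)
quantile(r2_posterior, c(0.05, 0.95))
```

This is only a sketch of the computation; a real implementation would extract the posterior predictions from the fitted Bayesian (mixed) model object instead of simulating them.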
Seems like the name "performance" is available on CRAN :)
This could be a good name for covering the computation of all goodness of fit metrics, i.e., whether a model is performant in its prediction of the outcome.
From a design perspective, there could be a set of functions implementing the "computation" per se of non-base indices (Tjur's, Nakagawa's, etc.), starting with `compute_R2tjur`, `compute_R2nakagawa`, and a set of functions for retrieving these indices from the models, `performance_R2tjur(model)` etc.

Or, only one set of functions, `performance_R2foo()`, that would be applicable both to vectors and matrices (used for index computation) and to models (which would then retrieve the relevant things from the model and do the index computation using the former function).