find_ (get_?) algorithm #38
Yes, that would be a function that fits into insight. I bet it's less straightforward for brms models... For which models would this make sense?
Especially Bayesian models (distinguishing between MCMC, fullrank and meanfield), and frequentist models where the parameters are customizable (lme4). For fixed algorithms, we could hard-code the algorithm used (for instance, "OLS" for …). However, my view of the different packages and models is much narrower than yours, so I am not sure about the other cases of application. But I still think it's worth it to start with a few supported models, and then eventually expand it depending on time, demand and so on...
I especially had mixed models in mind, so functions like glmmTMB, lmer, glmer, lme, mixed_model, glmmPQL?!?
I guess it's especially the optimizers that differ, not so much the algorithm.
Ok, I implemented a basic draft, but I have the feeling we should ask some mixed-models experts about what might be important to return. |
That's super cool, great work! Maybe we could post an issue on lme4 and glmmTMB to ask for confirmation and thoughts?
I think we can take the current implementation for now, and then later check #38 (comment) more in detail. |
I believe the solution here is to differentiate between estimands, estimators, and estimation algorithms.
Now, the situation with mixed models is somewhat more complex because we start approximating things. We start by picking either a REML or an ML estimator. But calculating these things out exactly isn't feasible or desirable for some reason, so instead we come up with a new estimator that approximates the original estimator. Whether you want to think about these approximations as the same as the original estimator or a new separate thing is sort of hazy. The approximations have different properties than the original estimator, but morally they're trying to be the same thing. Anyway, if someone told me they fit a mixed model, I would want to know:
I have a paper draft that goes into much more detail that I would be happy to share if you'd like.
@alexpghayes Thanks for the clarification! From that it seems that our
However, these are breaking changes, hence must be carefully considered and thoroughly described.
That's great, please do so :)
I think there are lots of reasonable ways to split the functions, but in the end users will want to know both the estimator and the estimation algorithm. I would probably return both of these pieces of information in a list from a function.
I think going by moral resemblance is very reasonable for mixed models. I like the idea of explicitly telling the user the approximation method as well.
I don't know enough about Bayes to distinguish between estimators and estimation algorithms in the MCMC world. I imagine someone from the Stan crew could clarify pretty quickly, though.
Will you shoot me an email at
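One way to realize the suggestion of returning both pieces of information in a single list would be an S3 method along these lines (a hypothetical sketch; the method name and return structure are illustrative, not insight's actual API):

```r
# Hypothetical sketch: return the estimator and the estimation algorithm
# together in one list for an lme4 model. Names are illustrative only.
find_algorithm.merMod <- function(x, ...) {
  list(
    # REML vs. ML: which estimator was targeted
    estimator = if (lme4::isREML(x)) "REML" else "ML",
    # the numerical optimizer actually used to fit the model
    algorithm = x@optinfo$optimizer
  )
}
```

This keeps the estimator/algorithm distinction explicit in the return value, so users see both at a glance.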
Although the fitting algorithm plays an important role, it is often unreported or overlooked. Surprisingly, accessing it is not really straightforward.
What do you think about a function that does that?
Here's a draft:
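The draft itself is not shown in this thread; as a rough idea of the shape such a function could take, here is a minimal hypothetical sketch using S3 dispatch (all names are illustrative, not the actual implementation):

```r
# Hypothetical sketch of a generic that reports the fitting algorithm.
find_algorithm <- function(x, ...) {
  UseMethod("find_algorithm")
}

# For plain linear models the algorithm is fixed and can be hard-coded.
find_algorithm.lm <- function(x, ...) {
  list(algorithm = "OLS")
}

# Fallback for model classes without a dedicated method.
find_algorithm.default <- function(x, ...) {
  warning("Don't know how to retrieve the algorithm for this model class.")
  NULL
}
```

Under this sketch, `find_algorithm(lm(mpg ~ wt, data = mtcars))` would return `list(algorithm = "OLS")`, and model classes with variable settings (optimizers, REML vs. ML, MCMC variants) would get their own methods.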