
Further performance metrics to add #36

Open
strengejacke opened this issue Apr 8, 2019 · 5 comments
Labels: Feature idea 🔥 New feature or request

Comments

@strengejacke (Member)

This would definitely fit.

Other things that come to my mind when I think of the scope:

  • Other indices used in the structural equation modeling field (this bunch of guys). As they are computed by default by lavaan, this would mostly consist of extractors consistent with the easyverse (a minimal sketch follows below this quote).
  • Convenience methods for PCAs / Factor Analysis, returning the % of variance explained.
  • I think that in general, and in the future, the easyverse (that is a thing now :) methods could be useful in the machine learning world, where people struggle with different models/packages. Providing a unifying syntax for extracting, interpreting, and understanding their models would be much appreciated, although this will probably have to wait for the help of a future contributor who is an expert in this kind of thing. Still, providing the tools to bridge the regression world with the ML world could be quite cool.

Originally posted by @DominiqueMakowski in #14 (comment)
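
To make the first bullet concrete, a minimal sketch of what such an extractor could look like, assuming lavaan::fitMeasures() as the backend (the name model_fit_indices() and the chosen subset of indices are hypothetical, not an existing easystats API):

```r
library(lavaan)

# Hypothetical extractor: return a consistent subset of SEM fit indices
# as a data frame, using lavaan's fitMeasures() under the hood
model_fit_indices <- function(fit, indices = c("cfi", "tli", "rmsea", "srmr")) {
  as.data.frame(t(fitMeasures(fit, indices)))
}

# Example with the Holzinger & Swineford data shipped with lavaan
model <- "visual  =~ x1 + x2 + x3
          textual =~ x4 + x5 + x6
          speed   =~ x7 + x8 + x9"
fit <- cfa(model, data = HolzingerSwineford1939)
model_fit_indices(fit)
```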

@strengejacke (Member, Author)

All the *_performance functions compute in-sample measures. From my point of view, the word "performance" suggests how well the model works out-of-sample. Would it be possible to add out-of-sample measures like RMSE, MAE, and out-of-sample R², and, for classification, AUC, precision, recall, and so on?
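
As a rough illustration of what is meant here, a base-R sketch of out-of-sample RMSE, MAE, and R² on a simple train/test holdout split (dataset, formula, and split ratio are arbitrary choices for the example):

```r
set.seed(123)

# Simple holdout split on mtcars (illustrative only)
idx   <- sample(nrow(mtcars), size = floor(0.7 * nrow(mtcars)))
train <- mtcars[idx, ]
test  <- mtcars[-idx, ]

model <- lm(mpg ~ wt + hp, data = train)
pred  <- predict(model, newdata = test)

# Out-of-sample error metrics on the held-out data
rmse_oos <- sqrt(mean((test$mpg - pred)^2))
mae_oos  <- mean(abs(test$mpg - pred))
# Out-of-sample R^2: 1 - SSE / SST, relative to the test-set mean
r2_oos   <- 1 - sum((test$mpg - pred)^2) / sum((test$mpg - mean(test$mpg))^2)

c(RMSE = rmse_oos, MAE = mae_oos, R2 = r2_oos)
```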

@bwiernik (Contributor)

I have formulas for all of the SEM fit indices under the sun. I can take that.

@bwiernik (Contributor) commented Apr 13, 2021

Regarding the out-of-sample stuff, I would also love it if performance could do various true cross-validation things, both individual residuals and aggregates like RMSE, R², and deviance. Those are currently strewn across various packages.
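
For instance, a minimal base-R sketch of the kind of k-fold cross-validation workflow this refers to, collecting out-of-fold residuals and aggregating them into cross-validated RMSE and R² (model, data, and k are chosen only for illustration):

```r
set.seed(123)

k     <- 5
folds <- sample(rep(seq_len(k), length.out = nrow(mtcars)))
resid_cv <- numeric(nrow(mtcars))

# Refit on each training fold and store out-of-fold residuals
for (i in seq_len(k)) {
  train <- mtcars[folds != i, ]
  test  <- mtcars[folds == i, ]
  fit   <- lm(mpg ~ wt + hp, data = train)
  resid_cv[folds == i] <- test$mpg - predict(fit, newdata = test)
}

# Aggregate cross-validated metrics from the pooled residuals
rmse_cv <- sqrt(mean(resid_cv^2))
r2_cv   <- 1 - sum(resid_cv^2) / sum((mtcars$mpg - mean(mtcars$mpg))^2)
c(RMSE_CV = rmse_cv, R2_CV = r2_cv)
```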

@bwiernik (Contributor) commented Apr 13, 2021

Regarding PCA/factor analysis: to meet most user expectations, that's going to require at least a handful of dependencies like GPArotation. The various nuances of rotations and extraction methods are pretty fiddly, but they are also already well implemented in psych and fungible. I would suggest these functions be wrappers, probably starting with princomp(), factanal(), psych::fa(), and psych::fa.parallel().
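
For the variance-explained piece mentioned earlier, even base R gets most of the way there; a hypothetical convenience wrapper around prcomp() (the name variance_explained() is not an existing function) might look like this, with a psych::fa() backend presumably slotting in similarly:

```r
# Hypothetical convenience function: % of variance explained per component
variance_explained <- function(x, ...) {
  pc  <- prcomp(x, scale. = TRUE, ...)
  var <- pc$sdev^2
  data.frame(
    Component    = paste0("PC", seq_along(var)),
    Variance_Pct = 100 * var / sum(var),
    Cumulative   = 100 * cumsum(var) / sum(var)
  )
}

variance_explained(mtcars)
```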

@bwiernik (Contributor)

> Providing a unifying syntax for extracting, interpreting, and understanding their models would be much appreciated, although this will probably have to wait for the help of a future contributor who is an expert in this kind of thing.

Isn't that essentially the aim of tidymodels?
