
Which sandbox models are “real”? #4423

Open
jbrockmendel opened this issue Apr 4, 2018 · 1 comment

Comments

@jbrockmendel
Contributor

It would be helpful to be able to distinguish non-abandoned code (e.g. GMM, apparently) from everything else.

@josef-pkt
Member

I have no overview.
Several functions and classes are used outside of sandbox. Those should mostly have good unit tests.
GMM is (should be) the only large advertised module and model.
Some are superseded by PRs that have not been merged, e.g. two GSoC projects, for sysreg and nonlinearLS.
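One rough way to approximate which sandbox modules are "real" is to scan the source tree for imports of `statsmodels.sandbox` from outside the sandbox itself. This is a sketch under assumptions: it takes the path to a local checkout, uses only static `ast` parsing (so dynamic imports are missed), and the function name `sandbox_imports` is hypothetical, not part of statsmodels.

```python
import ast
import pathlib
from collections import Counter

def sandbox_imports(source_root):
    """Count imports of statsmodels.sandbox submodules by modules outside the sandbox."""
    counts = Counter()
    for path in pathlib.Path(source_root).rglob("*.py"):
        # Skip files inside the sandbox itself; we only want external users.
        if "sandbox" in path.parts:
            continue
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except (SyntaxError, UnicodeDecodeError):
            continue
        for node in ast.walk(tree):
            if isinstance(node, ast.ImportFrom) and node.module:
                if node.module.startswith("statsmodels.sandbox"):
                    counts[node.module] += 1
            elif isinstance(node, ast.Import):
                for alias in node.names:
                    if alias.name.startswith("statsmodels.sandbox"):
                        counts[alias.name] += 1
    return counts
```

Modules with nonzero counts are the ones depended on from core code and therefore likely covered by unit tests elsewhere; modules never imported from outside are candidates for the "abandoned" pile, though advertised standalone modules like GMM would still need a manual look.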

sandbox.distributions was in reasonably good shape; I had worked on it on and off for several years. However, it has low or mixed unit test coverage, and it is not clear which functions should become public or what the API should be. I also have an unfinished PR to refactor the GOF tests. Some of it has been superseded by enhancements in scipy.stats.distributions since I wrote the sandbox versions.

Some of the sandbox code consists of early experiments that have become obsolete with development in the statsmodels "core".
There are still a few modules that pre-date statsmodels, which might be obsolete or could provide hints for enhancements.
The rest is mostly unfinished code including helper functions for enhancements that we don't have yet.
