
Add stderror and meandiff functions for TTest #151

Open · wants to merge 13 commits into master

Conversation

pdeffebach
Contributor

Fixes issue #150 by adding two small functions for the TTest abstract type.

There are probably many more types we could define these for if we want.

It's clear that a `getproperty` call gives the user what they need, but it's good practice to wrap all of these in accessor functions.
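
Roughly, assuming the TTest subtypes keep the estimate and its standard error in `xbar` and `stderr` fields (which is what the `getproperty` access relies on), the new methods would look something like this:

```julia
using HypothesisTests
import StatsBase: stderror

# Sketch only — the `stderr` and `xbar` field names are assumed from the
# existing TTest structs.
stderror(t::HypothesisTests.TTest) = t.stderr

# Estimated mean (or difference in means) being tested against μ0.
meandiff(t::HypothesisTests.TTest) = t.xbar
```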

@nalimilan
Member

I guess it makes sense to use stderror since it already exists, but is it really worth exporting a new meandiff function for that?

A few other ideas:

  • print the standard error in the test output?
  • add it to the docstring
  • add tests

@pdeffebach
Contributor Author

Fair enough about the meandiff.

Should I go ahead and make all of these changes for all the tests that have confint defined? If you have a confidence interval, surely you have a standard error.
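
For a symmetric t-based interval, for instance, the standard error is just the half-width divided by the critical value (illustration only, not how I'd implement it):

```julia
using HypothesisTests, Distributions, Statistics

x = randn(20)
lo, hi = confint(OneSampleTTest(x))   # default 95% interval

# Half-width of the symmetric interval over the t critical value
# recovers the standard error of the mean.
se = (hi - lo) / (2 * quantile(TDist(length(x) - 1), 0.975))
se ≈ std(x) / sqrt(length(x))   # true
```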

@nalimilan
Member

Should I go ahead and make all of these changes for all the tests that have confint defined? If you have a confidence interval, surely you have a standard error.

If it makes sense for those tests, yes.

@pdeffebach
Contributor Author

Now that I'm on winter break I will work on this more. I will start by making a list of all the tests and whether or not stderror is well defined.

@pdeffebach
Contributor Author

To determine which tests should be updated, I searched for confint; the following tests have it implemented (a sketch of the straightforward cases follows the list):

  • t-test: Implemented easily
  • z-test: Implemented easily
  • Binomial: Tough because there are multiple methods of calculating it. It's not clear to me whether the standard error is just sqrt(p * (1 - p) / n) in every case, with only the confidence intervals computed differently for each method.
  • Partial correlation test: Wikipedia tells me the standard error is just sqrt(1 / dof(test)). So this is implemented.
  • Fisher's exact test: I think Fisher's noncentral hypergeometric distribution has a well-defined standard deviation available in Distributions.
  • Power divergence test: Googling tells me that this statistic doesn't have a standard error associated with it.
  • Signed rank test: As far as I can tell, the standard error is not commonly used for this test.
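
Concretely, the z and partial-correlation cases would each be about one line (type and field names here are my best reading of the current source, so treat this as a sketch):

```julia
using HypothesisTests
import StatsBase: stderror, dof

# z-tests store the standard error of the estimate, like the t-tests.
stderror(t::HypothesisTests.ZTest) = t.stderr

# Partial correlation test, using the sqrt(1 / dof) formula mentioned above;
# assumes a dof method exists for CorrelationTest.
stderror(t::HypothesisTests.CorrelationTest) = sqrt(1 / dof(t))
```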

@pdeffebach pdeffebach changed the base branch from master to jmw/simulation_tests January 3, 2020 23:56
@pdeffebach pdeffebach changed the base branch from jmw/simulation_tests to master January 3, 2020 23:56
@pdeffebach
Contributor Author

I totally forgot about this pull request for over a year, but I just added tests. I think this is ready to merge.

I didn't add stderror for Fisher's exact test because I don't understand it well enough. I can always add it later.
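
The tests are along these lines (illustrative values only — the actual test file may differ):

```julia
using Test, HypothesisTests, Statistics
import StatsBase: stderror   # the generic this PR adds methods to

x = [1.5, 2.3, 0.7, 1.1, 2.9]
t = OneSampleTTest(x)

# For a one-sample t-test the standard error is std(x) / sqrt(n).
@test stderror(t) ≈ std(x) / sqrt(length(x))
```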

@nalimilan
Member

Thanks. There should probably be documentation for these methods, just like what we have for confint and pvalue?

@pdeffebach
Contributor Author

Thanks! I added tests for the empty function and then edited all the `Implements: ...` entries in the docstrings for the types.
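
Roughly, each affected docstring now lists the new method in its Implements: line, e.g. (abridged, wording approximate):

```julia
"""
    OneSampleTTest(x::AbstractVector{<:Real}, μ0::Real = 0)

Perform a one-sample t-test of the null hypothesis that the data in `x`
come from a distribution with mean `μ0`.

Implements: `pvalue`, `confint`, `stderror`
"""
```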

@pdeffebach
Contributor Author

This PR can be merged, I think. Sorry it has languished.

@pdeffebach pdeffebach changed the base branch from master to gh-pages November 4, 2020 16:06
@pdeffebach pdeffebach changed the base branch from gh-pages to master November 4, 2020 16:06