ENH: add summary/table foot_notes for method options #6633

Open

josef-pkt opened this issue Apr 8, 2020 · 3 comments
josef-pkt (Member) commented Apr 8, 2020

(I had a similar idea a long time ago, but it didn't find a positive response then. It's time to add it anyway.)

Problem: if we have method options, then results outside of models, e.g. hypothesis tests, currently don't keep any information about which method was used.
This matters when quantities with the same names, like statistic, std, p-value, conf_int, can be computed by different methods.

The basic idea is to add a method attribute that records which method was used. We already have this to a large extent in the top table of Results.summary, and similarly, but only for warnings, in the trailing text of summary.

A more elaborate idea is to have a footnotes (foot_notes?) attribute that contains one or more statements similar to the trailing text in summary. The user can print it, or we can add it as trailing text if there is a summary method (a sketch is at the end of this comment).

e.g.
"Random effects are based on the <Paule-Mandel> estimate of the between variance"
"Random effect estimates are based on the <DL> estimate of the between variance with HKSJ/WLS scale corrrection to the standard errors. P-values are based on t-distribution."
#6632

This will not be relevant in library use, where we just want to get the core numbers returned.
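
A minimal sketch of the footnotes idea, assuming a hypothetical results container; the class, attribute, and option names here are illustrative, not existing statsmodels API:

```python
# Hypothetical results container: records the method used and collects
# footnote strings that a summary could append as trailing text.
class CombineResults:
    def __init__(self, effect, sd_eff, method_re="dl", use_t=False):
        self.effect = effect
        self.sd_eff = sd_eff
        self.method_re = method_re  # which between-variance estimator was used
        self.use_t = use_t
        self.footnotes = [
            "Random effects are based on the %s estimate of the between "
            "variance." % method_re.upper()
        ]
        if use_t:
            self.footnotes.append("P-values are based on the t-distribution.")

    def summary_text(self):
        # library use can ignore footnotes and read only the core numbers
        lines = ["effect: %g   std err: %g" % (self.effect, self.sd_eff)]
        lines.extend(self.footnotes)
        return "\n".join(lines)


res = CombineResults(0.52, 0.11, method_re="pm", use_t=True)
print(res.summary_text())
```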

josef-pkt (Member, Author) commented:

Another idea: add an attribute info or notes to the pandas DataFrames that are returned, similar to patsy's design_info.

For example, test_results.summary_frame(**options) could attach the options used when computing the summary frame, e.g. alpha for the confidence interval, use_t, and other options that override defaults, but it could also include model/test options.
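
A sketch of how this metadata could travel with the returned frame, using the (experimental) pandas DataFrame.attrs dict; the function and keys here are illustrative:

```python
import pandas as pd


def summary_frame(params, bse, alpha=0.05, use_t=False):
    frame = pd.DataFrame({"coef": params, "std err": bse})
    # record the options that affect the numbers, similar to patsy design_info
    frame.attrs["info"] = {"alpha": alpha, "use_t": use_t}
    return frame


sf = summary_frame([1.2, -0.4], [0.3, 0.1], alpha=0.1)
print(sf.attrs["info"])  # {'alpha': 0.1, 'use_t': False}
```

One caveat: attrs is not guaranteed to propagate through all DataFrame operations, so the information is only reliable on the frame as returned.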

josef-pkt (Member, Author) commented:

A similar idea for the pandas summary_frame: return the notes separately as an option, e.g. return_notes=True, for the case when we have DataFrame-specific options (sketch below).

E.g. if we include TOST equivalence tests, then users need to specify equivalence margins.
An example is adding classes for 1-sample and 2-sample proportions to collect various results for users #3954
It's unclear what we can put into columns to get a relatively clean rectangular DataFrame.
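
A sketch of the return_notes option, keeping the frame rectangular and returning the method/option information as a separate list of strings; the function signature and margin handling are illustrative:

```python
import pandas as pd


def summary_frame(stat, pvalue, margins=(-0.1, 0.1), return_notes=False):
    frame = pd.DataFrame({"statistic": stat, "pvalue": pvalue})
    # DataFrame-specific options go into notes instead of extra columns
    notes = ["TOST equivalence margins: low=%g, upp=%g" % margins]
    if return_notes:
        return frame, notes
    return frame


frame, notes = summary_frame([2.1], [0.03], margins=(-0.2, 0.2),
                             return_notes=True)
print(frame)
print("\n".join(notes))
```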

josef-pkt (Member, Author) commented:

Also related:
PredictionResults instances currently don't hold any meta information, for example which statistic is predicted.

I could add a generic meta_info dict as an optional argument to the __init__ of the PredictionResults classes.
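
A minimal sketch, with a stand-in class rather than the actual PredictionResults:

```python
class PredictionResults:
    def __init__(self, predicted, var_pred, meta_info=None):
        self.predicted = predicted
        self.var_pred = var_pred
        # free-form metadata, e.g. which statistic is predicted
        self.meta_info = meta_info if meta_info is not None else {}


pred = PredictionResults([0.8, 0.3], [0.02, 0.01],
                         meta_info={"statistic": "predicted mean"})
print(pred.meta_info["statistic"])
```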
