(I had a similar idea a long time ago, but it didn't find a positive response then. It's time to add it anyway.)
Problem: if we have method options, then results outside of models (e.g. hypothesis tests) currently don't keep information about which method was used.
This is important when the same names, like statistic, std, p-value, conf_int, can be computed based on different methods.
The basic idea is to add a method attribute to provide information about the method used. We have this to a large extent in the top table of Results.summary, and similarly, but only for warnings, in the trailing text of summary.
A more elaborate idea is to have a footnotes (foot_notes?) attribute that contains a statement or statements similar to the trailing text in summary. The user can print it, or we can add it as trailing text if there is a summary method.
e.g.
"Random effects are based on the <Paule-Mandel> estimate of the between variance"
"Random effect estimates are based on the <DL> estimate of the between variance with HKSJ/WLS scale correction to the standard errors. P-values are based on t-distribution." #6632
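The method/footnotes idea could be sketched as a small results holder; the class and attribute names here (HolderWithNotes, footnotes) are hypothetical illustrations, not existing statsmodels API:

```python
class HolderWithNotes:
    """Hypothetical results holder carrying method info and footnotes."""

    def __init__(self, statistic, pvalue, method, footnotes=None):
        self.statistic = statistic
        self.pvalue = pvalue
        self.method = method  # short identifier for the method used
        self.footnotes = footnotes or []  # explanatory trailing statements

    def summary(self):
        lines = [
            f"statistic: {self.statistic:.4f}",
            f"p-value:   {self.pvalue:.4f}",
            f"method:    {self.method}",
        ]
        # append footnotes as trailing text, as in Results.summary
        lines.extend(self.footnotes)
        return "\n".join(lines)


res = HolderWithNotes(
    2.1, 0.036, method="DL",
    footnotes=["Random effects are based on the DL estimate "
               "of the between variance."],
)
print(res.summary())
```

A library user who only wants the core numbers can ignore method and footnotes entirely and read statistic/pvalue directly.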
This will not be relevant in library use, where we just want the core numbers returned.
Another idea: add an attribute info or notes to the pandas DataFrames that are returned, similar to patsy's design_info.
Example: test_results.summary_frame(**options) could attach the options used when computing the summary frame, e.g. alpha for the confidence interval, use_t, and other options that can override defaults; it could also include model/test options.
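One way to attach such metadata is the pandas DataFrame.attrs dict; a minimal sketch, where the summary_frame function and its option names are illustrative assumptions (note that attrs is documented as experimental in pandas):

```python
import pandas as pd


def summary_frame(alpha=0.05, use_t=True):
    """Hypothetical summary_frame that records its options on the result."""
    df = pd.DataFrame({"coef": [1.2, -0.5], "pvalue": [0.01, 0.20]})
    # DataFrame.attrs is a plain dict reserved for user metadata
    df.attrs["options"] = {"alpha": alpha, "use_t": use_t}
    return df


frame = summary_frame(alpha=0.10)
print(frame.attrs["options"])  # the options used to compute the frame
```

This keeps the rectangular DataFrame clean while still letting users recover which options produced it.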
A similar idea for the pandas summary_frame: return the notes separately as an option, e.g. return_notes=True, for the case when we have DataFrame-specific options.
E.g. if we include TOST equivalence tests, then users need to specify equivalence margins.
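The return_notes option could look like the following sketch; the function name tost_summary_frame and its signature are hypothetical, chosen to mirror the TOST example with equivalence margins:

```python
import pandas as pd


def tost_summary_frame(low, upp, return_notes=False):
    """Hypothetical TOST summary; optionally returns notes on the margins."""
    df = pd.DataFrame({"pvalue": [0.03]})
    notes = [f"Equivalence margins: low={low}, upp={upp}"]
    if return_notes:
        # caller asked for the DataFrame-specific notes separately
        return df, notes
    return df


df, notes = tost_summary_frame(-0.2, 0.2, return_notes=True)
print(notes[0])
```

Returning the notes as a separate object keeps the default return type a plain DataFrame, so existing callers are unaffected.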
Example: if we add classes for 1-sample and 2-sample proportions to collect various results for users (#3954), it's unclear what we can put in columns to get a relatively clean rectangular DataFrame.