How to compile the results?
For most analyses, the following components should preferably be compiled:
- General information
  - Name of the analysis
  - The variables involved in the analysis and their measurement levels
  - Preconditions of the analysis, and whether any of them are violated (see the additional docstring in cogstat.py)
  - CogStat should handle all variables with any measurement level. When this is not possible, add a message that the analysis is not available. If the analysis does not make sense for the chosen variable with its given measurement level, that should be mentioned in the Preconditions above (in the GUI version, in the longer term, these settings should disable the OK button in the dialog). If the analysis makes sense but is unavailable in CogStat, add an explicit message that the analysis is not implemented in CogStat yet.
- Raw data
  - (If several variables are given (e.g., several grouping variables, repeated measures variables), keep the order of those variables in the output.)
  - (Display the group levels alphabetically.)
  - Number of observed and missing cases. Missing cases are dropped here, and only the observed cases are used in the rest of the analysis.
  - Graphically display the raw data without any additional information.
- Sample properties
  - (See the variable/value order-related viewpoints in the Raw data section.)
  - Descriptives numerically
    - Add standardized effect sizes here
  - Graph displaying the descriptives
    - Add raw data (graph with individual data)
- Population properties
  - (See the variable/value order-related viewpoints in the Raw data section.)
  - Assumptions
    - Interval estimates and hypothesis tests may require assumptions to be met. One possibility is to include those assumption checks here; alternatively, they could be included in the specific estimation or test part. If the assumptions are common, it may be more parsimonious to include them only once, in this subsection.
    - When the assumptions are violated but an alternative solution is not available (either in CogStat or, more generally, in the literature), add a warning message that the inferential statistics may be biased.
    - In fact, all assumptions are properties of the sample/population that may be of interest in themselves. For example, differences in the standard deviations of the groups or the normality of variables can be essential in some cases. From that viewpoint, it wouldn't make sense to have an Assumptions subsection; instead, different properties should be listed, where some properties may be assumptions for the calculation methods of other properties. On the other hand, most research/researchers are interested only in the common properties, e.g., the difference of the means. To reflect this latter viewpoint and to make the output similar to what can be seen in papers, textbooks, etc., assumptions are handled in a separate subsection of the CogStat output at the moment (but we may change this in later releases).
    - (Assumptions are not relevant to the properties of the sample, since those indexes can be interpreted in a more flexible way.)
  - Point and interval estimations (confidence interval or credible interval; issue #20, issue #28)
    - Display them in a table: point and interval estimations are the columns, and parameters are the rows (a sketch follows this list).
    - Add standardized effect sizes here
  - Graphs displaying the population estimations
  - Hypothesis test results, checking the appropriate assumptions
    - Hypothesis tests should follow interval estimations because interval estimations can most probably be interpreted more easily.
    - Sensitivity power analysis: what is the effect size that has appropriate power with the current sample size? (issue #120)
    - Use the following steps:
      - First, explicitly state what property/situation is tested, formulating the null hypothesis, possibly in everyday terms.
      - Second, specify what test will be used, or what tests could be used depending on the assumptions. Print the main steps and the reasons (variable type, assumptions, etc.) why the specific test was chosen.
      - Third, if needed, run the assumption check(s). Be explicit about what belongs to the assumption check (issue #75).
      - Fourth, if an assumption check was applied, summarize the assumptions and print the test name.
      - Fifth, print the test result.
      - Sixth, if a post hoc test is needed/available, print the post hoc test name and the results.
- APA format: When relevant, results should be in APA format.
- Tables: When possible, and when it gives a denser presentation, tables should be used.
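For the table of point and interval estimations mentioned above, here is a minimal sketch in Python. The function name, the t-based confidence interval, and the sample data are illustrative assumptions, not CogStat's actual API:

```python
import numpy as np
import pandas as pd
from scipy import stats

def estimation_table(samples, confidence_level=0.95):
    """Sketch: point and interval estimates as columns, parameters as rows."""
    rows = {}
    for name in sorted(samples, key=str.lower):  # group levels alphabetically
        data = np.asarray(samples[name], dtype=float)
        mean = data.mean()
        # t-based confidence interval for the mean
        low, high = stats.t.interval(confidence_level, df=len(data) - 1,
                                     loc=mean, scale=stats.sem(data))
        rows['Mean of %s' % name] = [mean, low, high]
    return pd.DataFrame.from_dict(rows, orient='index',
                                  columns=['Point estimate', 'CI low', 'CI high'])

print(estimation_table({'treatment': [5.3, 6.1, 5.8, 6.4],
                        'control': [4.1, 5.0, 4.7, 4.4]}))
```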
Precision
- Descriptive data and parameter estimations: The precision of the results, defined as the number of decimal places (digits to the right of the decimal point), depends on the precision of the imported data. Usually, the number of decimal places of the results is the number of decimal places of the source data plus one. This is in line with the recommendation, "Do not report statistics to a greater precision than is supported by your data simply because they are printed that way by the program" (Wilkinson, 1999, para. 47), because CogStat does not display precision that is not supported by the data. (A sketch of these precision rules follows this list.)
- When the nature of the variable is known, a specific precision could be used independent of the data.
- For p-values, in line with APA style, if the value is >= 0.001, it is displayed with three decimal places and without a leading zero; otherwise, p < .001 is displayed.
- Test statistics of hypothesis tests and correlations are displayed with two decimal places. (TBA: standardized effect sizes in general, reliability indexes, etc.)
- In behavioral data diffusion analysis, three decimal places are used for error rates, reaction times, and diffusion parameter values.
- Table row name sorting: In pivot tables (including behavioral data diffusion analysis), row names should be ordered in a case-insensitive way to be consistent with spreadsheet software packages.
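A minimal sketch of these precision rules in Python; the function names and the string-based detection of source decimal places are illustrative assumptions:

```python
def source_decimals(values):
    """Sketch: largest number of decimal places in the imported data
    (a string-based heuristic; real import code would know the format)."""
    counts = [len(str(v).split('.')[1]) for v in values if '.' in str(v)]
    return max(counts, default=0)

def format_result(value, data_decimals):
    """Display a result with one more decimal place than the source data."""
    return f'{value:.{data_decimals + 1}f}'

def format_p(p):
    """APA rule above: three decimals without a leading zero, or p < .001."""
    return 'p < .001' if p < 0.001 else ('p = %.3f' % p).replace('0.', '.', 1)

print(format_result(4.5667, source_decimals([1.2, 3.45, 6.0])))  # '4.567'
print(format_p(0.0342))   # 'p = .034'
print(format_p(0.00002))  # 'p < .001'
```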
Sensitivity power analysis
- We should prefer effect sizes that are in line with the effect sizes reported elsewhere in the CogStat results. If several effect sizes are calculated elsewhere in CogStat, the sensitivity analysis may include several effect sizes, too.
- Effect sizes that are used in other popular packages (e.g., in G*Power) could also be added.
- Note that statsmodels may not always find the effect size.
- For sensitivity power analyses, Python modules offer various solutions (see the sketch below).
- Missing power analyses: #120
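A minimal sketch of one such solution with statsmodels; the independent-samples t-test, the sample size, and the alpha and power values are arbitrary illustrations:

```python
from statsmodels.stats.power import TTestIndPower

# Sensitivity power analysis: solve for the effect size (Cohen's d) that
# reaches 80% power with the current sample sizes. Leaving effect_size as
# None asks solve_power() to find it; as noted above, the numerical solver
# may not always find a solution.
d = TTestIndPower().solve_power(effect_size=None, nobs1=30, alpha=0.05,
                                power=0.8, ratio=1.0, alternative='two-sided')
print(f'Smallest detectable effect size with n1 = n2 = 30: d = {d:.2f}')
```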
Sorting in figures and tables
These viewpoints apply to both figures and the relevant tables (a sketch follows this list):
- Group levels should be sorted alphabetically.
- Levels of repeated measures factors should follow the order of the levels as specified by the user.
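A minimal sketch of this ordering logic; the function name and arguments are illustrative, and the case-insensitive sorting matches the pivot table rule in the Precision section:

```python
def ordered_levels(levels, user_order=None):
    """Repeated-measures factors keep the user-specified order;
    group levels are sorted alphabetically, case-insensitively."""
    if user_order is not None:
        return [level for level in user_order if level in levels]
    return sorted(levels, key=str.lower)

print(ordered_levels(['beta', 'Alpha', 'gamma']))                   # ['Alpha', 'beta', 'gamma']
print(ordered_levels(['post', 'pre'], user_order=['pre', 'post']))  # ['pre', 'post']
```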
Slow calculations
Some analyses might be slow to run (bootstrapping, some of the diffusion analyses, etc.), and some descriptions note that faster solutions might be preferred. From the viewpoint of published results, the most precise available solution should be preferred. (A research project takes weeks or, more frequently, months or even years to be completed and then published. In this time frame, even the relatively slow data processing solutions are rather fast, usually taking minutes or occasionally hours to run. So after spending months on the work, a researcher probably wouldn't want to lose quality to save a few minutes.) Still, in some exploratory phases of the research, faster but less precise solutions might be reasonable options. These faster but more imprecise results should be used only for a first fast check and not for publications.
- For this reason, CogStat may provide two solutions: a "fast preliminary" version and a "slow precise" version (see the sketch below).
- The user might choose between them in the appropriate dialog with a checkbox (or with another simple solution).
- Because CogStat offers a fast way to check many aspects of the data, the "fast preliminary" option could be the default option.
- When running the fast preliminary version, the output should always warn the user: "This is a fast preliminary calculation. Use the Precise option for calculating the results to be published."
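A minimal sketch of this toggle, using a bootstrap confidence interval as an example; the parameter name, the resample counts, and the use of scipy.stats.bootstrap are illustrative assumptions, not CogStat's actual implementation:

```python
import numpy as np
from scipy import stats

def mean_ci(data, precise=False, confidence_level=0.95):
    """Bootstrap CI of the mean: few resamples for the fast preliminary run,
    many for the slow precise run (the counts are illustrative)."""
    if not precise:
        print('This is a fast preliminary calculation. '
              'Use the Precise option for calculating the results to be published.')
    result = stats.bootstrap((np.asarray(data, dtype=float),), np.mean,
                             n_resamples=9999 if precise else 999,
                             confidence_level=confidence_level)
    return result.confidence_interval

print(mean_ci([4.1, 5.0, 4.7, 5.3, 6.1, 5.8], precise=True))
```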
Widespread but suboptimal methods
Usually, best-practice solutions are preferred for the CogStat pipeline. However, in some cases, it might make sense to use less ideal solutions when those solutions are widespread and/or a transition to a better solution may not be smooth. In such cases, both the recommended and the widespread methods could be used, and the output should warn the user that the suboptimal solutions should be used only for comparing the present result with former results.
Less useful indices
There are several indices, details, etc., that can be calculated, and in fact are calculated in other packages, but are less useful in the sense that alternative indices are easier to interpret or use. For example, skewness and kurtosis are mostly not interesting in themselves but are used to estimate whether a variable is normally distributed; for the latter aim, there are better tools (e.g., hypothesis tests that may also consider the variation of the relevant statistics depending on the sample size). As another example, the standard error is usually not interesting in itself but is, for example, a tool to find an interval estimate; for the latter aim, it is better to check the interval estimate directly. In these cases, the indices shouldn't be displayed, because there are better alternatives for obtaining the relevant information.
Measurement levels
Usually, the nominal and ordinal terms are used for the first two measurement levels. However, for the third level, the interval, scale (e.g., in SPSS), or continuous (e.g., in jamovi) terms are used. CogStat uses the interval term.
- Scale is not recommended because it might be ambiguous, as it can also mean a measurement tool.
- Continuous is imprecise because there can be interval scales that are discrete. In fact, the term is orthogonal to the measurement level, even if the two dimensions correlate.
- There are also ratio-level variables, but in practice they are usually handled as interval variables, and CogStat (similar to other statistical software packages) does not offer ratio-specific statistical calculations.
CogStat calculates some statistics for interval variables that are seemingly ordinal statistics, e.g., the median or the Spearman correlation coefficient. This is appropriate because, first, a variable can be handled as a lower measurement level variable (e.g., an interval variable can be handled as an ordinal or nominal variable), although in those cases we drop some part of the information the data includes. Second, ordinal statistics have some attractive features, such as not being sensitive to outliers or to violations of normality.
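For example, handling an interval variable with ordinal statistics; the data below are made up for illustration:

```python
import numpy as np
from scipy import stats

# The outlier strongly distorts the mean and lowers the Pearson correlation,
# while the median and the Spearman correlation are barely affected.
x = np.array([1.2, 2.4, 3.1, 4.8, 50.0])  # interval variable with an outlier
y = np.array([0.9, 2.1, 3.5, 4.2, 6.0])
print('mean:', x.mean(), 'median:', np.median(x))
print('Pearson r:', stats.pearsonr(x, y)[0])
print('Spearman rho:', stats.spearmanr(x, y)[0])
```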
Wilkinson, L. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54(8), 594–604. https://doi.org/10.1037/0003-066X.54.8.594