Some two-sided tests are reporting wrong p-values, and hence wrong hypothesis evaluation results. According to https://support.minitab.com/en-us/minitab/18/help-and-how-to/statistics/basic-statistics/supporting-topics/basics/manually-calculate-a-p-value/ we should calculate the p-value with the following rules:
- if the test assumes a lower tail,
p-value = CDF(t_score). We show this calculation as `probability`.
- if the test assumes a greater (upper) tail,
p-value = 1 - CDF(t_score). We show this calculation as `p_value` for `one_tail` tests.
- if the test is two-sided,
p-value = 2 * (1 - CDF(|t_score|)). Currently we don't take the CDF of the absolute value of the `t_score`; we pass in the raw `t_score` instead. This is what yields the weird results reported in the `p_value` field.
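The three rules above can be sketched in a few lines of Python. This is a minimal illustration, not the project's actual code: function and tail names are made up for the example, and it uses the standard normal CDF (via `math.erf`) where a real t-test would use the Student-t CDF with the appropriate degrees of freedom. The key point is the `abs()` in the two-sided branch:

```python
import math

def normal_cdf(x: float) -> float:
    # Standard normal CDF via the error function (stdlib only).
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_value(t_score: float, tail: str) -> float:
    # tail is one of "lower", "greater", "two_sided" (illustrative names).
    if tail == "lower":
        return normal_cdf(t_score)
    if tail == "greater":
        return 1.0 - normal_cdf(t_score)
    # Two-sided: take the absolute value BEFORE applying the CDF,
    # otherwise a negative t_score produces a "p-value" greater than 1.
    return 2.0 * (1.0 - normal_cdf(abs(t_score)))
```

For example, with t_score = -2.0 the correct two-sided p-value is about 0.046, while the buggy formula 2 * (1 - CDF(-2.0)) gives about 1.95, which is not a valid probability.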