# _hypotests.py
from collections import namedtuple
from dataclasses import dataclass
from math import comb
import numpy as np
import warnings
from itertools import combinations
import scipy.stats
from scipy.optimize import shgo
from . import distributions
from ._common import ConfidenceInterval
from ._continuous_distns import chi2, norm
from scipy.special import gamma, kv, gammaln
from scipy.fft import ifft
from ._stats_pythran import _a_ij_Aij_Dij2
from ._stats_pythran import (
    _concordant_pairs as _P, _discordant_pairs as _Q
)
from scipy.stats import _stats_py

__all__ = ['epps_singleton_2samp', 'cramervonmises', 'somersd',
           'barnard_exact', 'boschloo_exact', 'cramervonmises_2samp',
           'tukey_hsd', 'poisson_means_test']


Epps_Singleton_2sampResult = namedtuple('Epps_Singleton_2sampResult',
                                        ('statistic', 'pvalue'))


def epps_singleton_2samp(x, y, t=(0.4, 0.8)):
    """Compute the Epps-Singleton (ES) test statistic.

    Test the null hypothesis that two samples have the same underlying
    probability distribution.

    Parameters
    ----------
    x, y : array-like
        The two samples of observations to be tested. Input must not have more
        than one dimension. Samples can have different lengths.
    t : array-like, optional
        The points (t1, ..., tn) where the empirical characteristic function is
        to be evaluated. It should be positive distinct numbers. The default
        value (0.4, 0.8) is proposed in [1]_. Input must not have more than
        one dimension.

    Returns
    -------
    statistic : float
        The test statistic.
    pvalue : float
        The associated p-value based on the asymptotic chi2 distribution.

    See Also
    --------
    ks_2samp, anderson_ksamp

    Notes
    -----
    Testing whether two samples are generated by the same underlying
    distribution is a classical question in statistics. A widely used test is
    the Kolmogorov-Smirnov (KS) test which relies on the empirical
    distribution function. Epps and Singleton introduce a test based on the
    empirical characteristic function in [1]_.

    One advantage of the ES test compared to the KS test is that it does
    not assume a continuous distribution. In [1]_, the authors conclude
    that the test also has a higher power than the KS test in many
    examples. They recommend the use of the ES test for discrete samples as
    well as continuous samples with at least 25 observations each, whereas
    `anderson_ksamp` is recommended for smaller sample sizes in the
    continuous case.

    The p-value is computed from the asymptotic distribution of the test
    statistic which follows a `chi2` distribution. If the sample size of both
    `x` and `y` is below 25, the small sample correction proposed in [1]_ is
    applied to the test statistic.

    The default values of `t` are determined in [1]_ by considering
    various distributions and finding good values that lead to a high power
    of the test in general. Table III in [1]_ gives the optimal values for
    the distributions tested in that study. The values of `t` are scaled by
    the semi-interquartile range in the implementation, see [1]_.

    References
    ----------
    .. [1] T. W. Epps and K. J. Singleton, "An omnibus test for the two-sample
       problem using the empirical characteristic function", Journal of
       Statistical Computation and Simulation 26, p. 177--203, 1986.
    .. [2] S. J. Goerg and J. Kaiser, "Nonparametric testing of distributions
       - the Epps-Singleton two-sample test using the empirical characteristic
       function", The Stata Journal 9(3), p. 454--465, 2009.
    """
    x, y, t = np.asarray(x), np.asarray(y), np.asarray(t)
    # check if x and y are valid inputs
    if x.ndim > 1:
        raise ValueError(f'x must be 1d, but x.ndim equals {x.ndim}.')
    if y.ndim > 1:
        raise ValueError(f'y must be 1d, but y.ndim equals {y.ndim}.')
    nx, ny = len(x), len(y)
    if (nx < 5) or (ny < 5):
        raise ValueError('x and y should have at least 5 elements, but len(x) '
                         '= {} and len(y) = {}.'.format(nx, ny))
    if not np.isfinite(x).all():
        raise ValueError('x must not contain nonfinite values.')
    if not np.isfinite(y).all():
        raise ValueError('y must not contain nonfinite values.')
    n = nx + ny

    # check if t is valid
    if t.ndim > 1:
        raise ValueError(f't must be 1d, but t.ndim equals {t.ndim}.')
    if np.less_equal(t, 0).any():
        raise ValueError('t must contain positive elements only.')

    # rescale t with semi-iqr as proposed in [1]; import iqr here to avoid
    # circular import
    from scipy.stats import iqr
    sigma = iqr(np.hstack((x, y))) / 2
    ts = np.reshape(t, (-1, 1)) / sigma

    # covariance estimation of ES test
    gx = np.vstack((np.cos(ts*x), np.sin(ts*x))).T  # shape = (nx, 2*len(t))
    gy = np.vstack((np.cos(ts*y), np.sin(ts*y))).T
    cov_x = np.cov(gx.T, bias=True)  # the test uses biased cov-estimate
    cov_y = np.cov(gy.T, bias=True)
    est_cov = (n/nx)*cov_x + (n/ny)*cov_y
    est_cov_inv = np.linalg.pinv(est_cov)
    r = np.linalg.matrix_rank(est_cov_inv)
    if r < 2*len(t):
        warnings.warn('Estimated covariance matrix does not have full rank. '
                      'This indicates a bad choice of the input t and the '
                      'test might not be consistent.')  # see p. 183 in [1]_

    # compute test statistic w distributed asympt. as chisquare with df=r
    g_diff = np.mean(gx, axis=0) - np.mean(gy, axis=0)
    w = n*np.dot(g_diff.T, np.dot(est_cov_inv, g_diff))

    # apply small-sample correction
    if (max(nx, ny) < 25):
        corr = 1.0/(1.0 + n**(-0.45) + 10.1*(nx**(-1.7) + ny**(-1.7)))
        w = corr * w

    p = chi2.sf(w, r)

    return Epps_Singleton_2sampResult(w, p)
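The small-sample correction applied above always shrinks the statistic toward zero, making the test more conservative for small samples. A minimal standalone sketch of the correction factor (pure NumPy; the function name and the sample sizes are illustrative, not part of the module):

```python
import numpy as np


def es_small_sample_corr(nx, ny):
    """Small-sample correction factor from Epps & Singleton (1986),
    applied in `epps_singleton_2samp` when max(nx, ny) < 25."""
    n = nx + ny
    return 1.0 / (1.0 + n**(-0.45) + 10.1 * (nx**(-1.7) + ny**(-1.7)))


# The factor lies strictly in (0, 1), so the corrected statistic
# w' = corr * w is always smaller than the uncorrected w, and it
# approaches 1 as the sample sizes grow.
corr = es_small_sample_corr(10, 12)
```

Because the factor increases with the sample sizes, the correction fades out smoothly as the asymptotic regime is reached.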


def poisson_means_test(k1, n1, k2, n2, *, diff=0, alternative='two-sided'):
    r"""
    Performs the Poisson means test, AKA the "E-test".

    This is a test of the null hypothesis that the difference between means of
    two Poisson distributions is `diff`. The samples are provided as the
    number of events `k1` and `k2` observed within measurement intervals
    (e.g. of time, space, number of observations) of sizes `n1` and `n2`.

    Parameters
    ----------
    k1 : int
        Number of events observed from distribution 1.
    n1 : float
        Size of sample from distribution 1.
    k2 : int
        Number of events observed from distribution 2.
    n2 : float
        Size of sample from distribution 2.
    diff : float, default=0
        The hypothesized difference in means between the distributions
        underlying the samples.
    alternative : {'two-sided', 'less', 'greater'}, optional
        Defines the alternative hypothesis.
        The following options are available (default is 'two-sided'):

        * 'two-sided': the difference between distribution means is not
          equal to `diff`
        * 'less': the difference between distribution means is less than
          `diff`
        * 'greater': the difference between distribution means is greater
          than `diff`

    Returns
    -------
    statistic : float
        The test statistic (see [1]_ equation 3.3).
    pvalue : float
        The probability of achieving such an extreme value of the test
        statistic under the null hypothesis.

    Notes
    -----
    Let:

    .. math:: X_1 \sim \mbox{Poisson}(\mathtt{n1}\lambda_1)

    be a random variable independent of

    .. math:: X_2 \sim \mbox{Poisson}(\mathtt{n2}\lambda_2)

    and let ``k1`` and ``k2`` be the observed values of :math:`X_1`
    and :math:`X_2`, respectively. Then `poisson_means_test` uses the number
    of observed events ``k1`` and ``k2`` from samples of size ``n1`` and
    ``n2``, respectively, to test the null hypothesis that

    .. math::
       H_0: \lambda_1 - \lambda_2 = \mathtt{diff}

    A benefit of the E-test is that it has good power for small sample sizes,
    which can reduce sampling costs [1]_. It has been evaluated and determined
    to be more powerful than the comparable C-test, sometimes referred to as
    the Poisson exact test.

    References
    ----------
    .. [1] Krishnamoorthy, K., & Thomson, J. (2004). A more powerful test for
       comparing two Poisson means. Journal of Statistical Planning and
       Inference, 119(1), 23-35.
    .. [2] Przyborowski, J., & Wilenski, H. (1940). Homogeneity of results in
       testing samples from Poisson series: With an application to testing
       clover seed for dodder. Biometrika, 31(3/4), 313-323.

    Examples
    --------
    Suppose that a gardener wishes to test the number of dodder (weed) seeds
    in a sack of clover seeds that they buy from a seed company. It has
    previously been established that the number of dodder seeds in clover
    follows the Poisson distribution.

    A 100 gram sample is drawn from the sack before being shipped to the
    gardener. The sample is analyzed, and it is found to contain no dodder
    seeds; that is, `k1` is 0. However, upon arrival, the gardener draws
    another 100 gram sample from the sack. This time, three dodder seeds are
    found in the sample; that is, `k2` is 3. The gardener would like to
    know if the difference is significant and not due to chance. The
    null hypothesis is that the difference between the two samples is merely
    due to chance, or that :math:`\lambda_1 - \lambda_2 = \mathtt{diff}`
    where :math:`\mathtt{diff} = 0`. The alternative hypothesis is that the
    difference is not due to chance, or :math:`\lambda_1 - \lambda_2 \ne 0`.
    The gardener selects a significance level of 5% to reject the null
    hypothesis in favor of the alternative [2]_.

    >>> import scipy.stats as stats
    >>> res = stats.poisson_means_test(0, 100, 3, 100)
    >>> res.statistic, res.pvalue
    (-1.7320508075688772, 0.08837900929018157)

    The p-value is .088, indicating a near 9% chance of observing a value of
    the test statistic under the null hypothesis. This exceeds 5%, so the
    gardener does not reject the null hypothesis as the difference cannot be
    regarded as significant at this level.
    """
    _poisson_means_test_iv(k1, n1, k2, n2, diff, alternative)

    # "for a given k_1 and k_2, an estimate of \lambda_2 is given by" [1] (3.4)
    lmbd_hat2 = ((k1 + k2) / (n1 + n2) - diff * n1 / (n1 + n2))

    # "\hat{\lambda_{2k}} may be less than or equal to zero ... and in this
    # case the null hypothesis cannot be rejected ... [and] it is not necessary
    # to compute the p-value". [1] page 26 below eq. (3.6).
    if lmbd_hat2 <= 0:
        return _stats_py.SignificanceResult(0, 1)

    # the unbiased variance estimate [1] (3.2)
    var = k1 / (n1 ** 2) + k2 / (n2 ** 2)

    # the _observed_ pivot statistic from the input. It follows the
    # unnumbered equation following equation (3.3). This is used later in
    # comparison with the computed pivot statistics in an indicator function.
    t_k1k2 = (k1 / n1 - k2 / n2 - diff) / np.sqrt(var)

    # equation (3.5) of [1] is lengthy, so it is broken into several parts,
    # beginning here. Note that the Poisson probability mass function is
    # exp(-\mu)*\mu^k/k!, and it is evaluated below with shape parameter
    # \mu, denoted here as nlmbd_hat*. The strategy for evaluating the
    # double summation in (3.5) is to create two arrays of the values of
    # the two products inside the summation, broadcast them together into
    # a matrix, and then sum across the entire matrix.

    # compute constants (as seen in the first and second separated products in
    # (3.5)). (This is the shape (\mu) parameter of the poisson distribution.)
    nlmbd_hat1 = n1 * (lmbd_hat2 + diff)
    nlmbd_hat2 = n2 * lmbd_hat2

    # determine summation bounds for tail ends of distribution rather than
    # summing to infinity. `x1*` is for the outer sum and `x2*` is the inner
    # sum.
    x1_lb, x1_ub = distributions.poisson.ppf([1e-10, 1 - 1e-16], nlmbd_hat1)
    x2_lb, x2_ub = distributions.poisson.ppf([1e-10, 1 - 1e-16], nlmbd_hat2)

    # construct arrays to function as the x_1 and x_2 counters on the
    # summation in (3.5). `x1` is in columns and `x2` is in rows to allow
    # for broadcasting.
    x1 = np.arange(x1_lb, x1_ub + 1)
    x2 = np.arange(x2_lb, x2_ub + 1)[:, None]

    # these are the two products in equation (3.5) with `prob_x1` being the
    # first (left side) and `prob_x2` being the second (right side). (To
    # make as clear as possible: the 1st contains a "+ d" term, the 2nd does
    # not.)
    prob_x1 = distributions.poisson.pmf(x1, nlmbd_hat1)
    prob_x2 = distributions.poisson.pmf(x2, nlmbd_hat2)

    # compute constants for use in the "pivot statistic" per the
    # unnumbered equation following (3.3).
    lmbd_x1 = x1 / n1
    lmbd_x2 = x2 / n2
    lmbds_diff = lmbd_x1 - lmbd_x2 - diff
    var_x1x2 = lmbd_x1 / n1 + lmbd_x2 / n2

    # this is the 'pivot statistic' for use in the indicator of the summation
    # (left side of "I[.]").
    with np.errstate(invalid='ignore', divide='ignore'):
        t_x1x2 = lmbds_diff / np.sqrt(var_x1x2)

    # `indicator` implements the "I[.] ... the indicator function" per
    # the paragraph following equation (3.5).
    if alternative == 'two-sided':
        indicator = np.abs(t_x1x2) >= np.abs(t_k1k2)
    elif alternative == 'less':
        indicator = t_x1x2 <= t_k1k2
    else:
        indicator = t_x1x2 >= t_k1k2

    # multiply all combinations of the products together, exclude terms
    # based on the `indicator` and then sum. (3.5)
    pvalue = np.sum((prob_x1 * prob_x2)[indicator])
    return _stats_py.SignificanceResult(t_k1k2, pvalue)
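The observed pivot statistic from the unnumbered equation following (3.3) can be reproduced by hand. A sketch using the values from the docstring's gardener example (k1=0, n1=100, k2=3, n2=100, diff=0):

```python
import numpy as np

# observed counts and interval sizes from the docstring example
k1, n1, k2, n2, diff = 0, 100, 3, 100, 0

# unbiased variance estimate, eq. (3.2) of Krishnamoorthy & Thomson (2004)
var = k1 / n1**2 + k2 / n2**2

# observed pivot statistic: difference of estimated rates, standardized
t_k1k2 = (k1 / n1 - k2 / n2 - diff) / np.sqrt(var)
# here t_k1k2 = -0.03 / sqrt(3e-4) = -sqrt(3) ≈ -1.7320508
```

This matches the `statistic` reported by `poisson_means_test(0, 100, 3, 100)` in the docstring; the p-value additionally requires the double summation of equation (3.5).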


def _poisson_means_test_iv(k1, n1, k2, n2, diff, alternative):
    """Check for valid types and values of input to `poisson_means_test`."""
    if k1 != int(k1) or k2 != int(k2):
        raise TypeError('`k1` and `k2` must be integers.')

    count_err = '`k1` and `k2` must be greater than or equal to 0.'
    if k1 < 0 or k2 < 0:
        raise ValueError(count_err)

    if n1 <= 0 or n2 <= 0:
        raise ValueError('`n1` and `n2` must be greater than 0.')

    if diff < 0:
        raise ValueError('diff must be greater than or equal to 0.')

    alternatives = {'two-sided', 'less', 'greater'}
    if alternative.lower() not in alternatives:
        raise ValueError(f"Alternative must be one of '{alternatives}'.")


class CramerVonMisesResult:
    def __init__(self, statistic, pvalue):
        self.statistic = statistic
        self.pvalue = pvalue

    def __repr__(self):
        return (f"{self.__class__.__name__}(statistic={self.statistic}, "
                f"pvalue={self.pvalue})")


def _psi1_mod(x):
    """
    psi1 is defined in equation 1.10 in Csörgő, S. and Faraway, J. (1996).
    This implements a modified version by excluding the term V(x) / 12
    (here: _cdf_cvm_inf(x) / 12) to avoid evaluating _cdf_cvm_inf(x)
    twice in _cdf_cvm.

    Implementation based on MAPLE code of Julian Faraway and R code of the
    function pCvM in the package goftest (v1.1.1), permission granted
    by Adrian Baddeley. Main difference in the implementation: the code
    here keeps adding terms of the series until the terms are small enough.
    """

    def _ed2(y):
        z = y**2 / 4
        b = kv(1/4, z) + kv(3/4, z)
        return np.exp(-z) * (y/2)**(3/2) * b / np.sqrt(np.pi)

    def _ed3(y):
        z = y**2 / 4
        c = np.exp(-z) / np.sqrt(np.pi)
        return c * (y/2)**(5/2) * (2*kv(1/4, z) + 3*kv(3/4, z) - kv(5/4, z))

    def _Ak(k, x):
        m = 2*k + 1
        sx = 2 * np.sqrt(x)
        y1 = x**(3/4)
        y2 = x**(5/4)

        e1 = m * gamma(k + 1/2) * _ed2((4 * k + 3)/sx) / (9 * y1)
        e2 = gamma(k + 1/2) * _ed3((4 * k + 1) / sx) / (72 * y2)
        e3 = 2 * (m + 2) * gamma(k + 3/2) * _ed3((4 * k + 5) / sx) / (12 * y2)
        e4 = 7 * m * gamma(k + 1/2) * _ed2((4 * k + 1) / sx) / (144 * y1)
        e5 = 7 * m * gamma(k + 1/2) * _ed2((4 * k + 5) / sx) / (144 * y1)

        return e1 + e2 + e3 + e4 + e5

    x = np.asarray(x)
    tot = np.zeros_like(x, dtype='float')
    cond = np.ones_like(x, dtype='bool')
    k = 0
    while np.any(cond):
        z = -_Ak(k, x[cond]) / (np.pi * gamma(k + 1))
        tot[cond] = tot[cond] + z
        cond[cond] = np.abs(z) >= 1e-7
        k += 1

    return tot


def _cdf_cvm_inf(x):
    """
    Calculate the cdf of the Cramér-von Mises statistic (infinite sample size).

    See equation 1.2 in Csörgő, S. and Faraway, J. (1996).

    Implementation based on MAPLE code of Julian Faraway and R code of the
    function pCvM in the package goftest (v1.1.1), permission granted
    by Adrian Baddeley. Main difference in the implementation: the code
    here keeps adding terms of the series until the terms are small enough.

    The function is not expected to be accurate for large values of x, say
    x > 4, when the cdf is very close to 1.
    """
    x = np.asarray(x)

    def term(x, k):
        # this expression can be found in [2], second line of (1.3)
        u = np.exp(gammaln(k + 0.5) - gammaln(k+1)) / (np.pi**1.5 * np.sqrt(x))
        y = 4*k + 1
        q = y**2 / (16*x)
        b = kv(0.25, q)
        return u * np.sqrt(y) * np.exp(-q) * b

    tot = np.zeros_like(x, dtype='float')
    cond = np.ones_like(x, dtype='bool')
    k = 0
    while np.any(cond):
        z = term(x[cond], k)
        tot[cond] = tot[cond] + z
        cond[cond] = np.abs(z) >= 1e-7
        k += 1

    return tot


def _cdf_cvm(x, n=None):
    """
    Calculate the cdf of the Cramér-von Mises statistic for a finite sample
    size n. If n is None, use the asymptotic cdf (n=inf).

    See equation 1.8 in Csörgő, S. and Faraway, J. (1996) for finite samples,
    1.2 for the asymptotic cdf.

    The function is not expected to be accurate for large values of x, say
    x > 2, when the cdf is very close to 1 and it might return values > 1
    in that case, e.g. _cdf_cvm(2.0, 12) = 1.0000027556716846. Moreover, it
    is not accurate for small values of n, especially close to the bounds of
    the distribution's domain, [1/(12*n), n/3], where the value jumps to 0
    and 1, respectively. These are limitations of the approximation by Csörgő
    and Faraway (1996) implemented in this function.
    """
    x = np.asarray(x)
    if n is None:
        y = _cdf_cvm_inf(x)
    else:
        # support of the test statistic is [1/(12*n), n/3], see 1.1 in [2]
        y = np.zeros_like(x, dtype='float')
        sup = (1./(12*n) < x) & (x < n/3.)
        # note: _psi1_mod does not include the term _cdf_cvm_inf(x) / 12
        # therefore, we need to add it here
        y[sup] = _cdf_cvm_inf(x[sup]) * (1 + 1./(12*n)) + _psi1_mod(x[sup]) / n
        y[x >= n/3] = 1

    if y.ndim == 0:
        return y[()]
    return y


def cramervonmises(rvs, cdf, args=()):
    """Perform the one-sample Cramér-von Mises test for goodness of fit.

    This performs a test of the goodness of fit of a cumulative distribution
    function (cdf) :math:`F` compared to the empirical distribution function
    :math:`F_n` of observed random variates :math:`X_1, ..., X_n` that are
    assumed to be independent and identically distributed ([1]_).
    The null hypothesis is that the :math:`X_i` have cumulative distribution
    :math:`F`.

    Parameters
    ----------
    rvs : array_like
        A 1-D array of observed values of the random variables :math:`X_i`.
    cdf : str or callable
        The cumulative distribution function :math:`F` to test the
        observations against. If a string, it should be the name of a
        distribution in `scipy.stats`. If a callable, that callable is used
        to calculate the cdf: ``cdf(x, *args) -> float``.
    args : tuple, optional
        Distribution parameters. These are assumed to be known; see Notes.

    Returns
    -------
    res : object with attributes
        statistic : float
            Cramér-von Mises statistic.
        pvalue : float
            The p-value.

    See Also
    --------
    kstest, cramervonmises_2samp

    Notes
    -----
    .. versionadded:: 1.6.0

    The p-value relies on the approximation given by equation 1.8 in [2]_.
    It is important to keep in mind that the p-value is only accurate if
    one tests a simple hypothesis, i.e. the parameters of the reference
    distribution are known. If the parameters are estimated from the data
    (composite hypothesis), the computed p-value is not reliable.

    References
    ----------
    .. [1] Cramér-von Mises criterion, Wikipedia,
       https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93von_Mises_criterion
    .. [2] Csörgő, S. and Faraway, J. (1996). The Exact and Asymptotic
       Distribution of Cramér-von Mises Statistics. Journal of the
       Royal Statistical Society, pp. 221-234.

    Examples
    --------
    Suppose we wish to test whether data generated by ``scipy.stats.norm.rvs``
    were, in fact, drawn from the standard normal distribution. We choose a
    significance level of ``alpha=0.05``.

    >>> import numpy as np
    >>> from scipy import stats
    >>> rng = np.random.default_rng(165417232101553420507139617764912913465)
    >>> x = stats.norm.rvs(size=500, random_state=rng)
    >>> res = stats.cramervonmises(x, 'norm')
    >>> res.statistic, res.pvalue
    (0.1072085112565724, 0.5508482238203407)

    The p-value exceeds our chosen significance level, so we do not
    reject the null hypothesis that the observed sample is drawn from the
    standard normal distribution.

    Now suppose we wish to check whether the same sample shifted by 2.1 is
    consistent with being drawn from a normal distribution with a mean of 2.

    >>> y = x + 2.1
    >>> res = stats.cramervonmises(y, 'norm', args=(2,))
    >>> res.statistic, res.pvalue
    (0.8364446265294695, 0.00596286797008283)

    Here we have used the `args` keyword to specify the mean (``loc``)
    of the normal distribution to test the data against. This is equivalent
    to the following, in which we create a frozen normal distribution with
    mean 2, then pass its ``cdf`` method as an argument.

    >>> frozen_dist = stats.norm(loc=2)
    >>> res = stats.cramervonmises(y, frozen_dist.cdf)
    >>> res.statistic, res.pvalue
    (0.8364446265294695, 0.00596286797008283)

    In either case, we would reject the null hypothesis that the observed
    sample is drawn from a normal distribution with a mean of 2 (and default
    variance of 1) because the p-value is less than our chosen
    significance level.
    """
    if isinstance(cdf, str):
        cdf = getattr(distributions, cdf).cdf

    vals = np.sort(np.asarray(rvs))

    if vals.size <= 1:
        raise ValueError('The sample must contain at least two observations.')
    if vals.ndim > 1:
        raise ValueError('The sample must be one-dimensional.')

    n = len(vals)
    cdfvals = cdf(vals, *args)

    u = (2*np.arange(1, n+1) - 1)/(2*n)
    w = 1/(12*n) + np.sum((u - cdfvals)**2)

    # avoid small negative values that can occur due to the approximation
    p = max(0, 1. - _cdf_cvm(w, n))

    return CramerVonMisesResult(statistic=w, pvalue=p)
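The statistic computed above is w = 1/(12n) + Σ(u_i - F(x_i))² with u_i = (2i-1)/(2n). A minimal hand check against the Uniform(0, 1) cdf, where F(x) = x (the three observations are assumed toy data, not from the docstring):

```python
import numpy as np

# three sorted observations, tested against the Uniform(0, 1) cdf F(x) = x
vals = np.array([0.25, 0.5, 0.75])
n = len(vals)

u = (2 * np.arange(1, n + 1) - 1) / (2 * n)   # u_i = (2i - 1) / (2n)
w = 1 / (12 * n) + np.sum((u - vals) ** 2)
# the middle term vanishes (u_2 = 0.5 = vals[1]) and the outer two
# contribute (1/12)^2 each, so w = 1/36 + 2/144 = 1/24
```

The p-value would then follow by evaluating 1 - _cdf_cvm(w, n), which is where the Csörgő-Faraway approximation enters.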


def _get_wilcoxon_distr(n):
    """
    Distribution of probability of the Wilcoxon ranksum statistic r_plus (sum
    of ranks of positive differences).
    Returns an array with the probabilities of all the possible ranks
    r = 0, ..., n*(n+1)/2
    """
    c = np.ones(1, dtype=np.double)
    for k in range(1, n + 1):
        prev_c = c
        c = np.zeros(k * (k + 1) // 2 + 1, dtype=np.double)
        m = len(prev_c)
        c[:m] = prev_c * 0.5
        c[-m:] += prev_c * 0.5
    return c
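The convolution loop above can be checked by hand for tiny n. For n = 2 the four sign patterns are equally likely, so r_plus takes each value in {0, 1, 2, 3} with probability 1/4. A standalone sketch that mirrors the loop (the function name is illustrative):

```python
import numpy as np


def wilcoxon_distr(n):
    # Build the exact null distribution of r_plus by convolving, for each
    # rank k = 1..n, a fair coin that adds either 0 or k to the sum.
    c = np.ones(1, dtype=np.double)
    for k in range(1, n + 1):
        prev_c = c
        c = np.zeros(k * (k + 1) // 2 + 1, dtype=np.double)
        m = len(prev_c)
        c[:m] = prev_c * 0.5    # rank k gets a negative sign (adds 0)
        c[-m:] += prev_c * 0.5  # rank k gets a positive sign (adds k)
    return c


probs = wilcoxon_distr(2)  # n=2: r_plus in {0, 1, 2, 3}, each prob 1/4
```

For any n the probabilities sum to 1 and the distribution is symmetric about n*(n+1)/4, which makes for an easy sanity check.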


def _get_wilcoxon_distr2(n):
    """
    Distribution of probability of the Wilcoxon ranksum statistic r_plus (sum
    of ranks of positive differences).
    Returns an array with the probabilities of all the possible ranks
    r = 0, ..., n*(n+1)/2
    This is a slower reference function.

    References
    ----------
    .. [1] Harris T, Hardin JW. Exact Wilcoxon Signed-Rank and Wilcoxon
       Mann-Whitney Ranksum Tests. The Stata Journal. 2013;13(2):337-343.
    """
    ai = np.arange(1, n+1)[:, None]
    t = n*(n+1)/2
    q = 2*t
    j = np.arange(q)
    theta = 2*np.pi/q*j
    phi_sp = np.prod(np.cos(theta*ai), axis=0)
    phi_s = np.exp(1j*theta*t) * phi_sp
    p = np.real(ifft(phi_s))
    res = np.zeros(int(t)+1)
    res[:-1:] = p[::2]
    res[0] /= 2
    res[-1] = res[0]
    return res


def _tau_b(A):
    """Calculate Kendall's tau-b and p-value from contingency table."""
    # See [2] 2.2 and 4.2

    # contingency table must be truly 2D
    if A.shape[0] == 1 or A.shape[1] == 1:
        return np.nan, np.nan

    NA = A.sum()
    PA = _P(A)
    QA = _Q(A)
    Sri2 = (A.sum(axis=1)**2).sum()
    Scj2 = (A.sum(axis=0)**2).sum()
    denominator = (NA**2 - Sri2)*(NA**2 - Scj2)

    tau = (PA-QA)/(denominator)**0.5

    numerator = 4*(_a_ij_Aij_Dij2(A) - (PA - QA)**2 / NA)
    s02_tau_b = numerator/denominator
    if s02_tau_b == 0:  # Avoid divide by zero
        return tau, 0
    Z = tau/s02_tau_b**0.5
    p = 2*norm.sf(abs(Z))  # 2-sided p-value

    return tau, p


def _somers_d(A, alternative='two-sided'):
    """Calculate Somers' D and p-value from contingency table."""
    # See [3] page 1740

    # contingency table must be truly 2D
    if A.shape[0] <= 1 or A.shape[1] <= 1:
        return np.nan, np.nan

    NA = A.sum()
    NA2 = NA**2
    PA = _P(A)
    QA = _Q(A)
    Sri2 = (A.sum(axis=1)**2).sum()

    d = (PA - QA)/(NA2 - Sri2)

    S = _a_ij_Aij_Dij2(A) - (PA-QA)**2/NA

    with np.errstate(divide='ignore'):
        Z = (PA - QA)/(4*(S))**0.5

    _, p = scipy.stats._stats_py._normtest_finish(Z, alternative)

    return d, p
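The pair counts `_P` and `_Q` come from compiled Pythran helpers, but the point estimate itself can be reproduced with a naive pair count. A sketch using the contingency table from the `somersd` docstring example (hotel cleanliness vs. customer satisfaction), where concordant pairs lie below-right of a cell and discordant pairs below-left:

```python
import numpy as np

A = np.array([[27, 25, 14, 7, 0],
              [7, 14, 18, 35, 12],
              [1, 3, 2, 7, 17]])

C = D = 0  # concordant / discordant pairs, each unordered pair counted once
r, s = A.shape
for i in range(r):
    for j in range(s):
        C += A[i, j] * A[i + 1:, j + 1:].sum()  # cells below and to the right
        D += A[i, j] * A[i + 1:, :j].sum()      # cells below and to the left

n = A.sum()
# D(Y|X) = (P - Q) / (n^2 - sum of squared row totals), where the
# ordered-pair counts are P = 2*C and Q = 2*D
d = 2 * (C - D) / (n**2 - (A.sum(axis=1)**2).sum())
```

This reproduces the statistic 0.6032766111513396 reported in the docstring; the asymptotic p-value additionally requires the variance term `_a_ij_Aij_Dij2`.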


@dataclass
class SomersDResult:
    statistic: float
    pvalue: float
    table: np.ndarray


def somersd(x, y=None, alternative='two-sided'):
    r"""Calculates Somers' D, an asymmetric measure of ordinal association.

    Like Kendall's :math:`\tau`, Somers' :math:`D` is a measure of the
    correspondence between two rankings. Both statistics consider the
    difference between the number of concordant and discordant pairs in two
    rankings :math:`X` and :math:`Y`, and both are normalized such that values
    close to 1 indicate strong agreement and values close to -1 indicate
    strong disagreement. They differ in how they are normalized. To show the
    relationship, Somers' :math:`D` can be defined in terms of Kendall's
    :math:`\tau_a`:

    .. math::
        D(Y|X) = \frac{\tau_a(X, Y)}{\tau_a(X, X)}

    Suppose the first ranking :math:`X` has :math:`r` distinct ranks and the
    second ranking :math:`Y` has :math:`s` distinct ranks. These two lists of
    :math:`n` rankings can also be viewed as an :math:`r \times s` contingency
    table in which element :math:`i, j` is the number of rank pairs with rank
    :math:`i` in ranking :math:`X` and rank :math:`j` in ranking :math:`Y`.
    Accordingly, `somersd` also allows the input data to be supplied as a
    single, 2D contingency table instead of as two separate, 1D rankings.

    Note that the definition of Somers' :math:`D` is asymmetric: in general,
    :math:`D(Y|X) \neq D(X|Y)`. ``somersd(x, y)`` calculates Somers'
    :math:`D(Y|X)`: the "row" variable :math:`X` is treated as an independent
    variable, and the "column" variable :math:`Y` is dependent. For Somers'
    :math:`D(X|Y)`, swap the input lists or transpose the input table.

    Parameters
    ----------
    x : array_like
        1D array of rankings, treated as the (row) independent variable.
        Alternatively, a 2D contingency table.
    y : array_like, optional
        If `x` is a 1D array of rankings, `y` is a 1D array of rankings of the
        same length, treated as the (column) dependent variable.
        If `x` is 2D, `y` is ignored.
    alternative : {'two-sided', 'less', 'greater'}, optional
        Defines the alternative hypothesis. Default is 'two-sided'.
        The following options are available:

        * 'two-sided': the rank correlation is nonzero
        * 'less': the rank correlation is negative (less than zero)
        * 'greater': the rank correlation is positive (greater than zero)

    Returns
    -------
    res : SomersDResult
        A `SomersDResult` object with the following fields:

        statistic : float
            The Somers' :math:`D` statistic.
        pvalue : float
            The p-value for a hypothesis test whose null
            hypothesis is an absence of association, :math:`D=0`.
            See notes for more information.
        table : 2D array
            The contingency table formed from rankings `x` and `y` (or the
            provided contingency table, if `x` is a 2D array)

    See Also
    --------
    kendalltau : Calculates Kendall's tau, another correlation measure.
    weightedtau : Computes a weighted version of Kendall's tau.
    spearmanr : Calculates a Spearman rank-order correlation coefficient.
    pearsonr : Calculates a Pearson correlation coefficient.

    Notes
    -----
    This function follows the contingency table approach of [2]_ and
    [3]_. *p*-values are computed based on an asymptotic approximation of
    the test statistic distribution under the null hypothesis :math:`D=0`.

    Theoretically, hypothesis tests based on Kendall's :math:`\tau` and
    Somers' :math:`D` should be identical.
    However, the *p*-values returned by `kendalltau` are based
    on the null hypothesis of *independence* between :math:`X` and :math:`Y`
    (i.e. the population from which pairs in :math:`X` and :math:`Y` are
    sampled contains equal numbers of all possible pairs), which is more
    specific than the null hypothesis :math:`D=0` used here. If the null
    hypothesis of independence is desired, it is acceptable to use the
    *p*-value returned by `kendalltau` with the statistic returned by
    `somersd` and vice versa. For more information, see [2]_.

    Contingency tables are formatted according to the convention used by
    SAS and R: the first ranking supplied (``x``) is the "row" variable, and
    the second ranking supplied (``y``) is the "column" variable. This is
    opposite the convention of Somers' original paper [1]_.

    References
    ----------
    .. [1] Robert H. Somers, "A New Asymmetric Measure of Association for
           Ordinal Variables", *American Sociological Review*, Vol. 27, No. 6,
           pp. 799--811, 1962.
    .. [2] Morton B. Brown and Jacqueline K. Benedetti, "Sampling Behavior of
           Tests for Correlation in Two-Way Contingency Tables", *Journal of
           the American Statistical Association* Vol. 72, No. 358, pp.
           309--315, 1977.
    .. [3] SAS Institute, Inc., "The FREQ Procedure (Book Excerpt)",
           *SAS/STAT 9.2 User's Guide, Second Edition*, SAS Publishing, 2009.
    .. [4] Laerd Statistics, "Somers' d using SPSS Statistics", *SPSS
           Statistics Tutorials and Statistical Guides*,
           https://statistics.laerd.com/spss-tutorials/somers-d-using-spss-statistics.php,
           Accessed July 31, 2020.

    Examples
    --------
    We calculate Somers' D for the example given in [4]_, in which a hotel
    chain owner seeks to determine the association between hotel room
    cleanliness and customer satisfaction. The independent variable, hotel
    room cleanliness, is ranked on an ordinal scale: "below average (1)",
    "average (2)", or "above average (3)". The dependent variable, customer
    satisfaction, is ranked on a second scale: "very dissatisfied (1)",
    "moderately dissatisfied (2)", "neither dissatisfied nor satisfied (3)",
    "moderately satisfied (4)", or "very satisfied (5)". 189 customers
    respond to the survey, and the results are cast into a contingency table
    with the hotel room cleanliness as the "row" variable and customer
    satisfaction as the "column" variable.

    +-----+-----+-----+-----+-----+-----+
    |     | (1) | (2) | (3) | (4) | (5) |
    +=====+=====+=====+=====+=====+=====+
    | (1) | 27  | 25  | 14  | 7   | 0   |
    +-----+-----+-----+-----+-----+-----+
    | (2) | 7   | 14  | 18  | 35  | 12  |
    +-----+-----+-----+-----+-----+-----+
    | (3) | 1   | 3   | 2   | 7   | 17  |
    +-----+-----+-----+-----+-----+-----+

    For example, 27 customers assigned their room a cleanliness ranking of
    "below average (1)" and a corresponding satisfaction of "very
    dissatisfied (1)". We perform the analysis as follows.

    >>> from scipy.stats import somersd
    >>> table = [[27, 25, 14, 7, 0], [7, 14, 18, 35, 12], [1, 3, 2, 7, 17]]
    >>> res = somersd(table)
    >>> res.statistic
    0.6032766111513396
    >>> res.pvalue
    1.0007091191074533e-27

    The value of the Somers' D statistic is approximately 0.6, indicating
    a positive correlation between room cleanliness and customer satisfaction
    in the sample.

    The *p*-value is very small, indicating a very small probability of
    observing such an extreme value of the statistic under the null
    hypothesis that the statistic of the entire population (from which
    our sample of 189 customers is drawn) is zero. This supports the
    alternative hypothesis that the true value of Somers' D for the population
    is nonzero.
    """
    x, y = np.array(x), np.array(y)
    if x.ndim == 1:
        if x.size != y.size:
            raise ValueError("Rankings must be of equal length.")
        table = scipy.stats.contingency.crosstab(x, y)[1]
    elif x.ndim == 2:
if np.any(x < 0):
raise ValueError("All elements of the contingency table must be "
"non-negative.")
if np.any(x != x.astype(int)):
raise ValueError("All elements of the contingency table must be "
"integer.")
if x.nonzero()[0].size < 2:
raise ValueError("At least two elements of the contingency table "
"must be nonzero.")
table = x
else:
raise ValueError("x must be either a 1D or 2D array")
# The table type is converted to a float to avoid an integer overflow
d, p = _somers_d(table.astype(float), alternative)
# add alias for consistency with other correlation functions
res = SomersDResult(d, p, table)
res.correlation = d
return res
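# As a quick illustration of the two call forms accepted above (1-D rankings
# vs. a 2-D contingency table), here is a minimal sketch; the rankings are
# made-up illustrative values, and the table is the one from the docstring
# example:
#
#     import numpy as np
#     from scipy.stats import somersd
#
#     # 1-D call form: two rankings of equal length.
#     # With no ties and one discordant pair out of six, D = (5 - 1) / 6.
#     x = [1, 2, 3, 4]
#     y = [1, 3, 2, 4]
#     res = somersd(x, y)
#
#     # 2-D call form: a contingency table ("row" variable x, "column"
#     # variable y), as in the hotel-cleanliness example above.
#     table = [[27, 25, 14, 7, 0], [7, 14, 18, 35, 12], [1, 3, 2, 7, 17]]
#     res_table = somersd(table)

```python
import numpy as np
from scipy.stats import somersd

# 1-D call form: two rankings of equal length.
# With no ties and one discordant pair out of six, D = (5 - 1) / 6.
x = [1, 2, 3, 4]
y = [1, 3, 2, 4]
res = somersd(x, y)

# 2-D call form: a contingency table ("row" variable x, "column" variable y),
# as in the hotel-cleanliness example in the docstring.
table = [[27, 25, 14, 7, 0], [7, 14, 18, 35, 12], [1, 3, 2, 7, 17]]
res_table = somersd(table)
```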
# This could be combined with `_all_partitions` in `_resampling.py`
def _all_partitions(nx, ny):
"""
Partition a set of indices into two fixed-length sets in all possible ways

Partition a set of indices 0 ... nx + ny - 1 into two sets of length nx and
ny in all possible ways (ignoring order of elements).
"""
z = np.arange(nx+ny)
for c in combinations(z, nx):
x = np.array(c)
mask = np.ones(nx+ny, bool)
mask[x] = False
y = z[mask]
yield x, y
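# A small self-contained sketch of the generator's behavior (the definition
# is repeated here so the snippet runs on its own):
#
#     parts = list(_all_partitions(2, 1))
#     # C(3, 2) == 3 partitions of {0, 1, 2} into a pair and a singleton:
#     # ([0, 1], [2]), ([0, 2], [1]), ([1, 2], [0])

```python
import numpy as np
from itertools import combinations

def _all_partitions(nx, ny):
    # Yield every split of indices 0 .. nx+ny-1 into sets of size nx and ny.
    z = np.arange(nx + ny)
    for c in combinations(z, nx):
        x = np.array(c)
        mask = np.ones(nx + ny, bool)
        mask[x] = False
        yield x, z[mask]

# C(3, 2) == 3 partitions of {0, 1, 2} into a pair and a singleton.
parts = list(_all_partitions(2, 1))
```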
def _compute_log_combinations(n):
"""Compute all log combination of C(n, k)."""
gammaln_arr = gammaln(np.arange(n + 1) + 1)
return gammaln(n + 1) - gammaln_arr - gammaln_arr[::-1]
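# The helper above returns log C(n, k) for every k at once. A small sanity
# check against exact binomial coefficients, using math.lgamma in place of
# scipy.special.gammaln so the sketch has no SciPy dependency:
#
#     log_c = log_combinations(5)
#     # exp recovers the binomial coefficients 1, 5, 10, 10, 5, 1

```python
import numpy as np
from math import lgamma, comb

def log_combinations(n):
    # log C(n, k) = log n! - log k! - log (n-k)!, vectorized over k = 0..n.
    gammaln_arr = np.array([lgamma(k + 1) for k in range(n + 1)])
    return lgamma(n + 1) - gammaln_arr - gammaln_arr[::-1]

log_c = log_combinations(5)
# exp recovers the binomial coefficients 1, 5, 10, 10, 5, 1
```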
@dataclass
class BarnardExactResult:
statistic: float
pvalue: float
def barnard_exact(table, alternative="two-sided", pooled=True, n=32):
r"""Perform a Barnard exact test on a 2x2 contingency table.
Parameters
----------
table : array_like of ints
A 2x2 contingency table. Elements should be non-negative integers.
alternative : {'two-sided', 'less', 'greater'}, optional
Defines the null and alternative hypotheses. Default is 'two-sided'.
Please see explanations in the Notes section below.
pooled : bool, optional
Whether to compute score statistic with pooled variance (as in
Student's t-test, for example) or unpooled variance (as in Welch's
t-test). Default is ``True``.
n : int, optional
Number of sampling points used in the construction of the sampling
method. Note that this argument will automatically be converted to
the next higher power of 2 since `scipy.stats.qmc.Sobol` is used to
select sample points. Default is 32. Must be positive. In most cases,
32 points are enough to reach good precision. More points come at a
performance cost.
Returns
-------
ber : BarnardExactResult
A result object with the following attributes.
statistic : float
The Wald statistic with pooled or unpooled variance, depending
on the user choice of `pooled`.
pvalue : float
P-value, the probability of obtaining a distribution at least as
extreme as the one that was actually observed, assuming that the
null hypothesis is true.
See Also
--------
chi2_contingency : Chi-square test of independence of variables in a
contingency table.
fisher_exact : Fisher exact test on a 2x2 contingency table.
boschloo_exact : Boschloo's exact test on a 2x2 contingency table,
which is a uniformly more powerful alternative to Fisher's exact test.
Notes
-----
Barnard's test is an exact test used in the analysis of contingency
tables. It examines the association of two categorical variables, and
is a more powerful alternative to Fisher's exact test
for 2x2 contingency tables.
Let :math:`X_0` be a 2x2 matrix representing the observed sample,
where each column stores a binomial experiment, as in the example
below. Let :math:`p_1, p_2` be the theoretical binomial
probabilities for :math:`x_{11}` and :math:`x_{12}`. When using
Barnard's exact test, we can assert three different null hypotheses:
- :math:`H_0 : p_1 \geq p_2` versus :math:`H_1 : p_1 < p_2`,
with `alternative` = "less"
- :math:`H_0 : p_1 \leq p_2` versus :math:`H_1 : p_1 > p_2`,
with `alternative` = "greater"
- :math:`H_0 : p_1 = p_2` versus :math:`H_1 : p_1 \neq p_2`,
with `alternative` = "two-sided" (the default)
In order to compute Barnard's exact test, we use the Wald
statistic [3]_ with pooled or unpooled variance.
Under the default assumption that both variances are equal
(``pooled = True``), the statistic is computed as:
.. math::
T(X) = \frac{
\hat{p}_1 - \hat{p}_2
}{
\sqrt{
\hat{p}(1 - \hat{p})
(\frac{1}{c_1} +
\frac{1}{c_2})
}
}