
ENH: stats: add nonparametric one-sample quantile test and CI #12680

Merged: 83 commits, Aug 4, 2023
Changes from 15 commits
ad36fc4
mwe of the function, tests added but does not work yet
romain-jacob Aug 4, 2020
896e431
added self to contrib list"
romain-jacob Aug 5, 2020
70304eb
added function name to __init__ to appear in the online doc
romain-jacob Aug 5, 2020
e3571b6
function is ready, docstring and tests to finish
romain-jacob Aug 5, 2020
2c3a3d7
ENH: added non-parametric confidence intervals for quantiles into sci…
romain-jacob Aug 7, 2020
fb5f8a8
Merge branch 'master' into confint
romain-jacob Aug 7, 2020
9e33d1e
fixed docstring
romain-jacob Aug 7, 2020
e179d68
merge new commits
romain-jacob Aug 7, 2020
8483fc4
Merge branch 'confint' of github.com:romain-jacob/scipy into confint
romain-jacob Aug 7, 2020
07abe2f
fixed pep8 typos and doctest typos
romain-jacob Aug 7, 2020
59bb0ee
improved efficiency by using binom.isf
romain-jacob Aug 10, 2020
2a5c286
added None return to the docstring
romain-jacob Aug 11, 2020
ecff946
fixed docstring bug
romain-jacob Aug 11, 2020
1b3542b
docstring updates
romain-jacob Aug 17, 2020
3332939
Merge branch 'master' of https://github.com/scipy/scipy into confint
romain-jacob Aug 17, 2020
5c164ac
Merge branch 'master' of https://github.com/scipy/scipy into confint
romain-jacob Aug 30, 2020
b9dab64
reverted content of THANKS.txt
romain-jacob Aug 30, 2020
342278b
added reference to a related R package
romain-jacob Oct 7, 2020
5db55d2
fixed doctring lines being too long
romain-jacob Oct 8, 2020
bcfe26c
corrected line length issues
romain-jacob Oct 9, 2020
765b1ce
update version number
romain-jacob Jan 6, 2021
3cb4f3b
corrected typo in docstring
romain-jacob Jan 6, 2021
3586c86
corrected typo in doctring
romain-jacob Jan 6, 2021
948810a
typo in doctring
romain-jacob Jan 6, 2021
de88bd0
Merge branch 'master' into confint
mdhaber Mar 22, 2021
84c33e6
pass on docstring
romain-jacob May 27, 2021
790cd76
Updated unit test for confint_quantile
romain-jacob May 27, 2021
59d37b5
merge master into confint
romain-jacob May 27, 2021
c528102
MAINT: stats: confint: revert undesired changes to stats.py
mdhaber May 27, 2021
ad996a5
DOC: stats: confint_quantile: add whitespace back to docstrings
mdhaber May 29, 2021
c32fd1f
DOC: stats: confint_quantile: add carriage return back to docstringsU…
romain-jacob May 31, 2021
862701d
DOC: stats: confint_quantile: add carriage return back to docstringsU…
romain-jacob May 31, 2021
21842fe
DOC: stats: confint_quantile: add carriage return back to docstringsU…
romain-jacob May 31, 2021
786a89b
DOC: fixed docstring test
romain-jacob May 31, 2021
9ea76a6
DOC: fixed doctest
romain-jacob May 31, 2021
5c34c14
DOC: fixed doctest
romain-jacob May 31, 2021
222137e
merge main
romain-jacob Feb 28, 2022
acce928
MAINT: added back the confint code
romain-jacob Feb 28, 2022
6b9355c
Merge branch 'main' into confint
mdhaber Aug 6, 2022
74764b8
MAINT: stats.confint_quantile: fix merge issues
mdhaber Aug 6, 2022
2e95192
STY: stats.confint_quantile: PEP8
mdhaber Aug 29, 2022
7e75fe7
Merge remote-tracking branch 'upstream/main' into confint
mdhaber Aug 29, 2022
9a9fdab
ENH: stats.quantile_test: add quantile test and move CI to method of …
mdhaber Sep 11, 2022
ee9eae8
MAINT: stats.quantile_test: adjust (correct?) `confidence_interval`
mdhaber Sep 12, 2022
db934ff
Merge remote-tracking branch 'upstream/main' into confint
mdhaber Sep 12, 2022
9bf4fc3
DOC: stats.quantile_test: update documentation
mdhaber Sep 13, 2022
89d09e0
Merge branch 'main' into confint
mdhaber Nov 18, 2022
616c59a
FIX: correct p-values for the Quantile test
romain-jacob Dec 14, 2022
fa9599f
Merge branch 'main' into confint
mdhaber Feb 12, 2023
b020bff
some PEP8 formatting
romain-jacob Mar 11, 2023
daf468a
comment update
romain-jacob Mar 11, 2023
b29d329
Cleanup
romain-jacob Mar 11, 2023
0d072f7
cleanup
romain-jacob Mar 11, 2023
ac56ad7
Fixed comment typo
romain-jacob Mar 11, 2023
1661387
doc typo
romain-jacob Mar 11, 2023
83540c7
doc typo
romain-jacob Mar 11, 2023
eb78bd7
added IV test for quantile_test
romain-jacob Mar 14, 2023
9d01d42
wip tests for QuantileTest
romain-jacob Mar 14, 2023
dcca5d6
cleaned up the alternative IV test
romain-jacob Mar 21, 2023
6b48ba3
unitests for QuantileTest
romain-jacob May 24, 2023
03909db
PEP8 pass
romain-jacob Jun 5, 2023
16c89eb
added example for the confidence_interval method
romain-jacob Jun 5, 2023
8984beb
Merge remote-tracking branch 'upstream/main' into confint
mdhaber Jun 6, 2023
5c13104
Apply suggestions from code review
mdhaber Jun 8, 2023
4d05210
updated Notes section of the docstring
romain-jacob Jun 13, 2023
f5d8ae0
added examples
romain-jacob Jun 14, 2023
82b9074
TST: stats.quantile_test: adjust tests
mdhaber Jun 15, 2023
9053150
Fix typo in the example
romain-jacob Jun 16, 2023
45b0029
Text improvement in the example
romain-jacob Jun 16, 2023
6316cc6
Improvement example text
romain-jacob Jun 16, 2023
297526c
Simplification iv test
romain-jacob Jun 16, 2023
37ac396
Fixing typo in comments
romain-jacob Jun 16, 2023
9e33a97
Merge branch 'confint' into gh12680
romain-jacob Jun 19, 2023
a0cc90a
Merge pull request #1 from mdhaber/gh12680
romain-jacob Jun 19, 2023
cb3f4b0
MAINT: stats.quantile_test: rewriting to follow Conover's book [skip ci]
romain-jacob Jun 19, 2023
a153ed4
TST: stats.quantile_test: use output of R implem in test [skip ci]
romain-jacob Jun 19, 2023
6e3bc47
MAINT:stats.quantile_test: using dataclass instead of tuple_bunch [sk…
romain-jacob Jun 19, 2023
5b2c03a
TST: stats.quantile_test: adding Conover examples as tests [skip ci]
romain-jacob Jun 19, 2023
e7b1144
MAINT: stats.quantile_test: more adjustments per review
mdhaber Jun 20, 2023
5be7859
Merge remote-tracking branch 'upstream/main' into confint
mdhaber Jul 8, 2023
dbf4734
TST: stats.quantile_test: strengthen test of CIs vs p-values
mdhaber Jul 8, 2023
b5f1b67
DOC: Docstring updates + change result attributes
steppi Jul 12, 2023
5bcd5e1
DOC: More doc fixes [skip actions] [skip cirrus]
romain-jacob Jul 19, 2023
2 changes: 2 additions & 0 deletions THANKS.txt
@@ -241,6 +241,8 @@ Wesley Alves for improvements to scipy.stats.jarque_bera and scipy.stats.shapiro
Mark Borgerding for contributing linalg.convolution_matrix.
Shashaank N for contributions to scipy.signal.
Frank Torres for fixing a bug with solve_bvp for large problems.
Romain Jacob for non-parametric confidence intervals for quantiles
added in scipy.stats
Ben West for updating the Gamma distribution documentation.

Institutions
1 change: 1 addition & 0 deletions scipy/stats/__init__.py
@@ -201,6 +201,7 @@
entropy
median_absolute_deviation
median_abs_deviation
confint_quantile

Frequency statistics
====================
213 changes: 212 additions & 1 deletion scipy/stats/stats.py
@@ -179,14 +179,14 @@
from scipy import linalg
from . import distributions
from . import mstats_basic
from ._discrete_distns import binom
from ._stats_mstats_common import (_find_repeats, linregress, theilslopes,
siegelslopes)
from ._stats import (_kendall_dis, _toint64, _weightedrankedtau,
_local_correlations)
from ._rvs_sampling import rvs_ratio_uniforms
from ._hypotests import epps_singleton_2samp


__all__ = ['find_repeats', 'gmean', 'hmean', 'mode', 'tmean', 'tvar',
'tmin', 'tmax', 'tstd', 'tsem', 'moment', 'variation',
'skew', 'kurtosis', 'describe', 'skewtest', 'kurtosistest',
@@ -207,6 +207,7 @@
'kstest', 'ks_1samp', 'ks_2samp',
'chisquare', 'power_divergence', 'mannwhitneyu',
'tiecorrect', 'ranksums', 'kruskal', 'friedmanchisquare',
'confint_quantile',
'rankdata', 'rvs_ratio_uniforms',
'combine_pvalues', 'wasserstein_distance', 'energy_distance',
'brunnermunzel', 'epps_singleton_2samp']
@@ -7469,6 +7470,216 @@ def brunnermunzel(x, y, alternative="two-sided", distribution="t",
return BrunnerMunzelResult(wbfn, p)


def _confint_lowerbound(n, quantile, confidence):
r"""
Compute the lower bound for a one-sided confidence interval
for a given
- quantile (0<`quantile`<1)
- confidence level (0<`confidence`<1)
- number of samples `n`.

Returns the largest index of the sample being a valid lower bound,
or `None` if there are not enough samples to derive one.

Used by the public function confint_quantile().

.. versionadded:: 1.6.0
"""

# compute all probabilities from the binomial distribution for the quantile of interest
bd = binom(n, quantile)

# the lower bound is the last index before the invert survival function value for
# the target confidence level
lb = bd.isf(confidence) - 1
mdhaber (Contributor): This is the off-by-one possibility I mentioned, and I think it makes sense the way you have it. The intent of subtracting one is that it guarantees that:

bd.sf(lb) >= confidence > bd.sf(lb+1)

Is that right?

romain-jacob (Contributor, Author): yes


if lb < 0:  # isf returns -1 if there is no matching index

mdhaber (Contributor): Can you come up with a test case?

return None
else:
return int(lb)
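The off-by-one guarantee discussed in the thread above can be checked numerically. A quick illustrative sketch (not part of the diff), using only `scipy.stats.binom` and the example sample size from the docstring below:

```python
from scipy.stats import binom

# Example matching the docstring below: n = 9 samples, median, 95% confidence.
n, quantile, confidence = 9, 0.5, 0.95
bd = binom(n, quantile)

# Same computation as _confint_lowerbound: index just below the isf value.
lb = int(bd.isf(confidence)) - 1

# The invariant the reviewer states: the bound is the last index whose
# survival probability still meets the confidence level.
assert bd.sf(lb) >= confidence > bd.sf(lb + 1)
```

With these inputs `lb` comes out to 1, i.e. the second-smallest order statistic, which agrees with the `(2, 8)` doctest result further down.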


def confint_quantile(x, quantile, confidence, type='one-sided'):
r"""Compute non-parametric confidence intervals for any quantile.

This function implements a non-parametric approach to compute
confidence intervals for quantiles. The approach is attributed to Thompson [1]_
and later proven to be applicable to any set of i.i.d. samples [2]_.
The computation is based on the observation that the probability that a quantile
:math:`q` is larger than any sample :math:`x_m` (:math:`1\leq m \leq N`)
can be computed as

.. math::

\mathbb{P}(x_m \leq q) = 1 - \sum_{k=0}^{m-1} \binom{N}{k} q^k(1-q)^{N-k}

Furthermore, these probabilities are symmetric, which allows computing both
upper and lower bounds from the same computation:

.. math::

\mathbb{P}(x_m \leq q) = \mathbb{P}(x_{N-m+1} \geq 1-q).
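Both identities above can be verified numerically with `scipy.stats.binom` (a quick illustrative sketch, not part of the PR):

```python
import numpy as np
from scipy.stats import binom

N, q = 9, 0.3
m = np.arange(1, N + 1)  # 1-based sample indices, as in the docstring

# P(x_m <= q) = 1 - sum_{k=0}^{m-1} C(N,k) q^k (1-q)^(N-k),
# i.e. the binomial survival function evaluated at m-1.
p = binom(N, q).sf(m - 1)

# Symmetry: P(x_m <= q) = P(x_{N-m+1} >= 1-q).
# With X ~ Binomial(N, q) we have N - X ~ Binomial(N, 1-q), so
# sf_{N,q}(m-1) = cdf_{N,1-q}(N-m).
p_sym = binom(N, 1 - q).cdf(N - m)

assert np.allclose(p, p_sym)
```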

The function computes confidence intervals for a given quantile and
confidence level, based on `x` which is either a set of samples
(one-dimensional array_like) or the number of samples available.
The confidence intervals are valid if and only if the samples are i.i.d.

Both one-sided and two-sided confidence intervals can be obtained
(default is one-sided). The function returns two values: either the bounds for the two one-sided
confidence intervals, or the lower and upper bounds of a two-sided confidence interval.
The return values are either the indexes of the bounds (if `x` is an integer) or
sample values (if `x` is the set of samples).
`None` is returned when there are not enough samples to compute
the desired confidence interval.

The two-sided confidence interval is not unique (see Notes below).
Without further assumptions on the samples (e.g., the nature of the underlying distribution),
the one-sided intervals are optimally tight.

Parameters
----------
x : array_like or int
Array of samples, should be one-dimensional.
If integer, taken as the number of samples available (strictly positive)
quantile : float
The quantile for which we want to compute the confidence interval.
Must be strictly between 0 and 1.
confidence : float
The desired confidence level of the confidence interval.
Must be strictly between 0 and 1.
type : {'one-sided', 'two-sided'}, optional
Defines the type of confidence interval computed.
Default is 'one-sided'.

* 'one-sided' : computes the best possible one-sided confidence intervals (both lower and upper bounds) for the given quantile.
* 'two-sided' : computes a two-sided confidence interval by combination of two one-sided intervals. E.g., a 90% two-sided interval is computed by combining two 95% one-sided intervals

Returns
-------
mdhaber (Contributor): I need to think more about this. In hindsight, having stats functions return individual variables (or tuples) has made them difficult to extend in the future. I think we're going to want to return an object here. Is there anything else that object should contain (e.g. estimate of the quantile)? (This gets me thinking about whether this function should actually be called quantile and basically be an extension of np.quantile that returns not only an estimate of the underlying distribution's quantile but also a confidence interval.)

romain-jacob (Contributor, Author): I don't have anything against returning an object (I quite like the idea actually). If we go down this way, then I would return an object called e.g. CI with pretty much all information. That is, with the following properties:

CI.quantile       # the estimated quantile
CI.conf_level     # the (desired) confidence level
CI.type           # one- or two-sided interval
CI.sample_size    # the size of the sample used to compute the CI
CI.lb             # the index of the CI lower bound
CI.ub             # the index of the CI upper bound
CI.lb_value       # the value of the CI lower bound (if available)
CI.ub_value       # the value of the CI upper bound (if available)

Regarding your suggestion to make this function an extension of np.quantile: I think it makes sense, but I fear it would make the function's usage somewhat blurry/hard to find. My thinking is, many people do not use CIs because they don't know much about them and it's not (always) easy to find an implementation to compute them. To fix that, I think having a function that is explicitly made for computing CIs is better than mixing everything into a global quantile function that would do everything (multi-dim, different interpolations for the empirical quantile, etc.).

Maybe more fundamentally, np.quantile deals with empirical quantiles, while here we aim to estimate distribution quantiles; now that I think about it this way, I become even more against merging the two. These two concepts are already confused enough :-)

mdhaber (Contributor): OK. Maybe scipy.stats.quantile is not what we want, but I think we'll need to think about the interface a little more. We don't really have an established pattern for confidence intervals to follow, so we want to structure things in a way that can be extended.

romain-jacob (Contributor, Author): I'm all in for that, and I'll gladly put in the extra effort to make things more extendable. I'm just not the most experienced to judge/have an opinion on what that would look like.
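The result object floated in the thread above could be sketched as a dataclass. This is purely illustrative: the names follow romain-jacob's comment and are not the API that was eventually merged.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CI:
    """Hypothetical result object for confint_quantile, per the review thread."""
    quantile: float                    # the estimated quantile
    conf_level: float                  # the (desired) confidence level
    type: str                          # 'one-sided' or 'two-sided'
    sample_size: int                   # size of the sample used to compute the CI
    lb: Optional[int]                  # index of the CI lower bound (None if unfeasible)
    ub: Optional[int]                  # index of the CI upper bound (None if unfeasible)
    lb_value: Optional[float] = None   # value of the lower bound, if samples were given
    ub_value: Optional[float] = None   # value of the upper bound, if samples were given


# Example: the 9-sample median CI from the docstring below, by index.
ci = CI(quantile=0.5, conf_level=0.95, type='one-sided', sample_size=9, lb=1, ub=7)
```

An object like this would let new attributes be added later without breaking callers that unpack a tuple, which is the extensibility concern mdhaber raises.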

LB : float or int or `None`
value or index of the lower bound of

* the right-open one-sided confidence interval (default, ``type=one-sided``),
* a two-sided confidence interval (if ``type=two-sided``)

`None` is returned when there are not enough samples to compute
the confidence interval with the desired level of confidence.
UB : float or int or None
value or index of the upper bound of

* the left-open one-sided confidence interval (default, ``type=one-sided``),
* a two-sided confidence interval (if ``type=two-sided``)

`None` is returned when there are not enough samples to compute
the confidence interval with the desired level of confidence.

Notes
-----
Two-sided confidence intervals are not guaranteed to be optimal.
That is, there may exist a tighter interval that contains the quantile
of interest with probability at least the confidence level.
Such intervals could be found by exhaustive search,
which we do not perform for efficiency reasons.

References
----------
.. [1] W. R. Thompson, "On Confidence Ranges for the Median and
Other Expectation Distributions for Populations of Unknown
Distribution Form," The Annals of Mathematical Statistics,
vol. 7, no. 3, pp. 122-128, 1936,
Accessed: Sep. 18, 2019. [Online].
Available: https://www.jstor.org/stable/2957563.
.. [2] H. A. David and H. N. Nagaraja, "Order Statistics in
Nonparametric Inference" in Order Statistics,
John Wiley & Sons, Ltd, 2005, pp. 159-170.


Examples
--------
>>> from scipy.stats import confint_quantile
>>> x = [2, 8, 3, 6, 4, 1, 5, 9, 7]
>>> confint_quantile(x, 0.5, 0.95)
(2, 8)

To compute a two-sided interval instead, use the `type` parameter.

>>> confint_quantile(x, 0.5, 0.99, type='two-sided')
(1, 9)

You can also pass the number of samples as argument (instead of the samples
themselves). The returned values are then the indexes of the lower and upper
bounds of the confidence intervals.

>>> N = 20
>>> confint_quantile(N, 0.75, 0.90)
(11, 17)


.. versionadded:: 1.6.0
"""

##
# Checking the inputs
#
# x can be either an integer or a one-dimensional array-like
if isinstance(x, int):
if x < 1:
raise ValueError("Invalid parameter: "+repr(x)+", `x` must be either a strictly positive integer or one-dimensional array-like.")
n = x
return_index = True # The function will return the confint indexes
else:
x = np.asarray(x)
if x.ndim != 1:
raise ValueError("Invalid parameter: "+repr(x)+", `x` must be either a strictly positive integer or one-dimensional array-like.")
x = np.sort(x, axis=0)
n = x.shape[0]
return_index = False # The function will return the confint as values of x
#
# `confidence` and `quantile` must be between 0 and 1
if confidence >= 1 or confidence <= 0:
raise ValueError("Invalid `confidence`: "+repr(confidence)+". Provide a real number strictly between 0 and 1.")
if quantile >= 1 or quantile <= 0:
raise ValueError("Invalid `quantile`: "+repr(quantile)+". Provide a real number strictly between 0 and 1.")
#
# `type` can be only `one-sided` or `two-sided`
if not (type == 'one-sided' or type == 'two-sided'):
raise ValueError("Invalid parameter: "+repr(type)+". Valid 'type' values: 'one-sided' or 'two-sided'")
##

# Handle the type of intervals (one- or two-sided)
if type == 'two-sided':
conf_working = (1+confidence)/2
else:
# type == 'one-sided'
conf_working = confidence
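The (1+confidence)/2 adjustment above is a union-bound (Bonferroni) combination of two one-sided intervals; a tiny numeric illustration (not part of the diff):

```python
# Combining two one-sided bounds, each at level (1 + confidence) / 2,
# yields a two-sided interval at the requested confidence level:
confidence = 0.90
conf_working = (1 + confidence) / 2   # 0.95 per one-sided bound
miss_each = 1 - conf_working          # each bound misses with prob <= 0.05
miss_both = 2 * miss_each             # union bound on total miss probability
assert abs(miss_both - (1 - confidence)) < 1e-12
```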

# Compute the lower bound
LB = _confint_lowerbound(n, quantile, conf_working)
mdhaber (Contributor): Can we use lower-case lb and ub for the things that are returned, and something else for the temporary lb variable?


# Compute the upper bound
# -> deduced from the lower bound of (1-quantile)
lb = _confint_lowerbound(n, 1-quantile, conf_working)
if lb is None:
UB = None
else:
UB = ((n-1) - lb) # First index is 0 (not 1), hence the -1

if return_index:
return LB, UB
else:
# Handle unfeasible bounds
if LB is None:
x_lb = None
else:
x_lb = x[LB]
if UB is None:
x_ub = None
else:
x_ub = x[UB]
return x_lb, x_ub


def combine_pvalues(pvalues, method='fisher', weights=None):
"""
Combine p-values from independent tests bearing upon the same hypothesis.
21 changes: 21 additions & 0 deletions scipy/stats/tests/test_stats.py
@@ -5700,3 +5700,24 @@ def test_dist_perm(self):
random_state=1)
assert_approx_equal(stat_dist, 0.163, significant=1)
assert_approx_equal(pvalue_dist, 0.001, significant=1)

class TestConfInt(object):
""" Test the computation of non-parametric
confidence intervals for quantiles
"""
X = array([2, 8, 3, 6, 4, 1, 5, 9, 7], float)
mdhaber (Contributor): If these tests are from data that is published somewhere, so that we can confirm that these are the correct results, please list the reference. If these tests don't confirm that confint_quantile is doing the right thing - just that it continues to do the same thing - then I would ask for stronger tests. Is there anything you can compare against?

Also, please test your input validation. Can you think of any edge cases that might make sense to test that aren't specifically checked by input validation?

I would also prefer to see additional sanity checks - like, for a large number of samples, does the confidence interval get tighter? So could we do a test on, say, np.linspace(0, 1, N) for some arbitrary quantile and large N and observe that the upper and lower bounds are close to that value?

romain-jacob (Contributor, Author): Regarding the first point, I have not found something doing the exact same thing. This was discussed a bit above:

    I just browsed through the R packages and found QuantileNPCI which seems to be doing about the same thing, although it seems it computes only two-sided CIs (not 100% sure yet, I just had a quick look). Unfortunately I am not proficient in R, so it's not easy for me to quickly compare the two implementations. But I can try computing the CI presented in the QuantileNPCI example; that will already do for a quick check.

I checked the example in the vignette of the QuantileNPCI R module. The computation method is similar, but there are some differences in the precise bound definition:

  • QuantileNPCI derives the fraction of the sample index that defines the bound with the desired level of confidence, which is computed by interpolation from the sample values
  • Our implementation is more conservative: we take as bound the first sample value that is guaranteed to give a CI with the desired level of confidence.

So in this context, I am not sure if it makes sense to write a benchmark using that R function (since they do not do exactly the same computation). In any case, I added the reference to the package in the function docstring.

    If I understand the comment correctly, then the results should be the same when the fraction corresponds to an integer, i.e. no interpolation between two indices is needed. (I have this case sometimes when writing unit tests for statsmodels, where I just compare cases that fall on integer values.)

I think so.

    Do you think it would be worthwhile to include such a test?

Besides that, I can definitely add some more validation tests (that's the first time I wrote such code; I was not too sure how far I needed to go...). I will work on this and expand the test set.

mdhaber (Contributor, Jan 6, 2021): OK. Sometimes what I do, if possible, is write a minimalist implementation of the method right in the test suite (e.g. here). If you can boil it down to just a few lines that are easier to confirm (and if you write them on a different day, without copying and pasting, so as not to be influenced by any possible mistakes above), it serves a bit like an independent check.

romain-jacob (Contributor, Author): Yes, that should be fine. I do have a different implementation that I can use for this. It was changed following earlier comments to leverage methods from the binom distribution. Anyhow, I see what I should do :-)
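A minimalist independent check along the lines mdhaber suggests could look like the sketch below. `confint_onesided_bruteforce` is a hypothetical helper written for cross-checking only, and the expected indices (2, 6) come from the doctest values in the diff above:

```python
from scipy.stats import binom


def confint_onesided_bruteforce(n, q, conf):
    """Largest 0-based index L with P(X > L) >= conf for X ~ Binomial(n, q).

    Hypothetical brute-force reimplementation for cross-checking;
    returns None when no index satisfies the confidence requirement.
    """
    for L in range(n - 1, -1, -1):
        if binom.sf(L, n, q) >= conf:
            return L
    return None


n, q, conf = 9, 0.5, 0.9
lb = confint_onesided_bruteforce(n, q, conf)
# Upper bound by symmetry: deduced from the lower bound of 1 - q.
ub = (n - 1) - confint_onesided_bruteforce(n, 1 - q, conf)
assert (lb, ub) == (2, 6)  # matches the (2, 6) index result in the tests above
```

Because the loop scans all indices instead of calling isf, it cannot share an off-by-one mistake with the implementation under test, which is the point of an independent check.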


def test_index_equal_value(self):
assert_equal(stats.confint_quantile(self.X, 0.5, 0.9), (3.0, 7.0))
assert_equal(stats.confint_quantile(self.X.shape[0], 0.5, 0.9), (2, 6))

def test_twosided(self):
assert_equal(stats.confint_quantile(self.X.shape[0], 0.5, 0.9, type='two-sided'), (1, 7))

def test_values(self):
N, q, c = 100, 0.75, 0.95
assert_equal(stats.confint_quantile(N, q, c), (67, 82))
N, q, c = 10, 0.75, 0.95
assert_equal(stats.confint_quantile(N, q, c), (4, None))
N, q, c = 20, 0.175, 0.95
assert_equal(stats.confint_quantile(N, q, c), (0, 6))