Non-uniformity in randrange for large arguments. #53271
Not a serious bug, but worth noting: the result of randrange(n) is not even close to uniform for large n. Witness the obvious skew in the following (this takes a minute or two to run, so you might want to reduce the range argument):

Python 3.2a0 (py3k:81980, Jun 14 2010, 11:23:36)
[GCC 4.2.1 (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from random import randrange
>>> from collections import Counter
>>> Counter(randrange(6755399441055744) % 3 for _ in range(100000000))
Counter({1: 37508130, 0: 33323818, 2: 29168052})

(The actual probabilities here are, as you might guess from the above numbers: {0: 1/3, 1: 3/8, 2: 7/24}.)

The cause: for n < 2**53, randrange(n) is effectively computed as int(random() * n). For small n there's a tiny bias involved, but this is still an effective method. However, as n increases towards 2**53, the bias increases significantly. (For n >= 2**53, the random module uses a different strategy that *does* produce uniformly distributed results.)

A solution would be to lower the cutoff point where randrange() switches from using int(random() * n) to using the _randbelow method.
Note: the number 6755399441055744 is special: it's 0.75 * 2**53, and was deliberately chosen so that the non-uniformity is easily exhibited by looking at residues modulo 3. For other numbers of this size, the non-uniformity is just as bad, but demonstrating the non-uniformity clearly would have taken a little more work.
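To see where the skew comes from, here's a small sketch (not from the original report) that models random() as k / 2**53 for a uniform 53-bit integer k and pushes it through the same int(random() * n) computation; the skew arises purely from floating-point rounding in the multiplication:

from collections import Counter
from random import getrandbits

n = 6755399441055744                      # 0.75 * 2**53
samples = (int((getrandbits(53) / 2**53) * n) % 3 for _ in range(10**6))
print(Counter(samples))                   # residue 1 near 3/8, residue 0 near 1/3, residue 2 near 7/24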
Here's an example patch that removes any bias from randrange(n) (except for bias resulting from imperfections in the core MT generator). I added a small private method to Modules/_randommodule.c to aid the computation. This only fixes one instance of int(random() * n) in the Lib/random.py source; the other instances should be modified accordingly. With this patch, randrange is a touch faster than before (a 20-30% speedup) for small arguments. Is this worth pursuing?
The non-uniformity of randrange has a knock-on effect in other random module functions. For example, take a sample of 100 elements from range(6004799503160661), and take the smallest element from that sample. The exact distribution of that smallest element is somewhat complicated, but you'd expect it to be even with probability very close to 50%. It turns out that it's roughly twice as likely to be even as to be odd:

>>> from random import sample
>>> from collections import Counter
>>> population = range(6004799503160661)
>>> Counter(min(sample(population, 100)) % 2 for _ in range(100000))
Counter({0: 66810, 1: 33190})
Here's a more careful Python-only patch that fixes the bias in randrange and randint (but not in shuffle, choice or sample). It should work well both for Mersenne Twister and for subclasses of Random that use a poorer PRNG with badly-behaved low-order bits.
Will take a look at this in the next few days.
I would prefer to see the correct algorithm in the stdlib and a recipe for how to reproduce old sequences for the users who care.
FWIW, we spent ten years maintaining the ability to reproduce sequences. It has become an implicit promise.
Hmm. I hadn't considered the reproducibility problem. Does the module aim for reproducibility across all platforms *and* all versions of Python? Or just one of those? For small n, I think the patched version of randrange(n) produces the same sequence as before with very high probability, but not with certainty. Since that sounds like a recipe for hard-to-find bugs, it might be better to deliberately perturb the outputs somehow so that the sequence is obviously different from before, rather than subtly different.
'Random', without qualification, is commonly taken to mean 'with uniform distribution'. Otherwise it has no specific meaning and could well be a synonym for 'arbitrary' or 'haphazard'. The behavior reported is buggy and in my opinion should be fixed if possible. I have done simulation research in the past and do not consider such defects minor. If I had results that depended on these functions, I might want to rerun with the fixed versions to make sure the end results were not affected. I would certainly want the fixed behavior for any future work.

I do not see any promise of reproducibility of sequences from version to version. I do not really see the point, as one can rerun with the old Python version or copy the older random.py. The old versions could be kept with an 'old_' prefix and documented in a separate subsection that starts with "Do not use these buggy old versions of x and y in new code. They are only present for those who want to reproduce old sequences." But I wonder how many people would use them.
FWIW, here are two approaches to getting an equi-distributed version of int(n*random()) where 0 < n <= 2**53. The first mirrors the approach currently in the code. The second approach makes fewer calls to random().

from random import random

def rnd1(n):
    assert 0 < n <= 2**53
    N = 1 << (n-1).bit_length()     # smallest power of two >= n
    r = int(N * random())
    while r >= n:                   # reject and redraw until the value is in range
        r = int(N * random())
    return r

def rnd2(n, N=1<<53):
    assert 0 < n <= N
    NN = N - (N % n) + 0.0          # largest multiple of n <= N
    r = N * random()                # a float with an integral value
    while r >= NN:                  # reject the top sliver that would bias the result
        r = N * random()
    return int(r) % n
Either of these looks good to me. If the last line of the second is changed from "return int(r) % n" to "return int(r) // (N // n)", then it'll use the high-order bits of random() instead of the low-order bits. This doesn't matter for MT, but might matter for subclasses of Random using a different underlying generator.
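Concretely, with that one-line change the second function would read as follows (a sketch; rnd2_highbits is just an illustrative name):

from random import random

def rnd2_highbits(n, N=1 << 53):
    assert 0 < n <= N
    NN = N - (N % n) + 0.0        # largest multiple of n <= N, as a float
    r = N * random()              # integral-valued float in [0, N)
    while r >= NN:                # reject the top sliver that would bias the result
        r = N * random()
    return int(r) // (N // n)     # use high-order bits of random() instead of low-order bits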
This wouldn't be the first time reproducibility is dropped, since, reading from the docs: “As an example of subclassing, the random module provides the WichmannHill class that implements an alternative generator in pure Python. The class provides a backward compatible way to reproduce results from earlier versions of Python, which used the Wichmann-Hill algorithm as the core generator.” Also: […]

IMO it should either be documented explicitly, or be taken less dearly. There's not much value in an "implicit promise" that's only known by a select few. (Besides, as Terry said, I think most people are more concerned with the quality of the random distribution than with the reproducibility of sequences.)
I guess Antoine wanted to point out this: "Changed in version 2.3: MersenneTwister replaced Wichmann-Hill as the default generator." But as the paragraph points out, Python did provide the non-default WichmannHill […] My brief reading on this topic does suggest that 'repeatability' is […]
BTW, the Wichmann-Hill code is gone in py3k, so that doc paragraph needs removing or updating.
Thanks guys, I've got it from here. Some considerations for the PRNG are: […]

I'm looking at several ideas: […]
Am raising the priority to normal because I think some effort needs to be made to address equidistribution in 3.2. It will take some time to come up with a good patch that balances quality, simplicity, speed, thread-safety, and reproducibility. May also consult with Tim Peters, who has previously voiced concerns about stripping bits off of multiple calls to random(), because the MT proofs make no guarantees about quality in those cases. I don't think this is an issue in practice, but in theory, when we start tossing out some of the calls to random(), we're also throwing away the guarantees of a long period, uniformity, and 623-dimensional equidistribution.
I believe a reasonable (com)promise would be to guarantee repeatability […]
If recoding in C is acceptable, I think there may be better (= simpler and faster) ways than doing a direct translation of rnd2. For example, for small k, the following algorithm for randrange(k) suffices: […]

I can provide code (with that 3rd step fully expanded) if you're interested in this approach. This is likely to be significantly faster than a direct translation of rnd2, since in the common case it requires only: one 32-bit deviate from MT, one integer multiplication, one subtraction, and one comparison. By comparison, rnd2 uses (at least) two 32-bit deviates and massages them into a float before doing arithmetic with that float. Though it's possible (even probable) that any speed gain would be insignificant in comparison to the rest of the Python machinery involved in a single randrange call.
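The three steps themselves aren't reproduced in this thread, but for context, a classic rejection scheme built from single 32-bit deviates looks roughly like the following (a sketch only, not necessarily the algorithm described above; small_randbelow is an illustrative name):

from random import getrandbits

def small_randbelow(k):
    """Exactly uniform integer in [0, k), for 0 < k <= 2**32."""
    limit = (1 << 32) - ((1 << 32) % k)   # largest multiple of k <= 2**32
    while True:
        r = getrandbits(32)               # stands in for one raw 32-bit MT word
        if r < limit:                     # accepted with probability greater than 1/2
            return r % k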
Just to illustrate, here's a patch that adds a method Random._smallrandbelow, based on the algorithm I described above.
Antoine, there does need to be repeatability; there's no question about that. The open question for me is how to offer that repeatability in the cleanest manner. People use random.seed() for reproducible tests. They need it to have their studies become independently validatable, etc. Some people are pickling the state of the RNG and need to restart where they left off, etc. Mark, thanks for the alternative formulation. I'll take a look when I get a chance. I've got it from here.
randint.py: another algorithm to generate a random integer in a range. It uses only operations on unsigned integers (no evil floating-point numbers). It calls tick() multiple times to generate enough entropy. It has a uniform distribution (if the input generator has a uniform distribution). tick() is simply the output of the Mersenne Twister generator. The algorithm can be optimized (especially the part computing ndigits and scale).
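A sketch in the spirit of that description (not the attached randint.py), with getrandbits(32) standing in for tick(); the names ndigits and scale follow the wording above:

from random import getrandbits

def randint_below(n):
    """Uniform integer in [0, n), assembled from whole 32-bit words."""
    ndigits = (max(n - 1, 1).bit_length() + 31) // 32  # 32-bit words needed to cover n - 1
    scale = 1 << (32 * ndigits)                        # total span of that many words
    limit = scale - (scale % n)                        # largest multiple of n <= scale
    while True:
        r = 0
        for _ in range(ndigits):                       # draw ndigits words of entropy
            r = (r << 32) | getrandbits(32)
        if r < limit:                                  # reject the biased top sliver
            return r % n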
Well, that doesn't address my proposal of making it repeatable across […] If some people really need perpetual repeatability, I don't understand […]
Distribution with my algorithm:
... => Counter({0L: 33342985, 2L: 33335781, 1L: 33321234})
Distribution: {0: 0.33342985000000003, 2: 0.33335780999999998, 1: 0.33321234}
A couple of points:

(1) In addition to documenting the extent of the repeatability, it would be good to have tests to prevent changes that inadvertently change the sequence of randrange values.

(2) For large arguments, cross-platform reproducibility is already a bit fragile. For example, the _randbelow function depends on the system log function; see the line

    k = int(1.00001 + _log(n-1, 2.0))

Now that we have the bit_length method available, it might be better to use that.
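For instance, point (2) could be addressed by replacing the log-based computation with an exact integer one (a sketch of the suggestion, not a committed change; the helper name is made up):

def _choose_bits(n):
    # number of bits needed to represent n - 1, computed exactly in integers;
    # replaces the libm-dependent k = int(1.00001 + _log(n - 1, 2.0))
    return (n - 1).bit_length()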
Put in a fix with r84576. May come back to it to see if it can or should be optimized with C. For now, this gets the job done.
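The committed code isn't reproduced here, but the general shape of a getrandbits-based rejection scheme of the kind that removes this bias (and that later versions of random.py rely on) is roughly:

from random import getrandbits

def randbelow(n):
    """Uniform integer in [0, n) by rejection sampling on getrandbits --
    a sketch of the general approach, not the exact committed code."""
    k = n.bit_length()            # getrandbits(k) covers [0, 2**k) with 2**k >= n
    r = getrandbits(k)
    while r >= n:                 # each draw is rejected with probability at most 1/2
        r = getrandbits(k)
    return r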