Describe the bug
The code sample below demonstrates how a default-constructed uniform_real_distribution<float> (a = 0, b = 1) generates 1.0f when fed a random number with all bits set:
Expected behavior
Regardless of the input received from the RNG, uniform_real_distribution should never produce b, as the interval is supposed to be right-exclusive. GCC passes this test.
STL version
Microsoft Visual Studio Community 2019
Version 16.6.4
Additional context
The error seems to originate from generate_canonical, which, for the inputs in the code above, simplifies down to:
return (float) UINT_MAX / (UINT_MAX + 1.0f);
which, mathematically speaking, should be less than 1, but in practice gets rounded to 1.0f due to limited floating-point precision. generate_canonical is flawed in other regards as well: using a normalizing/scaling division leads to non-uniformly distributed values. Both issues could be avoided by generating random floats the "canonical" way, which is to set the bits of the mantissa directly (and simply throw away the leftover entropy), along the lines of:
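(The original snippet is not preserved in this thread; the following is a minimal sketch of the mantissa-filling idea described above, with illustrative names:)

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical sketch (not the STL's code): fill the 23-bit mantissa of
// a float in [1, 2) with the top bits of rnd, then subtract 1.0f to
// shift the result into [0, 1). Values are spaced exactly 2^-23 apart.
float bits_to_float(std::uint32_t rnd) {
    const std::uint32_t one  = 0x3F800000u;      // bit pattern of 1.0f
    const std::uint32_t bits = one | (rnd >> 9); // top 23 bits -> mantissa
    float f;
    std::memcpy(&f, &bits, sizeof f);            // well-defined type pun
    return f - 1.0f;                             // strictly in [0, 1)
}
```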
which not only produces values strictly within [0, 1) for any value of rnd, but also ensures that all values are equally spaced, yielding a perfectly uniform distribution.
This paper proposes a new specification of generate_canonical, but it doesn't appear to me that it would address the issue with the uniform_real_distribution specification.
@fsb4000 Thanks for letting me know. So, as the standard is currently written, we are forced to break it one way or another because it is internally inconsistent. We can either
Follow the dictated implementation, breaking the mathematical constraints, or
Conform to the mathematical constraints but deviate from the dictated implementation.
Would it make sense to opt for the second option? Personally, I'd prefer a mathematically sound implementation over anything else, seeing as we aren't gonna be standard-conformant either way.
@denniskb Yes, I think the second option is good. But I'm just a random person on the Internet, let's wait for an opinion of the Microsoft employees :)
Also tracked by DevCom-110322 and Microsoft-internal VSO-253526 / AB#253526.