`numpy.random.uniform()` includes the lower bound, so I believe that was on purpose.
The upper bound, on the other hand, is excluded, so using `log(1 - rng.uniform())` should work.
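The trick above can be sketched as follows (a minimal illustration with NumPy's `default_rng`, not the MRG generator discussed here): since `uniform()` samples from the half-open interval [0, 1), `1 - u` lies in (0, 1], so its log is always finite.

```python
import numpy as np

rng = np.random.default_rng(0)

# uniform() draws from [0, 1): 0 is included, 1 is excluded.
u = rng.uniform(size=1000)

# log(u) could be -inf if u == 0; log(1 - u) never can,
# because 1 - u lies in the half-open interval (0, 1].
safe = np.log(1 - u)
assert np.all(np.isfinite(safe))

# For contrast: log of an exact 0 is -inf.
with np.errstate(divide="ignore"):
    assert np.isneginf(np.log(0.0))
```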
So, @nouiz was right: the comments in the original MRG uniform say we are supposed to exclude 0, even though we never took particular care to do so.
Using `1 - uniform()` would be bad for backward compatibility, so I'm not against slightly shifting the normalization for float16.
OK, so the issue does not actually happen because sample 1 got mapped to float16(0) during the normalization; it happens because the int sample gets masked with `& 0x7fff`, so a sample that was guaranteed to be non-zero can still get mapped to int 0.
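To make the masking failure concrete (a hypothetical sample value, chosen to illustrate the mechanism rather than taken from the actual generator): any sample whose only set bits are above bit 14 survives the non-zero guarantee but is zeroed by the mask.

```python
# A non-zero integer sample can still become 0 after masking.
sample = 0x8000            # non-zero, but only bit 15 is set
masked = sample & 0x7fff   # keep the low 15 bits -> drops bit 15

assert sample != 0
assert masked == 0         # the "guaranteed non-zero" sample is now 0
```

After normalization this 0 becomes a uniform sample of exactly 0, which is what the normal generator then feeds to `log`.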
see: [lasagne-users] float16 issues
Mostly, the uniform generator includes the lower bound 0, while it should exclude it.
This causes the normal generator to compute log(0), so it returns an inf.
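A sketch of how a uniform sample of exactly 0 blows up a normal generator, assuming a Box-Muller-style transform (a common way to derive normals from uniforms; the actual MRG implementation may differ):

```python
import numpy as np

def box_muller(u1, u2):
    """Box-Muller transform: valid only when u1 lies in (0, 1]."""
    return np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)

# A healthy sample stays finite.
assert np.isfinite(box_muller(0.5, 0.1))

# u1 == 0 gives log(0) == -inf, so the "normal" sample is infinite.
with np.errstate(divide="ignore"):
    z = box_muller(0.0, 0.25)
assert np.isinf(z)
```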