Patch committed to develop; the underlying problem is that the `.discard(uint64_t)` interface does not have enough bits for all possible lattice calculations.
This is a compromise: the earlier 40-bit shift was (erroneously) conservative about the number of randoms drawn in a quenched evolution, and restricted the usable volume too much.
Note that 5D fields are even worse.
After our call today, we concluded that 30 bits (about 1 billion draws per site) is enough for all but some rare quenched evolutions.
Hence I moved the develop branch to shift=30
and inserted an assert to check for overflow of the 64-bit skip.
I have found a bug which appears for large lattices when using a fast-discard RNG.
In
`lib/lattice/Lattice_rng.h`
(lines 149-167), if `site >= 2^24` then `skip << 40` overflows, so the (2^24 + n)th site gets an identical random sequence to the nth site.