This package provides a minimalist interface for using noise functions as an alternative to pseudorandom number generators.
Noise functions, also known as counter-based RNGs (CBRNGs), are an alternative to traditional PRNGs with several advantages for parallel computation and procedural generation. While this package allows using noise functions as sequential RNGs, it mostly focuses on using them as streams of random values that can be indexed in any order, at constant cost and with constant memory overhead.
RandomNoise.jl defines an `AbstractNoise` type for all noise functions. Every subtype of `AbstractNoise` must implement a single method `noise(n, rng)` that returns the `n`-th term of the sequence generated by the noise function:
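As an illustration of that interface, here is a hypothetical user-defined noise function. The `XorShiftNoise` type and its xorshift bit mixer are toy examples invented for this sketch (they are not part of the package, and the mixer is far too weak for serious use); it assumes `noise` can be extended for custom `AbstractNoise` subtypes, as the interface description suggests.

```julia
using RandomNoise

# Hypothetical toy example: a custom noise function built on a simple
# xorshift bit mixer. Illustrates the AbstractNoise interface only;
# not suitable as an actual generator.
struct XorShiftNoise <: AbstractNoise
    seed::UInt32
end

function RandomNoise.noise(n::Integer, rng::XorShiftNoise)
    x = (n % UInt32) ⊻ rng.seed  # fold the index into 32 bits, mix in the seed
    x ⊻= x << 13
    x ⊻= x >> 17
    x ⊻= x << 5
    return x
end
```

Because the result depends only on `n` and the seed, calling `noise(n, XorShiftNoise(s))` twice with the same arguments always returns the same value.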
```julia
julia> sqn = SquirrelNoise5()
SquirrelNoise5(0x00000000)

julia> noise(1, sqn)
0xc895cb1d
```
Additionally, most noise functions can be given a seed. Unlike with PRNGs, changing the seed generates a completely different sequence rather than jumping to a different point in the same sequence:
```julia
julia> sqn = SquirrelNoise5(42)
SquirrelNoise5(0x0000002a)

julia> noise(1, sqn)
0xd6c56929
```
RandomNoise.jl currently provides three noise functions:

- `SquirrelNoise5`: a 32-bit noise function, originally by Squirrel Eiserloh
- `SquirrelNoise5x2`: a 64-bit noise function, consisting of two stacked `SquirrelNoise5`
- `Murmur3Noise`: a 32-bit noise function based on the Murmur3 hash function
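The first `SquirrelNoise5` value below is the one shown earlier in this document; the comments on the other two lines only restate the output widths from the list above (32-bit vs. 64-bit), not specific values:

```julia
using RandomNoise

noise(1, SquirrelNoise5())    # 0xc895cb1d -- a 32-bit (UInt32) value
noise(1, SquirrelNoise5x2())  # a 64-bit value, since this is the 64-bit variant
noise(1, Murmur3Noise())      # a 32-bit value
```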
In addition, support for Random123.jl's CBRNGs is provided via immutable wrapper types. Both RandomNoise and Random123 need to be loaded to use them.
```julia
using RandomNoise
using Random123

rng = Threefry2x()
tfn = ThreefryNoise(rng)
noise(0, tfn)
```
The noise functions implemented in this package are more lightweight, but they all fail some BigCrush tests (see Benchmarks below). Use the generators from Random123 if you want battle-tested ones.
Being able to generate random integers on demand is all well and good, but most of the time we'd like other types of random values, floating-point numbers for example. For this purpose, we provide a special type called `NoiseMap` that wraps a noise function together with a transform mapping the output of the noise function to the desired values:
```julia
julia> nm = NoiseMap(SquirrelNoise5(), sin)
NoiseMap{Float64, 1, SquirrelNoise5, typeof(sin)}(SquirrelNoise5(0x00000000), sin)
```
The resulting object can be called just like any function:
```julia
julia> nm(1)
0.013299689581968843

julia> nm.(1:5)
5-element Vector{Float64}:
  0.013299689581968843
  0.9728924552038383
 -0.9468531800274045
 -0.5680157365140597
 -0.03113989875496316
```
It can also be indexed like an array:
```julia
julia> nm[1]
0.013299689581968843
```
By default, noise maps are "one-dimensional", but they can be defined for an arbitrary number of dimensions:
```julia
julia> nm2 = NoiseMap(SquirrelNoise5(), sin, 2)
NoiseMap{Float64, 2, SquirrelNoise5, typeof(sin)}(SquirrelNoise5(0x00000000), sin)

julia> nm2(1, 1)
0.013299689581968843
```
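Since a `NoiseMap` is callable, a multidimensional map can be broadcast over coordinate ranges, which is convenient for procedural generation. A small sketch (the 4×4 grid shape is just an illustration):

```julia
using RandomNoise

nm2 = NoiseMap(SquirrelNoise5(), sin, 2)

# Fill a 4×4 grid by broadcasting the noise map over x and y coordinates.
# Each cell depends only on its coordinates, so cells can be computed
# independently and in any order.
grid = nm2.(1:4, (1:4)')

# grid[1, 1] is the same value as calling nm2(1, 1) directly.
```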
Besides a function, the transform can also be an instance of `NoiseUniform`, a special type for generating floating-point numbers uniformly distributed in the interval `[0, 1)`:
```julia
julia> nm3 = NoiseMap(SquirrelNoise5(), NoiseUniform{Float64}())
NoiseMap{Float64, 1, SquirrelNoise5, NoiseUniform{Float64}}(SquirrelNoise5(0x00000000), NoiseUniform{Float64}())
```
Finally, when passing a `UnivariateDistribution` from Distributions.jl, the output values are samples from that distribution:
```julia
julia> nm4 = NoiseMap(SquirrelNoise5(), Normal(0, 1))
NoiseMap{Float64, 1, SquirrelNoise5, Normal{Float64}}(SquirrelNoise5(0x00000000), Normal{Float64}(μ=0.0, σ=1.0))

julia> nm4.(1:5)
5-element Vector{Float64}:
 0.7841899388190731
 0.5263725651428768
 0.14096081624298265
 0.6549392209760272
 0.18955444371040095
```
Any noise function can be used as a sequential PRNG simply by creating a counter and incrementing it each time a new number is generated. Examples of such Counter Based RNGs (CBRNGs) can be found in the Random123.jl package.
The noise functions defined in this package can be made into CBRNGs by wrapping them in a `NoiseRNG` object:
```julia
julia> rng = NoiseRNG(SquirrelNoise5())
NoiseRNG{UInt32, SquirrelNoise5}(SquirrelNoise5(0x00000000), 0x00000000)

julia> rand(rng)
0.08778560161590576
```
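Assuming `NoiseRNG` subtypes `Random.AbstractRNG` (which the `rand(rng)` call above suggests, though this document does not state it explicitly), it should be usable anywhere a standard RNG is expected:

```julia
using RandomNoise
using Random

rng = NoiseRNG(SquirrelNoise5(42))

rand(rng, 1:6)         # a single draw from 1:6, e.g. a die roll
rand(rng, Float64, 3)  # a vector of three uniform floats in [0, 1)
shuffle(rng, 1:10)     # a random permutation of 1:10
```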
Note: since Random123.jl already defines `rand` for its CBRNGs, we do not support passing them to `NoiseRNG`.
This GDC 2017 talk by Squirrel Eiserloh is an excellent introduction to noise functions and their practical applications.
Traditional pseudorandom number generators (PRNGs) work by repeatedly applying a map that scrambles the bits of its input, starting from a given seed, so that the `n`-th term of the generated sequence is the result of applying that map `n` times to the seed.
This has several downsides. First, PRNGs need to hold a state that gets mutated every time a new number is generated, making them ill-suited to parallel workflows. Second, the generated numbers can only be computed in order, so if one wants to access a sequence of pseudorandom numbers out of order, the best way is often to just store it in memory. A third downside is that changing the seed of the generator usually just jumps to a different point in the sequence, so when using multiple seeds in parallel, it's possible for two generators to synchronize with each other.
Noise functions/CBRNGs, on the other hand, work by applying a bit-scrambling map only once: the `n`-th number in the sequence is computed as `noise(n)`.
These solve a lot of the problems with traditional PRNGs. They have no mutable state, and the sequence of numbers can be accessed out of order and at constant cost, making them ideal for parallel workflows. Furthermore, because the sequence of numbers can be accessed out of order, this removes the need to store them in memory for performance. Lastly, changing the seed of a noise function generates a completely different sequence, which means that we don't run into the problem of different generators catching up with each other.
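The out-of-order property is easy to check with the package's own API: because each value is a pure function of its index, evaluating the same indices in any order yields the same values.

```julia
using RandomNoise

nm = NoiseMap(SquirrelNoise5(), NoiseUniform{Float64}())

# Evaluate the first ten values forwards, then backwards.
forward  = [nm(i) for i in 1:10]
backward = [nm(i) for i in 10:-1:1]

# Same values regardless of evaluation order:
reverse(backward) == forward  # true
```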
The `SquirrelNoise5`, `SquirrelNoise5x2` and `Murmur3Noise` generators were tested with the BigCrush battery from RNGTest. Both variants of `SquirrelNoise5` fail 8 tests (Gap and MaxOft), while `Murmur3Noise` fails 10 (BirthdaySpacings, Gap, MaxOft and SumCollector). See `benchmarks/results` for more details.
This research used resources of the "Plateforme Technologique de Calcul Intensif (PTCI)" (http://www.ptci.unamur.be) located at the University of Namur, Belgium, which is supported by the FNRS-FRFC, the Walloon Region, and the University of Namur (Conventions No. 2.5020.11, GEQ U.G006.15, 1610468, RW/GEQ2016 et U.G011.22). The PTCI is member of the "Consortium des Équipements de Calcul Intensif (CÉCI)" (http://www.ceci-hpc.be).