While it's a joy to use CUDA.jl for GPU computing with Julia, I find the documentation on working with random numbers a bit confusing and lacking, in particular on how to draw reproducible random numbers. Here is a list of specific points that could be improved; I am willing to submit a PR.
- `rand!(array)` or `randn!(array)` will use CURAND by default and not `CUDA.default_rng()`, but this is not mentioned in the Array programming section of the docs. (A sketch after this list shows how to select either generator explicitly.)
- While launching a kernel, a unique seed is passed from the host to `CUDA.default_rng()` to ensure kernels draw different random numbers on multiple invocations. This is documented, but the seed is made with `Random.default_rng()`, which means that one way to get reproducible results is to seed the `TaskLocalRNG()`, which I found amusing (illustrated below).
- Manual seeding is possible with `seed!()`, but it seeds both `CUDA.default_rng()` and `CURAND.default_rng()` at the same time?
- I can produce reproducible random number arrays by manually seeding `CUDA.default_rng()`:
```julia
using CUDA, Random

rng = CUDA.default_rng()
Random.seed!(rng, 12345)
randn(rng, 10)
```
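A quick check of the above (assuming the native `CUDA.RNG` can be re-seeded repeatedly, like a CPU RNG):

```julia
Random.seed!(rng, 12345)
a = randn(rng, 10)
Random.seed!(rng, 12345)
b = randn(rng, 10)
@assert a == b  # host-side draws from CUDA.default_rng() are reproducible
```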
Inside kernels, however, this does not work (because the device RNG is re-seeded on each launch?). With

```julia
function _kernel(p)
    i = threadIdx().x
    p[i] = randn()
    return nothing
end

p = CUDA.zeros(Float32, 4)
Random.seed!(rng, 12345)
@cuda threads=length(p) _kernel(p)
```

repeated launches don't give the same random numbers. However, if I seed the `TaskLocalRNG()`, they do:
```julia
Random.seed!(Random.default_rng(), 12345)
@cuda threads=length(p) _kernel(p)
```
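My understanding of why this works (the snippet below is an illustration of the mechanism, not CUDA.jl's actual code): at each launch the host derives the device seed from `Random.default_rng()`, i.e. the task-local RNG, so fixing that RNG fixes the device seed.

```julia
# Hypothetical stand-in for how the host derives a per-launch device seed:
host_seed() = rand(Random.default_rng(), UInt32)

Random.seed!(Random.default_rng(), 12345)
s1 = host_seed()
Random.seed!(Random.default_rng(), 12345)
s2 = host_seed()
@assert s1 == s2  # same host RNG state, same device seed, same kernel draws
```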
- Similarly, this should give the same random numbers using CURAND, but it does not?

```julia
CUDA.seed!(12345)
CUDA.randn(4)
```

- It's unclear whether it is possible to use CURAND-backed RNGs inside kernels.
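For the Array programming docs, this is the kind of host-side sketch I have in mind, selecting each generator explicitly (this reflects my reading of the API, so the exact calls should be double-checked):

```julia
using CUDA, Random

a = CUDA.zeros(Float32, 4)

rand!(a)  # by default this goes through CURAND, not CUDA.default_rng()

# Explicitly pick and seed a generator instead:
curand_rng = CUDA.CURAND.default_rng()  # library-backed generator (host use only)
native_rng = CUDA.default_rng()         # native generator, also usable in kernels

Random.seed!(curand_rng, 12345)
rand!(curand_rng, a)

Random.seed!(native_rng, 12345)
rand!(native_rng, a)
```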
I went through the following Discourse post:
https://discourse.julialang.org/t/kernel-random-numbers-generation-entropy-randomness-issues/105637/4
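Finally, the pattern I would ideally like to see documented for reproducible kernels is seeding the device-side RNG inside the kernel itself, so reproducibility does not hinge on host RNG state. I believe the native RNG supports `Random.seed!` in device code, but I am not certain, so treat this as a sketch:

```julia
using CUDA, Random

function _seeded_kernel(p, seed)
    Random.seed!(seed)  # seed this kernel's device-side default RNG
    i = threadIdx().x
    p[i] = randn()
    return nothing
end

p = CUDA.zeros(Float32, 4)
@cuda threads=length(p) _seeded_kernel(p, 12345)
a = Array(p)
@cuda threads=length(p) _seeded_kernel(p, 12345)
@assert Array(p) == a  # expected: identical draws across launches
```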