random number generators not identical across accelerators #390
Comments
In alpaka the generators are already separated from the distributions.
Points 1, 2 and 5 can be solved, but point 4 depends on point 3, which may be very hard. Edit: point 5 has been solved.
Thanks for the summary. Point 3 is one reason why I opened this pull request. I think it is not possible, or too hard to maintain, to have all generators on all platforms. One idea is to create something like a factory where the user can set properties like quality, performance, and memory usage and gets back the type of the best-fitting generator. If a platform has only one algorithm implemented, then the same generator will always be returned.
Such a factory might be the only viable option. It might be hard to find the correct properties to describe the generators.
Admittedly, a typical PIConGPU 0.4.0-dev simulation on a Tesla P100 currently uses (wastes) 18-25% of its main memory (3 out of 12/16 GByte) just for the RNG state. Can we do anything to allow backend-specific RNGs like the one we had before ComputationalRadiationPhysics/picongpu#2226 again (~50% memory footprint)? It would be totally fine if that RNG were only usable on a specific backend (e.g. via a less-specific wrapper/factory as above) and another implementation (and API) were used on other backends.
Cross-linking ComputationalRadiationPhysics/picongpu#2410, @psychocoderHPC's work-around for this.
```cpp
enum class Generator
{
    Default,
    MersenneTwister
    // , ...
};

// ...

auto genMersenneTwister = alpaka::rand::generator::create<
    alpaka::rand::Generator::MersenneTwister
>(
    acc,
    12345u,
    6789u
);
```
We discussed this in today's meeting. @sliwowitz is currently working on a separate RNG library on top of alpaka that will address this issue. This is therefore WONTFIX and will be closed once the new RNG library is public.
Currently the generation method for random numbers for an accelerator is fixed within alpaka.
To provide different generators per accelerator, depending on the user's needs, we should think about an interface change.
Why we need different generators:
e.g. PIConGPU provided several methods until my pull request switched it to the native alpaka generator, which removes the user's ability to control the quality of the RNG.