Currently there's a single random number generator shared between all GPUs. The way it is set up seems to indicate that we really want one generator per GPU, but that's not what's happening:
In `luaopen_libcutorch` we call `THCudaInit`, which itself calls `THRandom_manualSeed` for each GPU, each time creating a new generator and replacing the old one. We then call `THCRandom_seed`, which again replaces the previously created generator with a new one.
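For illustration, the net effect during init is roughly this (a simplified sketch, not the actual cutorch code; `replaceGenerator` is a stand-in for the tear-down-and-reseed that each of those calls performs):

```c
#include <cuda_runtime.h>
#include <stdlib.h>

/* Illustration only: stand-in for the single shared generator state. */
static unsigned long *sharedGenerator = NULL;

/* Stand-in for the destroy-old, create-and-seed-new behavior described above. */
static void replaceGenerator(unsigned long seed)
{
  free(sharedGenerator);
  sharedGenerator = (unsigned long *)malloc(sizeof(unsigned long));
  *sharedGenerator = seed;
}

void initSketch(unsigned long seed)
{
  int i, numDevices;
  cudaGetDeviceCount(&numDevices);
  for (i = 0; i < numDevices; i++) {
    cudaSetDevice(i);
    replaceGenerator(seed);  /* each iteration clobbers the previous generator */
  }
  replaceGenerator(seed);    /* the final THCRandom_seed clobbers it once more */
}
```

So regardless of the GPU count, we end up with exactly one generator that was created and seeded N+1 times.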
Before #45 we'd only call `THCRandom_manualSeed` in `cutorch_setDevice`, but that seems equally wrong.
If we want one generator per GPU, we should create N generators at initialization and then pick the one for the currently chosen device when generating random numbers (rough sketch below). Any thoughts?
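Concretely, something along these lines could work (a minimal sketch; `Generator`, `rng_init`, and `rng_current` are hypothetical names, not existing cutorch API):

```c
#include <cuda_runtime.h>
#include <stdlib.h>

/* Hypothetical generator state; the real one would hold the full RNG
   state rather than just a seed. */
typedef struct Generator {
  unsigned long seed;
} Generator;

static Generator **generators = NULL;
static int numDevices = 0;

/* Create one generator per GPU once, at initialization. */
void rng_init(unsigned long baseSeed)
{
  int i;
  cudaGetDeviceCount(&numDevices);
  generators = (Generator **)malloc(numDevices * sizeof(Generator *));
  for (i = 0; i < numDevices; i++) {
    generators[i] = (Generator *)malloc(sizeof(Generator));
    generators[i]->seed = baseSeed + i;  /* distinct seed per device */
  }
}

/* Pick the generator belonging to the currently selected device. */
Generator *rng_current(void)
{
  int device;
  cudaGetDevice(&device);
  return generators[device];
}
```

With this layout, `cutorch_setDevice` would only change which entry `rng_current` returns, and a manual-seed call would reseed just the current device's entry instead of replacing the shared generator.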