Bug about Dropout CUDA Kernel #68909
Labels
module: cuda
Related to torch.cuda, and CUDA support in general
module: nn
Related to torch.nn
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
In Dropout.cu (https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cuda/Dropout.cu#L35), at line 56 you set `bool gridxvec_loop_state = 0;`. When `gridxvec_loop_state` is 0, you generate 4 random numbers; otherwise, you want to reuse the last 2 of those random values. Since `gridxvec_loop_state` is initialized to 0, the code enters the `if` branch and generates 4 random numbers, but `gridxvec_loop_state` is never changed, so the `else` branch is never entered and the last 2 random values are never used. In my opinion, it should be like this:
cc @albanD @mruberry @jbschlosser @walterddr @ngimel