### 🐛 Describe the bug

Calling `torch.randperm` with a custom generator modifies the state of the default RNG.
```python
import torch

seed = 42

torch.manual_seed(seed)
torch.randperm(2)  # uses the default RNG
print(torch.randn(1))

torch.manual_seed(seed)
torch.randperm(2, generator=torch.Generator().manual_seed(seed))  # custom RNG
print(torch.randn(1))  # should match the first print, but does not
```
Output:

```
tensor([-0.6382])
tensor([0.3367])
```
Expected output (if I understand the idea of separate RNGs correctly) should be:

```
tensor([-0.6382])
tensor([-0.6382])
```
The problem does not appear when calling `randperm` with `n=1`.
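As a workaround sketch (assuming the behavior reported above), wrapping the call in `torch.random.fork_rng()` saves the default RNG state on entry and restores it on exit, so any perturbation caused by `randperm` is undone:

```python
import torch

seed = 42
torch.manual_seed(seed)

# fork_rng snapshots the default RNG state and restores it when the
# context exits, shielding later draws from the randperm side effect.
with torch.random.fork_rng():
    perm = torch.randperm(2, generator=torch.Generator().manual_seed(seed))

# This draw now sees the same default RNG state as if randperm
# had never been called.
print(torch.randn(1))
```

This is a mitigation, not a fix: the custom generator should already isolate `randperm` from the default RNG without extra scaffolding.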
### Versions

```
PyTorch version: 2.2.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 14.1.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.1.0.2.5)
CMake version: Could not collect
Libc version: N/A

Python version: 3.9.18 (main, Feb 18 2024, 18:14:22) [Clang 15.0.0 (clang-1500.1.0.2.5)] (64-bit runtime)
Python platform: macOS-14.1.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU: Apple M1

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.2.0
[pip3] torchaudio==2.2.0
[pip3] torchvision==0.17.0
[conda] Could not collect
```