
torch.manual_seed leaks memory #55768

@sniklaus

Description


🐛 Bug

Calling torch.manual_seed(...) (or torch.cuda.manual_seed_all(...)) leaks memory, and quite a bit of it. I reset the seed in my data loader for each sample, and over the course of a few hours tens of gigabytes are leaked; a sketch of that pattern follows.
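For illustration, a minimal sketch of the data-loader pattern described above, assuming a toy dataset (the class name and tensor shape are made up for this example, not taken from my actual code):

import torch
from torch.utils.data import Dataset

class SeededDataset(Dataset):  # hypothetical toy dataset
    def __len__(self):
        return 1_000_000

    def __getitem__(self, index):
        # reseeding the global RNG once per sample is the call that leaks
        torch.manual_seed(index)
        return torch.rand(3, 64, 64)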

To Reproduce

import torch

for i in range(1000000000):
    torch.manual_seed(i)

And watch the memory consumption of the process slowly creep up.
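To quantify the growth without external tools, here is a sketch using the standard library's resource module (Linux only; ru_maxrss is reported in kilobytes there). The loop bound and reporting interval are arbitrary:

import resource
import torch

for i in range(10_000_000):
    torch.manual_seed(i)
    if i % 1_000_000 == 0:
        # peak resident set size of this process so far, in KiB on Linux
        peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        print(f"iteration {i}: peak RSS {peak} KiB")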

Expected behavior

I expect the memory consumption to remain constant, as it does with for i in range(1000000000): random.seed(i).
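As a possible interim workaround, per-sample determinism can come from a local torch.Generator instead of reseeding the global RNG. This is only a sketch and I have not verified that it sidesteps the leak:

import torch

def make_sample(seed):
    # hypothetical helper: the generator is local to this call, so the
    # global RNG state is never touched
    gen = torch.Generator()
    gen.manual_seed(seed)
    return torch.rand(3, 64, 64, generator=gen)

x = make_sample(42)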

Environment

PyTorch version: 1.8.1+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: 10.0.0-4ubuntu1 
CMake version: version 3.19.6

Python version: 3.8 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: GeForce GTX 1650
Nvidia driver version: 460.39
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A

cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @anjali411 @pbelevich

Labels

high priority
module: memory usage - PyTorch is using more memory than it should, or it is leaking memory
module: random - Related to random number generation in PyTorch (rng generator)
triaged - This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
