Allocating all memory, CUDA OOM #64284
Labels
comp:gpu (GPU related issues)
stale (marks the issue/PR stale, to be closed automatically if no activity)
stat:awaiting response (Status - Awaiting response from author)
TF 1.13 (Issues related to TF 1.13)
type:bug (Bug)
Issue type
Bug
Have you reproduced the bug with TensorFlow Nightly?
Yes
Source
source
TensorFlow version
1.13, 1.10
Custom code
Yes
OS platform and distribution
Linux, Ubuntu 20
Mobile device
No response
Python version
3.8
Bazel version
No response
GCC/compiler version
No response
CUDA/cuDNN version
11.7
GPU model and memory
V100, 34GB
Current behavior?
TF tries to allocate all available GPU memory (leading to a CUDA OOM) even though no function that should place any data on the GPU has been called.
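For context (not the reporter's code): TensorFlow's default allocator maps nearly all GPU memory as soon as the first GPU device is initialized, which can look like an unexpected full allocation. A common workaround sketch is to opt in to on-demand allocation before TensorFlow touches the GPU, e.g. via the `TF_FORCE_GPU_ALLOW_GROWTH` environment variable:

```python
import os

# Must be set before TensorFlow is imported / any GPU is initialized.
# Tells TF's GPU allocator to grow memory on demand instead of
# reserving nearly all device memory up front.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# import tensorflow as tf  # import only after the variable is set
```

The same effect can be achieved in code with `tf.config.experimental.set_memory_growth(gpu, True)` for each device returned by `tf.config.list_physical_devices('GPU')`, again before any GPU op runs. Whether this resolves the reported OOM depends on the actual workload.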
Standalone code to reproduce the issue
Relevant log output