CUDA_ERROR_OUT_OF_MEMORY: out of memory with RTX 2070 #25337
Please make sure that this is a build/installation issue. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub.
Describe the problem
After applying this code, the error output is attached below in the "Any other info / logs" section.
Provide the exact sequence of commands / steps that you executed before running into the problem
Any other info / logs
Using TensorFlow backend.
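A workaround commonly suggested for this error on RTX cards (noted here as an assumption, not a quote from this thread) is to let TensorFlow allocate GPU memory on demand instead of reserving nearly all of it at startup. A minimal sketch, assuming TensorFlow 1.x with the Keras backend shown in the log above:

```python
import tensorflow as tf

# Grow the GPU memory pool on demand instead of pre-allocating almost
# all of it, which is a frequent trigger for CUDA_ERROR_OUT_OF_MEMORY.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
tf.keras.backend.set_session(sess)  # make Keras reuse this session
```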
Has this problem been solved?
I am facing exactly the same problem, running in a Python virtual environment.
I am not running a heavy model; I am running simple code just to check whether the GPU is detected and used, but I get many lines of errors that all look like the following line:
I restarted PyCharm and rebooted my system, but there was no change. Any help, please?
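A minimal "is the GPU visible" check of the kind described above might look like this (a sketch, assuming TensorFlow 1.x; the actual script from this comment was not posted):

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# List every device TensorFlow can see; a healthy setup should include
# a '/device:GPU:0' entry for the RTX 2070.
print(device_lib.list_local_devices())

# True only if TensorFlow was built with CUDA and can actually use the GPU.
print(tf.test.is_gpu_available())
```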
I uninstalled all related packages and libraries, removed the graphics driver with DDU, and reinstalled everything. I don't know why this works, but my best guess is that it has something to do with the graphics driver. When I upgraded to a 2070 Super I also had to reinstall the driver to make it work.
I think this happens because of a property of RTX graphics cards: a certain portion of RTX 20xx graphics memory (2.9 GB of the 7994 MB on an RTX 2070S) is only available when using the float16 data type in TensorFlow. If you want to allocate the whole card's memory, you must use both data types, float32 and float16.
opt = tf.keras.optimizers.Adam(1e-4)  # Adam optimizer, learning rate 1e-4
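Presumably the fragment above came from a mixed-precision setup. A minimal sketch of such a setup using the Keras mixed-precision API (assuming TF 2.4+, which is newer than this thread, so treat the exact calls as an assumption):

```python
import tensorflow as tf

# Run eligible ops in float16 while keeping variables in float32, so the
# float16-only portion of RTX 20xx memory mentioned above can be used.
tf.keras.mixed_precision.set_global_policy('mixed_float16')

opt = tf.keras.optimizers.Adam(1e-4)
# Dynamic loss scaling protects the float16 gradients from underflow.
opt = tf.keras.mixed_precision.LossScaleOptimizer(opt)
```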