tcmalloc: large alloc on Colab and Tensorflow killed on local machine due to over consumption of RAM #7652
Comments
Hello. You can find HDF5 generators on my GitHub account. Please check them out, use them, and let me know if you are still having problems.
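The generators mentioned above are not shown in the thread, but the idea behind an HDF5 generator is to read the dataset in slices rather than loading it into RAM all at once. Below is a minimal, hypothetical sketch of that pattern; the function name and shapes are illustrative, and an in-memory NumPy array stands in for an `h5py.Dataset` (with h5py, slicing reads only the requested chunk from disk):

```python
import numpy as np

def batch_generator(dataset, batch_size):
    """Yield batches by slicing `dataset` lazily.

    `dataset` can be any indexable object; with h5py this would be an
    h5py.Dataset, where each slice is read from disk on demand, so the
    full array never has to fit in RAM.
    """
    n = len(dataset)
    while True:  # Keras-style generators loop indefinitely
        for start in range(0, n, batch_size):
            yield np.asarray(dataset[start:start + batch_size])

# Demo with an in-memory array standing in for an h5py dataset.
data = np.arange(10)
gen = batch_generator(data, batch_size=4)
first = next(gen)   # -> array([0, 1, 2, 3])
second = next(gen)  # -> array([4, 5, 6, 7])
```

With h5py you would open the file with `h5py.File(path, "r")` and pass one of its datasets in place of `data`; the generator itself does not change.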
Hello, I very often get the tcmalloc error when running the code on Colab from Python files (say, train.py), but the same code (the contents of train.py copied into a cell) produces no such error when run from the cell. I would like to know the cause of this behaviour.
Is this still an issue? Please close this thread if your issue was resolved. Thanks!
@ravikyram Yes. This is still the same issue.
Please let us know which pretrained model you are using and share the related code. Thanks!
For example, this issue still persists when I try to run this model: https://github.com/dorarad/gansformer
System information
I ran the following code in an IPython notebook on both my local machine (local GPU) and Google Colab:
Describe the problem
The TensorFlow API always tries to consume the maximum available RAM, even though I have a GPU, and the kernel gets killed while training my deep learning algorithm. I consulted multiple online sources (1, 2, 3, 4, 5, 6) and tried the following things:
However, none of these suggestions solved the problem.
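The suggestions the original poster tried are not listed, but the standard mitigations for TensorFlow grabbing all available memory look roughly like the sketch below. Note that these settings govern GPU memory allocation, not host RAM; the values and the 0.8 cap are illustrative, and tcmalloc "large alloc" warnings about host RAM are more often addressed by streaming data with generators instead of loading the full dataset:

```python
import tensorflow as tf

# TensorFlow 2.x: allocate GPU memory incrementally instead of
# reserving the whole card up front.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)

# TensorFlow 1.x equivalent:
# config = tf.ConfigProto()
# config.gpu_options.allow_growth = True
# config.gpu_options.per_process_gpu_memory_fraction = 0.8  # optional hard cap
# sess = tf.Session(config=config)
```

If the model or dataset itself requires more memory than the machine has, these options will not help; reducing batch size or streaming the data is then the only fix.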
Source code / logs
The error log is very long, so I am attaching it as a separate text file here:
ERROR_LOG.txt