'RuntimeError: CUDA out of memory' on P100 #11

Closed
HalleyStarbun opened this issue Oct 23, 2021 · 2 comments


@HalleyStarbun

Hi there,

I'm using Colab Pro to do some ML experiments and often fail to initialise the RN50x4 CLIP model. Sometimes I even have trouble getting the RN101 model to load up. RN50x16 has never worked in my experience. The notebook is running on a P100 GPU and mentions that "x4 and x16 models for CLIP may not work reliably on lower-memory machines".

I'm just wondering if I need an even more capable GPU (in terms of VRAM) or if there is some problem with the code? I'm not an expert with TensorFlow/ML, so apologies if there's a simple solution to this.

@somewheresy
Owner

Hey there,

Both the RN50x4 and RN50x16 models are too big to run reliably on Google Colab's available runtimes. They are an option for people who have their own rigs with more VRAM.
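
To illustrate that constraint, here is a minimal sketch (not code from this repo) of how one could query free VRAM with PyTorch and fall back to a smaller CLIP backbone when the larger ones will not fit. The `pick_clip_model` helper and the per-model gigabyte thresholds are assumptions for illustration only; the model names and `clip.load` call come from the openai/CLIP package.

```python
# Sketch: choose a CLIP backbone that fits in the GPU's free memory.
# The VRAM thresholds below are rough assumptions, not measured values.
import torch
import clip

def pick_clip_model(preferred="RN50x16"):
    """Fall back to a smaller CLIP backbone when free VRAM is insufficient."""
    if not torch.cuda.is_available():
        return "RN50"  # CPU fallback: use the smallest backbone
    free_bytes, _total_bytes = torch.cuda.mem_get_info()
    free_gb = free_bytes / 1024**3
    # Assumed headroom needed for the model plus the synthesis workload.
    approx_needs_gb = {"RN50x16": 14, "RN50x4": 10, "RN101": 8, "RN50": 6}
    for name in [preferred, "RN50x4", "RN101", "RN50"]:
        if free_gb >= approx_needs_gb.get(name, 6):
            return name
    return "RN50"

model_name = pick_clip_model()
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load(model_name, device=device)
print(f"Loaded CLIP backbone: {model_name}")
```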

@HalleyStarbun
Author

Ah, sorry. The wording made it sound like it was possible on Colab. Thank you.
