
Model not running on GPU & out of memory error #9

Open
jpcompartir opened this issue Jan 4, 2022 · 0 comments

Comments

@jpcompartir

Hi,

Thanks so much for this repo, and please forgive me if this is trivial. I've been trying for a while now to run the model on Google Colab, and I'm running into two separate issues that I think may be linked. The first is that when I load the model in a GPU runtime, it defaults to the CPU.

After running:
```python
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
```

There's an error when trying to run goemotions(texts):
"RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select".
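That error usually means the model's weights are on the GPU while the input tensors (e.g. token-index tensors) are still on the CPU: `model.to(device)` moves only the model, not the inputs. A minimal sketch of a generic helper that moves a batch (tensor, dict, or list of tensors) onto the same device as the model — `move_to_device` is a hypothetical helper, not part of this repo's API:

```python
def move_to_device(batch, device):
    """Recursively move anything with a .to() method (e.g. torch tensors)
    onto `device`, preserving dict/list/tuple structure."""
    if hasattr(batch, "to"):
        return batch.to(device)
    if isinstance(batch, dict):
        return {k: move_to_device(v, device) for k, v in batch.items()}
    if isinstance(batch, (list, tuple)):
        return type(batch)(move_to_device(v, device) for v in batch)
    return batch
```

Usage would be something like `inputs = move_to_device(inputs, device)` right before the forward pass, so the index tensors passed to `index_select` live on the same device as the embedding weights.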

Second, when running goemotions over more than a few thousand rows on Colab in a high-RAM runtime, I hit an out-of-memory error. I'm wondering if this is a batching problem in the data loader? I'll keep looking for solutions and hope to close this issue myself, but in the meantime any help is much appreciated, thanks!
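If the whole text column is being fed to the pipeline at once, peak memory grows with the dataset size. A common workaround is to call the pipeline in fixed-size chunks and accumulate the results — this is a generic sketch (`predict_fn` standing in for `goemotions`), not the repo's actual batching code:

```python
def predict_in_batches(predict_fn, texts, batch_size=32):
    """Run predict_fn over texts in fixed-size chunks so peak memory is
    bounded by batch_size rather than by the full dataset."""
    results = []
    for i in range(0, len(texts), batch_size):
        results.extend(predict_fn(texts[i:i + batch_size]))
    return results
```

With something like `predict_in_batches(goemotions, df["text"].tolist(), batch_size=32)`, only one batch of activations is alive at a time; lowering `batch_size` trades speed for memory.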
