MemoryError #2
Comments
How much memory do you have available? This model requires loading 2 large .npy files (utable and btable), both of which are > 2GB. These consist of almost 1 million word embeddings (each of which is 620-dimensional, single-precision float) as part of the skip-thoughts model. Given that these embeddings are only used for MS COCO training sentences, it's possible I can create a reduced-vocabulary version, where only ~30K words are necessary instead of 1M. I'll look into this.
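For reference, a quick sketch of why 8 GB can be tight here, plus one possible workaround. The arithmetic uses the figures from the comment above; the file names and the assumption that the .npy files hold plain float32 arrays are not verified against the repo:

```python
# Back-of-envelope memory estimate for the tables described above:
# ~1 million embeddings x 620 dimensions x 4 bytes (single precision).
n_words, dim, bytes_per_float = 1_000_000, 620, 4
table_bytes = n_words * dim * bytes_per_float
print(f"per table: {table_bytes / 1e9:.2f} GB")  # ~2.48 GB, so ~5 GB for utable + btable

# If the files really are plain float32 arrays (an assumption -- check
# with np.load first), memory-mapping reads rows from disk on demand
# instead of loading everything into RAM:
#   utable = np.load('utable.npy', mmap_mode='r')
#   vec = utable[12345]  # only this row is pulled from disk
```

With both tables plus the rest of the model resident, peak usage can easily exceed 8 GB, which would explain the failures reported below.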
I will test this now. Currently I am starting the test with 8 GB of RAM free.
I haven't done anything different yet. I'll update once I have, probably next week (I've got a paper deadline this week).
Hi ryankiros, we have also hit the memory error: In [2]: z = generate.load_all() We are running on a machine with 8 GB of RAM, but it failed. Thanks.
I seem to be getting a MemoryError when trying to generate a story and I'm not quite sure how to fix it. The code that I am using to get this is
Any help?