RuntimeError: code is too big #125
Comments
How about reducing the batch size to 32?
I tried reducing the batch size. Even at batch size 8 the problem was still there.
Hi!
Same problem here with the same dataset, batch_size 24, segment_len 16000, using a V100.
When I used torch-1.0.1.post2, I got the same error.
Use PyTorch 1.0.0 and don't set batch_size to 1.
There seems to be a CPU memory issue.
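For context, a minimal sketch of the two DataLoader knobs people in this thread adjust: batch_size and num_workers. The dataset here is a hypothetical stand-in for the audio segments, not code from this repo. Each worker process holds its own copy of the dataset object, which is why fewer workers means less host (CPU) memory.

```python
# Hypothetical sketch: the two DataLoader settings discussed in this thread.
# batch_size controls per-step memory; num_workers controls how many extra
# processes (each with its own dataset copy) feed the GPU.
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for 64 audio clips of 16000 samples each (segment_len 16000).
dataset = TensorDataset(torch.randn(64, 16000))
loader = DataLoader(dataset, batch_size=8, num_workers=0, shuffle=True)

for (batch,) in loader:
    print(batch.shape)  # torch.Size([8, 16000])
    break
```

With num_workers=0 the data is loaded in the main process, which is the lowest-memory (if slowest) configuration.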
Tried this, but got:
Maybe this is useful: pytorch/pytorch#24174. Thanks!
Try using the latest pytorch-1.4.0: "conda install pytorch"
Should this still be an issue? I'm running PyTorch 1.5 and it still happens. Reducing num_workers does significantly reduce the memory required, but that's not an ideal solution. @rafaelvalle's solution still works, but it's a bit of a hack. Would it be possible to compute the stft/mel_spec with librosa instead? I implemented it quickly, but the two functions gave different results, and I don't currently have time to investigate why.
I got that too. Do you know how to resolve it now?
What the error means is that the data you're passing to the function resides on the GPU while the model is still on the CPU. If you're getting the error around the code @rafaelvalle shared, then most likely you forgot the second .cuda() call. Note there are three things happening in that snippet:
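As a generic illustration of the device-mismatch fix described above (this is not the actual snippet from the thread, and the model here is a hypothetical placeholder): the model and the input tensor must both be moved to the same device before the forward pass.

```python
# Hypothetical sketch of the device-mismatch fix: model and data must live
# on the same device. Falls back to CPU when no GPU is available.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(4, 2).to(device)  # 1) move the model's weights
x = torch.randn(3, 4).to(device)          # 2) move the input tensor
y = model(x)                              # 3) run the forward pass on-device
print(y.shape)  # torch.Size([3, 2])
```

Forgetting either of the first two moves produces exactly the "tensors on different devices" class of error, since both .to(device) (or .cuda()) calls are required.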
Oh, I see. Thank you so much!
Hi!
I'm getting a RuntimeError saying that the code is too big. I'm using the LJSpeech dataset and trying to train my own model. The machine has a GTX 1080 Ti GPU, and I installed PyTorch 1.0 with CUDA 10.0. Here's the traceback: