OOM on GTX 1080 #10
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcublas.so locally
W tensorflow/core/common_runtime/bfc_allocator.cc:270] ******************************************_______xxx
Caused by op u'gradients/transpose_grad/transpose', defined at: ...
which was originally created as op u'transpose', defined at: ...
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[20,65536,64]
What batch size are you using?
Currently using 20:
python style.py --style examples/style/the_scream.jpg --checkpoint-dir saver --test examples/content/thecity.jpg --test-dir test --content-weight 1.5e1 --checkpoint-iterations 1000 --batch-size 20
Good callout! My bad, I copied and pasted from the README and should have taken a closer look. Modified to a smaller batch and it seems to be working. Sorry 'bout that.
Yeah, a batch size of 20 fits on the GTX Titan X, which has 12 GB of VRAM, but the GTX 1080 has 8 GB I think. A batch size of ~13 is probably the largest that will work.
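For a rough sanity check, here's a back-of-the-envelope estimate of the tensor named in the error. The shape [20,65536,64] comes from the traceback above; the float32 (4 bytes per element) assumption and the helper name `tensor_bytes` are mine. A single such tensor is only ~0.3 GiB, so the OOM comes from many activations and their gradients being held at once; total usage scales roughly linearly with batch size, which is consistent with 20 fitting in 12 GB and ~13 fitting in 8 GB (13/20 ≈ 8/12).

```python
# Estimate the size of the failing activation tensor at various batch sizes.
# Shape [batch, 65536, 64] is from the OOM message; float32 is an assumption.
BYTES_PER_FLOAT32 = 4

def tensor_bytes(batch_size, spatial=65536, channels=64):
    """Bytes held by one [batch, spatial, channels] float32 tensor."""
    return batch_size * spatial * channels * BYTES_PER_FLOAT32

GIB = 1024 ** 3
for batch in (20, 13):
    print(f"batch {batch:2d}: {tensor_bytes(batch) / GIB:.2f} GiB per tensor")
```

This only bounds one tensor; the BFC allocator map printed before the error shows how many such blocks were live when allocation failed.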
Similar to issue #9, I'm hitting a ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[20,65536,64].
This is with a GTX 1080 card (8 GB, 7.4 GB available to TF), CUDA 8, cuDNN 5. Tried training with a smaller image (100 KB, and a small 50 KB style image); will add logs below.