resource exhausted error #10
2018-02-21 01:03:53.047199: W tensorflow/core/common_runtime/bfc_allocator.cc:273] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.49GiB. Current allocation summary follows.
Try running with the code at line 44 in 4f2e3d9.
What comes in as h is (keysize, batchsize), right? And the batch gets expanded along the last axis. In that case, tiling by h.shape[0] seems wrong to me.
Considering the memory size...
while_loop is called a loop, but it should parallelize the iterations automatically.
I see. Then the question is what while_loop should iterate over. Also, plain broadcasting supposedly saves memory compared to explicit tiling, but I got "Dst tensor is not initialized."
Can this be broadcast?
Perhaps. I'll try it tonight.
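To make the tile-vs-broadcast point concrete, here is a minimal sketch (not the repo's lookup code; shapes are borrowed from the OOM trace below, and the API is recent TF2-style for brevity). Broadcasting removes the explicit Tile node, but the subtraction still materializes a (batch, num_keys, key_size) intermediate; the squared-norm expansion avoids the 3-D tensor entirely.

```python
import tensorflow as tf

# Illustrative shapes taken from the OOM trace in this issue.
batch, num_keys, key_size = 32, 24487, 512
h = tf.random.normal([batch, key_size])        # query keys
keys = tf.random.normal([num_keys, key_size])  # DND keys

# Tile version: materializes a (batch, num_keys, key_size) tensor (~1.49 GiB here).
# h_tiled = tf.tile(h[:, None, :], [1, num_keys, 1])
# dist = tf.reduce_sum((h_tiled - keys[None, :, :]) ** 2, axis=-1)

# Broadcast version: drops the Tile op, but the subtraction still produces the
# same-sized intermediate, so by itself it does not fix the OOM.
# dist = tf.reduce_sum((h[:, None, :] - keys[None, :, :]) ** 2, axis=-1)

# Squared-norm expansion: ||h - k||^2 = ||h||^2 - 2 h.k + ||k||^2.
# Only (batch, num_keys) tensors are created, so the 3-D intermediate disappears.
dist = (tf.reduce_sum(h ** 2, axis=1, keepdims=True)
        - 2.0 * tf.matmul(h, keys, transpose_b=True)
        + tf.reduce_sum(keys ** 2, axis=1)[None, :])
```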
I hope transferring data between GPUs doesn't take a long time.
Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
Reference: https://stackoverflow.com/questions/35892412/tensorflow-dense-gradient-explanation
Now running with half tiled, half broadcasted.
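For context, that warning typically shows up when the gradient of an op like tf.gather, which is an IndexedSlices object holding only the touched rows, gets converted into a dense tensor the size of the whole variable. A minimal sketch of the pattern, with made-up shapes (not the repo's code) and the TF2 eager API for brevity:

```python
import tensorflow as tf

keys = tf.Variable(tf.random.normal([100000, 512]))  # large DND-style table
idx = tf.constant([3, 17, 42])

with tf.GradientTape() as tape:
    picked = tf.gather(keys, idx)        # sparse read of 3 rows
    loss = tf.reduce_sum(picked ** 2)

grad = tape.gradient(loss, keys)         # IndexedSlices: data for only 3 rows
dense = tf.convert_to_tensor(grad)       # densifying materializes all 100000 x 512 entries
```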
I'm unable to run the model on any environment but CartPole due to ResourceExhausted errors. Any tips?
@jlindsey15 Thank you for the comment! If you have multiple GPUs, splitting the DNDs across the devices will solve the issue.
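For anyone hitting the same wall, here is a minimal sketch of what splitting the DNDs across devices could look like. The names and sizes (num_actions, dnd_capacity, key_size) are assumptions for illustration, not values from this repo:

```python
import tensorflow as tf

# Hypothetical: one DND (key/value memory) per action, placed on alternating
# GPUs so the large lookup tensors are allocated on different devices.
num_actions, dnd_capacity, key_size = 4, 100000, 512
gpus = ["/GPU:0", "/GPU:1"]

dnd_keys, dnd_values = [], []
for a in range(num_actions):
    with tf.device(gpus[a % len(gpus)]):
        dnd_keys.append(tf.Variable(tf.zeros([dnd_capacity, key_size]),
                                    name="dnd_keys_%d" % a))
        dnd_values.append(tf.Variable(tf.zeros([dnd_capacity]),
                                      name="dnd_values_%d" % a))
```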
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[32,24487,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: deepq/DND/lookup/Tile = Tile[T=DT_FLOAT, Tmultiples=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](deepq/DND/lookup/Tile/input, deepq/DND/lookup/Tile/multiples)]]
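As a sanity check on the numbers: the tensor in that Tile node holds 32 × 24487 × 512 float32 values, i.e. 32 · 24487 · 512 · 4 bytes ≈ 1.60 × 10⁹ bytes ≈ 1.49 GiB, which matches the allocation the bfc_allocator reports at the top of this issue.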