----------------- Here is the output: -----------------------------

name: TITAN X (Pascal)
major: 6 minor: 1
memoryClockRate (GHz) 1.531
pciBusID 0000:03:00.0
Total memory: 11.90GiB
Free memory: 11.75GiB
2017-12-18 21:29:52.976286: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0
2017-12-18 21:29:52.976294: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0:   Y
2017-12-18 21:29:52.976304: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: TITAN X (Pascal), pci bus id: 0000:03:00.0)
2017-12-18 21:30:26.079568: W tensorflow/core/common_runtime/bfc_allocator.cc:273] Allocator (GPU_0_bfc) ran out of memory trying to allocate 590.62MiB. Current allocation summary follows.
2017-12-18 21:30:26.079603: I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (256): Total Chunks: 2, Chunks in use: 0 512B allocated for chunks. 8B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
2017-12-18 21:30:26.079614: I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (512): Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
:
2017-12-18 21:30:26.079778: I tensorflow/core/common_runtime/bfc_allocator.cc:660] Bin for 590.62MiB was 256.00MiB, Chunk State:
2017-12-18 21:30:26.079787: I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x1020d600000 of size 1280
2017-12-18 21:30:26.079793: I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x1020d600500 of size 256
2017-12-18 21:30:26.079799: I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x1020d600600 of size 256
2017-12-18 21:30:26.079805: I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x1020d600700 of size 256
:
2017-12-18 21:30:26.103179: I tensorflow/core/common_runtime/bfc_allocator.cc:702] Stats:
Limit:         11984148890
InUse:         11630266368
MaxInUse:      11807180544
NumAllocs:            2566
MaxAllocSize:   3185049600
2017-12-18 21:30:26.103427: W tensorflow/core/common_runtime/bfc_allocator.cc:277] ***********_**************************************************************************************xx
2017-12-18 21:30:26.103454: W tensorflow/core/framework/op_kernel.cc:1192] Resource exhausted: OOM when allocating tensor with shape[32,240,240,84]
2017-12-18 21:30:36.103869: W tensorflow/core/common_runtime/bfc_allocator.cc:273] Allocator (GPU_0_bfc) ran out of memory trying to allocate 590.62MiB. Current allocation summary follows.
2017-12-18 21:30:36.103909: I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (256): Total Chunks: 2, Chunks in use: 0 512B allocated for chunks. 8B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
:
2017-12-18 21:30:36.115954: I tensorflow/core/common_runtime/bfc_allocator.cc:700] Sum Total of in-use chunks: 10.83GiB
2017-12-18 21:30:36.115965: I tensorflow/core/common_runtime/bfc_allocator.cc:702] Stats:
Limit:         11984148890
InUse:         11630266368
MaxInUse:      11807180544
NumAllocs:            2566
MaxAllocSize:   3185049600
2017-12-18 21:30:36.116127: W tensorflow/core/common_runtime/bfc_allocator.cc:277] ***********_**************************************************************************************xx
2017-12-18 21:30:36.116149: W tensorflow/core/framework/op_kernel.cc:1192] Resource exhausted: OOM when allocating tensor with shape[32,240,240,84]

Exception message: OOM when allocating tensor with shape[32,240,240,84]
	 [[Node: training/SGD/gradients/zeros_257 = Fill[T=DT_FLOAT, _class=["loc:@concatenate_5/concat"], _device="/job:localhost/replica:0/task:0/gpu:0"](training/SGD/gradients/Shape_258, training/SGD/gradients/zeros_257/Const)]]
	 [[Node: metrics/acc/Mean/_1699 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_37901_metrics/acc/Mean", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Caused by op u'training/SGD/gradients/zeros_257', defined at:
  File "/home/mike/bin/pycharm-community-2017.2.1/helpers/pydev/pydevd.py", line 1599, in <module>
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/home/mike/bin/pycharm-community-2017.2.1/helpers/pydev/pydevd.py", line 1026, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/home/mike/gitlab2/techcyte/python-dl/pap/apps/keras_trainer/test_densenet.py", line 49, in <module>
    validation_steps=117)
  File "/home/mike/e1/lib/python2.7/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "/home/mike/e1/lib/python2.7/site-packages/keras/engine/training.py", line 1926, in fit_generator
    self._make_train_function()
  File "/home/mike/e1/lib/python2.7/site-packages/keras/engine/training.py", line 960, in _make_train_function
    loss=self.total_loss)
  File "/home/mike/e1/lib/python2.7/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "/home/mike/e1/lib/python2.7/site-packages/keras/optimizers.py", line 156, in get_updates
    grads = self.get_gradients(loss, params)
  File "/home/mike/e1/lib/python2.7/site-packages/keras/optimizers.py", line 73, in get_gradients
    grads = K.gradients(loss, params)
  File "/home/mike/e1/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2310, in gradients
    return tf.gradients(loss, variables, colocate_gradients_with_ops=True)
  File "/home/mike/e1/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py", line 533, in gradients
    out_grads[i] = control_flow_ops.ZerosLikeOutsideLoop(op, i)
  File "/home/mike/e1/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 1323, in ZerosLikeOutsideLoop
    return array_ops.zeros(zeros_shape, dtype=val.dtype)
  File "/home/mike/e1/lib/python2.7/site-packages/tensorflow/python/ops/array_ops.py", line 1404, in zeros
    output = fill(shape, constant(zero, dtype=dtype), name=name)
  File "/home/mike/e1/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 1175, in fill
    result = _op_def_lib.apply_op("Fill", dims=dims, value=value, name=name)
  File "/home/mike/e1/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "/home/mike/e1/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2630, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/mike/e1/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1204, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[32,240,240,84]
	 [[Node: training/SGD/gradients/zeros_257 = Fill[T=DT_FLOAT, _class=["loc:@concatenate_5/concat"], _device="/job:localhost/replica:0/task:0/gpu:0"](training/SGD/gradients/Shape_258, training/SGD/gradients/zeros_257/Const)]]
	 [[Node: metrics/acc/Mean/_1699 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_37901_metrics/acc/Mean", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
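As a sanity check on the numbers (a quick sketch, independent of the training script above): the 590.62 MiB the allocator fails to find is exactly one float32 tensor of the failing shape [32, 240, 240, 84].

```python
# Size of the tensor the BFC allocator could not place:
# shape [32, 240, 240, 84], dtype float32 (4 bytes per element).
shape = (32, 240, 240, 84)

elements = 1
for dim in shape:
    elements *= dim  # 32 * 240 * 240 * 84 = 154,828,800 elements

mib = elements * 4 / float(2 ** 20)  # bytes -> MiB
print(mib)  # 590.625, which the log reports rounded as 590.62MiB
```

With InUse at 11630266368 of a 11984148890-byte Limit, only about 340 MiB was free, so this one gradient tensor could not fit. Since DenseNet-style concatenations keep many activations of this size alive for the backward pass, the usual first remedies are reducing the batch size (32 to 16 halves every per-batch activation and gradient tensor) or shrinking the 240x240 input resolution; the right choice depends on the model code, which isn't shown here.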