
W tensorflow/core/framework/allocator.cc:107] Allocation of 1200000000 exceeds 10% of system memory. #3

Open
HuangCongQing opened this issue Aug 8, 2019 · 2 comments

Comments

@HuangCongQing (Owner)

[screenshot: the allocation warning]

@HuangCongQing (Owner, Author)

batch_size is too large; reduce the configured batch_size.

https://blog.csdn.net/qq_38633187/article/details/88778515
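Reducing batch_size works because the allocation the warning reports grows linearly with it. A minimal sketch of the arithmetic, assuming float32 tensors (the 300,000-floats-per-sample shape below is a made-up example, not taken from this issue):

```python
def batch_bytes(batch_size: int, floats_per_sample: int, bytes_per_float: int = 4) -> int:
    """Approximate size in bytes of one float32 batch tensor."""
    return batch_size * floats_per_sample * bytes_per_float

def max_batch_size(system_ram_bytes: int, floats_per_sample: int, bytes_per_float: int = 4) -> int:
    """Largest batch_size whose tensor stays under the 10%-of-RAM warning threshold."""
    return int(0.10 * system_ram_bytes) // (floats_per_sample * bytes_per_float)

# A hypothetical 300,000-float sample at batch_size=1000 gives exactly the
# 1,200,000,000-byte allocation from the warning in the title.
print(batch_bytes(1000, 300_000))  # 1200000000

# Halving batch_size halves the allocation.
print(batch_bytes(500, 300_000))   # 600000000

# With e.g. 8 GB of system RAM, this caps the batch_size that avoids the warning.
print(max_batch_size(8 * 1024**3, 300_000))
```

The same estimate can be run against the real per-sample shape of the model to pick a batch_size below the threshold rather than guessing.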

@HuangCongQing (Owner, Author) commented Aug 9, 2019

The run hangs.


Epoch 1/20
2019-08-09 20:43:34.476604: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 1800065000 Hz
2019-08-09 20:43:34.477186: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x558815ab0680 executing computations on platform Host. Devices:
2019-08-09 20:43:34.477296: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
2019-08-09 20:44:05.509642: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
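The one-time XLA warning in the log spells out its own remedy. As a sketch, the environment variables it names can be exported before relaunching training (the `train.py` script name is a hypothetical placeholder):

```shell
# Enable XLA:CPU clustering, as the warning suggests.
export TF_XLA_FLAGS=--tf_xla_cpu_global_jit
# Optionally confirm XLA is active via HLO profiling, per the same message.
export XLA_FLAGS=--xla_hlo_profile
# Then rerun the training script, e.g.: python train.py
```

Note this only silences the XLA warning; it is unrelated to the memory-allocation warning, which is addressed by reducing batch_size.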

[screenshot: training output where the run hangs]
