Warning while creating Session on Mac OS X: can't determine number of CPU cores #27

Closed
delip opened this issue Nov 9, 2015 · 4 comments

Comments

delip commented Nov 9, 2015

While creating a Session, I get this warning (using the pre-built whl on OS X). Any ideas?

s = tf.Session()
can't determine number of CPU cores: assuming 4
I tensorflow/core/common_runtime/local_device.cc:25] Local device intra op parallelism threads: 4
can't determine number of CPU cores: assuming 4
I tensorflow/core/common_runtime/local_session.cc:45] Local session inter op parallelism threads: 4
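
For context, these log lines come from TensorFlow choosing default sizes for its intra-op and inter-op thread pools when it cannot detect the core count. As a stopgap until fixed binaries ship, one possible workaround, sketched below with illustrative thread counts of 4, is to pass the pool sizes explicitly through ConfigProto; this makes the values deliberate rather than guessed, though it may not silence the log line itself.

import tensorflow as tf

# Set the thread-pool sizes explicitly instead of letting TensorFlow
# guess the number of CPU cores (the value 4 is only illustrative).
config = tf.ConfigProto(intra_op_parallelism_threads=4,
                        inter_op_parallelism_threads=4)
s = tf.Session(config=config)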

vrv commented Nov 9, 2015

Hi delip, we fixed this in 430a054; the fix will be included when we mint the next version of the pip binaries. Thanks for your report!

vrv changed the title from "Warning while creating Session" to "Warning while creating Session on Mac OS X: can't determine number of CPU cores" Nov 9, 2015

delip commented Nov 9, 2015

Excellent!

@gynsolomon

thanks!

delip commented Nov 17, 2015

Just want to confirm that this warning went away with the latest build. I will be closing this. Thanks!

delip closed this as completed Nov 17, 2015
ilblackdragon added a commit to ilblackdragon/tensorflow that referenced this issue Mar 9, 2016
benoitsteiner pushed a commit to benoitsteiner/tensorflow that referenced this issue May 22, 2017
Made xsmm_conv2d.cc up-to-date with TF/master, avoid double-free in case of LIBXSMM_DNN_WARN_FALLBACK, use libxsmm_hash instead of std::hash, code cleanup (tensorflow#27)

* Fixed AVX-512 intrinsic implementation.

* OR'ed LIBXSMM_DNN_CONV_OPTION_OVERWRITE into convolution options, which folds zeroing the input buffer on first use. This removes the call to libxsmm_dnn_zero_buffer in case of LIBXSMM_DNN_COMPUTE_KIND_FWD.

* Rely on libxsmm_hash rather than std::hash. Brought xsmm_conv2d.cc up-to-date with TF/master.

* Code cleanup: use LIBXSMM_DNN_CONV_OPTION_WU_EXT_FILTER_REDUCE_OVERWRITE rather than assembling the option from separate flags.

* Avoid destroying the handle in the case of LIBXSMM_DNN_WARN_FALLBACK, since the next iteration may otherwise double-delete the same handle. One would need to update the handle-cache to allow destruction at this point. However, all handles are destroyed when TF terminates (cache cleanup).
benoitsteiner added a commit that referenced this issue May 22, 2017
* Fixed AVX-512 intrinsic layer (sparse_matmul_op.h). Incorporated LIBXSMM_DNN_CONV_OPTION_OVERWRITE. (#26)

* Made xsmm_conv2d.cc up-to-date with TF/master, avoid double-free in case of LIBXSMM_DNN_WARN_FALLBACK, use libxsmm_hash instead of std::hash, code cleanup (#27)

* Configure LIBXSMM with default arguments (#28)

* Rely on default configuration arguments, and thereby lower the dependence from LIBXSMM internals.
benoitsteiner pushed a commit to benoitsteiner/tensorflow that referenced this issue Jun 15, 2017
Made xsmm_conv2d.cc up-to-date with TF/master, avoid double-free in case of LIBXSMM_DNN_WARN_FALLBACK, use libxsmm_hash instead of std::hash, code cleanup (tensorflow#27)
tarasglek pushed a commit to tarasglek/tensorflow that referenced this issue Jun 20, 2017
lukeiwanski referenced this issue in codeplaysoftware/tensorflow Oct 26, 2017
* [OpenCL] Registers ApplyMomentum

* Deleting code

* Fixed errors (still runs on CPU)

* Changed implementation of ApplyMomentum
whchung referenced this issue in ROCm/tensorflow-upstream Apr 10, 2018
hfp added a commit to hfp/tensorflow that referenced this issue Jan 4, 2019
Made xsmm_conv2d.cc up-to-date with TF/master, avoid double-free in case of LIBXSMM_DNN_WARN_FALLBACK, use libxsmm_hash instead of std::hash, code cleanup (tensorflow#27)
eggonlea pushed a commit to eggonlea/tensorflow that referenced this issue Mar 12, 2019
Add Boost.Locale as a dependency for the build