
Error when bazel building after pull down newest master branch #14610

Closed
ghost opened this issue Nov 16, 2017 · 21 comments


@ghost commented Nov 16, 2017

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (use command below): 1.3.0
  • Python version: 2.7.12
  • Bazel version (if compiling from source): 0.5.4
  • CUDA/cuDNN version: 8.0.61
  • GPU model and memory: NVIDIA Corporation Device 1b06
  • Exact command to reproduce:
bazel build tensorflow/python/tools:freeze_graph

Describe the problem

Here are the warning and error messages I got:

WARNING: tensorflow/tensorflow/core/BUILD:1801:1: in includes attribute of cc_library rule //tensorflow/core:framework_headers_lib: '../../external/nsync/public' resolves to 'external/nsync/public' not below the relative path of its package 'tensorflow/core'. This will be an error in the future. Since this rule was created by the macro 'cc_header_only_library', the error might have been caused by the macro implementation in tensorflow/tensorflow/tensorflow.bzl:1108:30
WARNING: tensorflow/tensorflow/contrib/learn/BUILD:15:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:exporter': No longer supported. Switch to SavedModel immediately.
WARNING: /home/simonlee/Work/tensorflow/tensorflow/contrib/learn/BUILD:15:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:gc': No longer supported. Switch to SavedModel immediately.
INFO: Analysed target //tensorflow/python/tools:freeze_graph (0 packages loaded).
INFO: Found 1 target...
ERROR: tensorflow/tensorflow/contrib/lite/toco/BUILD:158:1: C++ compilation of rule '//tensorflow/contrib/lite/toco:graph_transformations' failed (Exit 1)
In file included from external/gemmlowp/public/../internal/dispatch_gemm_shape.h:20:0,
                 from external/gemmlowp/public/gemmlowp.h:19,
                 from ./tensorflow/contrib/lite/kernels/internal/common.h:48,
                 from ./tensorflow/contrib/lite/toco/runtime/types.h:18,
                 from ./tensorflow/contrib/lite/toco/model.h:25,
                 from ./tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.h:23,
                 from tensorflow/contrib/lite/toco/graph_transformations/identify_l2_pool.cc:20:
external/gemmlowp/public/../internal/../internal/kernel_default.h:88:2: error: #error "SIMD not enabled, you'd be getting a slow software fallback. Consider enabling SIMD extensions (for example using -msse4 if you're on modern x86). If that's not an option, and you would like to continue with the slow fallback, define GEMMLOWP_ALLOW_SLOW_SCALAR_FALLBACK."
 #error \
  ^
Target //tensorflow/python/tools:freeze_graph failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 1.120s, Critical Path: 0.85s
FAILED: Build did NOT complete successfully

@ghost ghost changed the title from "Error when running bazel after pull down lite" to "Error when running bazel after pull down newest master branch" Nov 16, 2017

@ghost ghost closed this Nov 16, 2017

@ghost ghost reopened this Nov 16, 2017

@ghost ghost changed the title from "Error when running bazel after pull down newest master branch" to "Error when bazel building after pull down newest master branch" Nov 16, 2017

@aselle aselle added the comp:lite label Nov 16, 2017

@firewu commented Nov 27, 2017

I get the same error when building with Bazel. Do you know how to deal with it? Thanks!
ERROR: /home/deeplearn/.cache/bazel/_bazel_deeplearn/bf1e87bfe06d5731809039dca55f14ae/external/org_tensorflow/tensorflow/contrib/lite/toco/BUILD:174:1: C++ compilation of rule '@org_tensorflow//tensorflow/contrib/lite/toco:graph_transformations' failed (Exit 1).
In file included from external/gemmlowp/public/../internal/dispatch_gemm_shape.h:20:0,
from external/gemmlowp/public/gemmlowp.h:19,
from external/org_tensorflow/tensorflow/contrib/lite/kernels/internal/common.h:48,
from external/org_tensorflow/tensorflow/contrib/lite/toco/runtime/types.h:18,
from external/org_tensorflow/tensorflow/contrib/lite/toco/model.h:25,
from external/org_tensorflow/tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.h:23,
from external/org_tensorflow/tensorflow/contrib/lite/toco/graph_transformations/remove_tensorflow_assert.cc:19:
external/gemmlowp/public/../internal/../internal/kernel_default.h:88:2: error: #error "SIMD not enabled, you'd be getting a slow software fallback. Consider enabling SIMD extensions (for example using -msse4 if you're on modern x86). If that's not an option, and you would like to continue with the slow fallback, define GEMMLOWP_ALLOW_SLOW_SCALAR_FALLBACK."
#error
^
____Building complete.
____Elapsed time: 135.971s, Critical Path: 37.49s

@ghost (Author) commented Nov 28, 2017

Sorry, I didn't solve it. Does the error still occur on the newest branch?

@Interstella12 commented Nov 28, 2017

I got the same error.

@firewu commented Nov 28, 2017

I followed the "Optimized build" part of https://www.tensorflow.org/serving/setup and added the -msse4 options when building and testing (although mine is a modern x86_64 machine). It seems to work, but it may not be the best solution (perhaps GEMMLOWP_ALLOW_SLOW_SCALAR_FALLBACK should instead be defined for x86_64); see the sketch after the commands below.
1. > nohup.out && nohup bazel build -c opt --copt=-msse4.1 --copt=-msse4.2 tensorflow_serving/... &
2. bazel test -c opt --copt=-msse4.1 --copt=-msse4.2 tensorflow_serving/...
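(A minimal sketch of that alternative, untested here: the macro named in the #error can be passed as a preprocessor define through Bazel's --copt instead of being added to the source, at the cost of gemmlowp's slow scalar fallback.)

bazel build -c opt --copt=-DGEMMLOWP_ALLOW_SLOW_SCALAR_FALLBACK tensorflow_serving/...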

@offbye commented Nov 28, 2017

I tried the command below, and it passed!

bazel build -c opt --copt=-msse4.1 --copt=-msse4.2 tensorflow/python/tools:freeze_graph

@ghost (Author) commented Nov 29, 2017

Sorry for my late reply.
I tried the latest master branch without modifying anything, and all Bazel builds went very well.
Are you using the latest branch?

@aselle (Member) commented Nov 29, 2017

Are you building on an Intel machine, or on, say, a Jetson or another ARM host device?

@XiaoSX commented Dec 1, 2017

I get the same error when building with Bazel. I changed the Python environment from Python 2 to Python 3, and now it shows this error. Is there any help?

@ghost (Author) commented Dec 3, 2017

@aselle I am using an Intel machine, and I did nothing special when building with Bazel; everything is going well now.

@XiaoSX, did you try the command provided by @offbye? And what machine do you use?

@XiaoSX commented Dec 8, 2017

@Sixigma Hi, I am using an Intel machine too. I have tried the command above, but there is still something wrong. It looks like some generated runtime files contain the line "from StringIO import StringIO"; Python 3 has no such module, but the files cannot be edited.
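(A general Python note, not specific to these generated files: the Python 2 StringIO module no longer exists in Python 3, where the equivalent is from io import StringIO, so scripts generated with the old import only run under Python 2 unless whatever generates them is updated.)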

@visshvesh commented Dec 8, 2017

Getting the same error while building TensorFlow Serving on Docker. Any help, please? Thanks in advance.

ERROR: '@org_tensorflow//tensorflow/contrib/lite/toco:graph_transformations' failed (Exit 1).
In file included from external/gemmlowp/public/../internal/dispatch_gemm_shape.h:20:0,
from external/gemmlowp/public/gemmlowp.h:19,
from external/org_tensorflow/tensorflow/contrib/lite/kernels/internal/common.h:48,
from external/org_tensorflow/tensorflow/contrib/lite/toco/runtime/types.h:18,
from external/org_tensorflow/tensorflow/contrib/lite/toco/model.h:25,
from external/org_tensorflow/tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.h:23,
from external/org_tensorflow/tensorflow/contrib/lite/toco/graph_transformations/create_im2col_arrays.cc:21:
external/gemmlowp/public/../internal/../internal/kernel_default.h:88:2: error: #error "SIMD not enabled, you'd be getting a slow software fallback. Consider enabling SIMD extensions (for example using -msse4 if you're on modern x86). If that's not an option, and you would like to continue with the slow fallback, define GEMMLOWP_ALLOW_SLOW_SCALAR_FALLBACK."
#error
^
INFO: Elapsed time: 14.932s, Critical Path: 9.05s

@gauravkaila commented Dec 8, 2017

@visshvesh, you can try compiling it using the following command:

bazel build -c opt --copt=-msse4.1 --copt=-msse4.2 tensorflow_serving/...

I have also made Docker images compiled with tf-serving (CPU) and tf-serving (GPU). You can use them by pulling the images from Docker Hub:

CPU:
docker pull gauravkaila/tf_serving_cpu

GPU:
docker pull gauravkaila/tf_serving_gpu

@jda91 commented Dec 9, 2017

@gauravkaila Hey, I tried compiling with that, but I always get the same error when trying to test the running model; in fact, I get the error when trying to run anything in the container. I had to export the model myself by adding SavedModelBuilder code to the retrain script for Inception. I pulled your image and am again running into the same error:

root@42fd2c27af9d:/serving# bazel-bin/tensorflow_serving/example/inception_client --server=localhost:9000 --image=./Xiang_Xiang_panda.jpg
Traceback (most recent call last):
  File "/serving/bazel-bin/tensorflow_serving/example/inception_client.runfiles/tf_serving/tensorflow_serving/example/inception_client.py", line 56, in <module>
    tf.app.run()
  File "/serving/bazel-bin/tensorflow_serving/example/inception_client.runfiles/org_tensorflow/tensorflow/python/platform/app.py", line 129, in run
    _sys.exit(main(argv))
  File "/serving/bazel-bin/tensorflow_serving/example/inception_client.runfiles/tf_serving/tensorflow_serving/example/inception_client.py", line 50, in main
    tf.contrib.util.make_tensor_proto(data, shape=[1]))
  File "/serving/bazel-bin/tensorflow_serving/example/inception_client.runfiles/org_tensorflow/tensorflow/python/util/lazy_loader.py", line 53, in __getattr__
    module = self._load()
  File "/serving/bazel-bin/tensorflow_serving/example/inception_client.runfiles/org_tensorflow/tensorflow/python/util/lazy_loader.py", line 42, in _load
    module = importlib.import_module(self.__name__)
  File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/serving/bazel-bin/tensorflow_serving/example/inception_client.runfiles/org_tensorflow/tensorflow/contrib/__init__.py", line 81, in <module>
    from tensorflow.contrib.eager.python import tfe as eager
  File "/serving/bazel-bin/tensorflow_serving/example/inception_client.runfiles/org_tensorflow/tensorflow/contrib/eager/python/tfe.py", line 75, in <module>
    from tensorflow.contrib.eager.python.datasets import Iterator
  File "/serving/bazel-bin/tensorflow_serving/example/inception_client.runfiles/org_tensorflow/tensorflow/contrib/eager/python/datasets.py", line 23, in <module>
    from tensorflow.contrib.data.python.ops import prefetching_ops
  File "/serving/bazel-bin/tensorflow_serving/example/inception_client.runfiles/org_tensorflow/tensorflow/contrib/data/python/ops/prefetching_ops.py", line 25, in <module>
    resource_loader.get_path_to_datafile("../../_prefetching_ops.so"))
  File "/serving/bazel-bin/tensorflow_serving/example/inception_client.runfiles/org_tensorflow/tensorflow/contrib/util/loader.py", line 55, in load_op_library
    ret = load_library.load_op_library(path)
  File "/serving/bazel-bin/tensorflow_serving/example/inception_client.runfiles/org_tensorflow/tensorflow/python/framework/load_library.py", line 56, in load_op_library
    lib_handle = py_tf.TF_LoadLibrary(library_filename, status)
  File "/serving/bazel-bin/tensorflow_serving/example/inception_client.runfiles/org_tensorflow/tensorflow/python/framework/errors_impl.py", line 473, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: /serving/bazel-bin/tensorflow_serving/example/inception_client.runfiles/org_tensorflow/tensorflow/contrib/data/python/ops/../../_prefetching_ops.so: undefined symbol: _ZN6google8protobuf8internal26fixed_address_empty_stringB5cxx11E

When I check the server log, I can see that my model is running:


2017-12-09 01:24:00.397485: I tensorflow_serving/model_servers/main.cc:147] Building single TensorFlow model file config:  model_name: inception model_base_path: /tmp/new5
2017-12-09 01:24:00.397670: I tensorflow_serving/model_servers/server_core.cc:439] Adding/updating models.
2017-12-09 01:24:00.397696: I tensorflow_serving/model_servers/server_core.cc:490]  (Re-)adding model: inception
2017-12-09 01:24:00.498119: I tensorflow_serving/core/basic_manager.cc:705] Successfully reserved resources to load servable {name: inception version: 1}
2017-12-09 01:24:00.498154: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: inception version: 1}
2017-12-09 01:24:00.498169: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: inception version: 1}
2017-12-09 01:24:00.498189: I external/org_tensorflow/tensorflow/contrib/session_bundle/bundle_shim.cc:360] Attempting to load native SavedModelBundle in bundle-shim from: /tmp/new5/1
2017-12-09 01:24:00.498203: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:236] Loading SavedModel from: /tmp/new5/1
2017-12-09 01:24:00.623487: I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2 FMA
2017-12-09 01:24:00.743901: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:155] Restoring SavedModel bundle.
2017-12-09 01:24:00.798587: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:190] Running LegacyInitOp on SavedModel bundle.
2017-12-09 01:24:00.805405: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:284] Loading SavedModel: success. Took 307196 microseconds.
2017-12-09 01:24:00.805517: I tensorflow_serving/core/loader_harness.cc:86] Successfully loaded servable version {name: inception version: 1}
2017-12-09 01:24:00.810840: I tensorflow_serving/model_servers/main.cc:288] Running ModelServer at 0.0.0.0:9000 ...

@yokiqust commented Dec 10, 2017

@jda91 I have the same problem. Have you solved it?

@jda91 commented Dec 10, 2017

@yokiqust Nope :/ It's driving me crazy; I can't find any information about it online either.

@visshvesh commented Dec 14, 2017

@gauravkaila Thanks, but I'm still getting the same issue. I'll try pulling the image and using it. Thanks!

@shohkhan commented Dec 14, 2017

My error message contained the following line:
external/gemmlowp/public/../internal/../internal/kernel_default.h:88:2: error: #error "SIMD not enabled, you'd be getting a slow software fallback. Consider enabling SIMD extensions (for example using -msse4 if you're on modern x86). If that's not an option, and you would like to continue with the slow fallback, define GEMMLOWP_ALLOW_SLOW_SCALAR_FALLBACK."

Using the following options in the command worked for me:
--copt=-msse4.1 --copt=-msse4.2

@Howie-hxu commented Dec 15, 2017

I have met the same problem. I also tried
bazel build -c opt --copt=-msse4.1 --copt=-msse4.2 tensorflow/python/tools:freeze_graph
and it works fine.

@jda91 commented Dec 15, 2017

As I said, I tried that multiple times, on multiple computers and various operating systems, and I get the exact same error every time. I can't export or run any Python script; even if I natively install TensorFlow Serving via pip, the same error occurs. I tried in the Docker container and without it; same error every time.

@nlopezgi

This comment has been minimized.

Copy link
Contributor

commented Dec 18, 2017

I was having this same issue in a new container I created to build TensorFlow and run some tests. The error went away after I remembered to run the ./configure script at the root of the project. A minimal sketch of that sequence is below.
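(A sketch of that fix, assuming a fresh source checkout; ./configure prompts interactively for Python, CUDA, and related settings, and freeze_graph here is just the target from the original report.)

cd tensorflow
./configure
bazel build -c opt tensorflow/python/tools:freeze_graph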

@ghost (Author) commented Jan 2, 2018

Closing this issue due to lack of activity. Try the compile commands above first, and reopen this issue if needed.

@ghost ghost closed this Jan 2, 2018
