
Build an android tensorflow binary with support DT_BOOL #3

Open
DumDumin opened this issue Jul 6, 2017 · 10 comments

DumDumin commented Jul 6, 2017

Hey guys,

I get the following error when running a network on android:

Unhandled Exception: TensorFlow.TFException: No OpKernel was registered to support Op 'Switch' with these attrs. Registered devices: [CPU], Registered kernels:
  device='GPU'; T in [DT_STRING]
  device='GPU'; T in [DT_BOOL]
  device='GPU'; T in [DT_INT32]
  device='GPU'; T in [DT_FLOAT]
  device='CPU'; T in [DT_FLOAT]
  device='CPU'; T in [DT_INT32]

[[Node: model/comb1/comb1/cond/Switch = Switch[T=DT_BOOL](ph/training_ph, ph/training_ph)]] occurred

It looks like the type bool is not supported in the CPU version of tensorflow.
I found this fix: tensorflow/models#1740
but wasn't able to make it work....
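For context: a Switch op with T=DT_BOOL shows up whenever the graph branches on a bool tensor, which is what BatchNorm-style tf.cond wrappers do. A minimal TF 1.x Python sketch that produces exactly this kind of node (the names here are just illustrative):

import tensorflow as tf  # TF 1.x API assumed

# A scalar bool placeholder like ph/training_ph in the error above.
training_ph = tf.placeholder(tf.bool, shape=[], name="training_ph")

# tf.cond lowers to Switch/Merge nodes; the Switch that routes the
# predicate itself gets the attribute T=DT_BOOL.
out = tf.cond(training_ph,
              lambda: tf.constant(1.0),
              lambda: tf.constant(0.0))

# List the generated Switch ops: a node named ".../cond/Switch" appears.
for op in tf.get_default_graph().get_operations():
    if op.type == "Switch":
        print(op.name, op.get_attr("T"))

If the binary has no CPU kernel registered for Switch with DT_BOOL, running any graph like this fails with the error above.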

@gmlwns2000 (Owner)

Did you build your own native library?


DumDumin commented Jul 7, 2017

No, not yet... I tried to use the GPU version provided by miguel, but something else is going wrong there..

Can anyone provide a prebuilt native library with bool support? I suppose I'm not the only one having trouble with this, and it would spare me a lot of time getting started with building the C lib myself.

Thanks for any help :)

@gmlwns2000 (Owner)

I built a bool-supported native lib. Google Drive
It worked fine with my own model (which includes BatchNorm layers that have Switch ops).

You can also find GPU-supported binaries in GitIgnoredDatas.zip; it is the zip file that I uploaded.
Just include it into TensorFlowSharp.Windows instead of the CPU version of libtensorflow.dll.
The following files should be included to use the GPU on Windows:

  • /GitIgnoredDatas/windows/gpu/libtensorflow.dll
  • /GitIgnoredDatas/windows/libs/*.dll

If you want to get the dll yourself, copy _pywrap_tensorflow_internal.pyd from the python package and rename the .pyd to libtensorflow.dll. You also need its dependency dlls.
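A minimal Python sketch of that copy step, assuming a TF 1.x pip install with the usual package layout (and note the sanity check at the end only loads if the dependency dlls are on your PATH, as said above):

import ctypes
import os
import shutil

import tensorflow as tf

# In the TF 1.x pip package the monolithic native library ships as
# tensorflow/python/_pywrap_tensorflow_internal.pyd (a renamed dll).
pkg_dir = os.path.dirname(tf.__file__)
src = os.path.join(pkg_dir, "python", "_pywrap_tensorflow_internal.pyd")

# Rename the copy to what TensorFlowSharp expects.
shutil.copy(src, "libtensorflow.dll")

# Sanity check: the C API symbols are exported, so TF_Version is callable.
lib = ctypes.CDLL(os.path.abspath("libtensorflow.dll"))
lib.TF_Version.restype = ctypes.c_char_p
print(lib.TF_Version())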


DumDumin commented Jul 7, 2017

Thanks man! Your newly built native library is working for me :)

@gmlwns2000 (Owner)

Great!


hgffly commented Aug 2, 2017

Hi @gmlwns2000, I have tried your built libraries and they are working for me. I'm also pretty interested in how to build libraries like that. Could you share how you made them? Thanks

@gmlwns2000 (Owner)

Hi @hgffly,
Here is how to build the tensorflow C lib for android. I hope this helps you.

  1. Make tensorflow for android support DT_BOOL.
    Edit the empty macro definition #define TF_CALL_bool(m) to #define TF_CALL_bool(m) m(bool) (in TF 1.x this lives in tensorflow/core/framework/register_types.h).
    Source: stackoverflow
  2. Change the build configuration so that the TF_* functions are exported from libtensorflow_inference.so.
    The linker version script (tensorflow/contrib/android/jni/version_script.lds) defines which functions are exported. Add a TF_*; line to the global section:
VERS_1.0 {
  global:
    Java_*;
    JNI_OnLoad;
    JNI_OnUnload;
    TF_*;
  local:
    *;
};
  3. Build libtensorflow_inference.so. You have to build it several times to support all platforms (arm64-v8a, armeabi-v7a, x86, x86_64).
    Run this command from the root of the tensorflow dir:
bazel build -c opt //tensorflow/contrib/android:libtensorflow_inference.so \
   --crosstool_top=//external:android/crosstool \
   --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
   --cpu=armeabi-v7a
  • To change the target platform, just change --cpu=armeabi-v7a.
  • If you run out of memory while building, add --verbose_failures --local_resources 4096,4.0,1.0 -j 1
  4. You can now find libtensorflow_inference.so in bazel-bin/tensorflow/contrib/android. Copy it somewhere else, go back to step 3, change the cpu target, and build again until every platform is done (see the sketch after this list for a way to automate the loop).
  5. After building the binary for every platform, rename each libtensorflow_inference.so to libtensorflow.so and copy the files into the right place
    (ex: TensorflowSharp/GitIgnoredData/android/armeabi-v7a/)
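For convenience, here is a minimal Python sketch that automates the build-and-copy loop of steps 3-5. The ABI list and bazel flags are the ones from step 3; the destination folder is illustrative, so adjust it to your checkout:

import os
import shutil
import subprocess

# Target platforms from step 3; one bazel build per ABI.
ABIS = ["arm64-v8a", "armeabi-v7a", "x86", "x86_64"]

for abi in ABIS:
    # Same bazel invocation as in step 3, with --cpu swapped per ABI.
    subprocess.run([
        "bazel", "build", "-c", "opt",
        "//tensorflow/contrib/android:libtensorflow_inference.so",
        "--crosstool_top=//external:android/crosstool",
        "--host_crosstool_top=@bazel_tools//tools/cpp:toolchain",
        "--cpu=" + abi,
    ], check=True)

    # Step 5: rename to libtensorflow.so and place it in a per-ABI folder
    # (this destination path is an example, not a fixed convention).
    dest_dir = os.path.join("GitIgnoredDatas", "android", abi)
    os.makedirs(dest_dir, exist_ok=True)
    shutil.copy(
        "bazel-bin/tensorflow/contrib/android/libtensorflow_inference.so",
        os.path.join(dest_dir, "libtensorflow.so"))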

My solution is not official and not clean either; if you guys have a better way of building a native library for android, please share it :)
Thanks

@gmlwns2000 gmlwns2000 reopened this Aug 2, 2017
@gmlwns2000 gmlwns2000 changed the title No Bool support on cpu on android Build an android tensorflow binary with support DT_BOOL Aug 2, 2017

hgffly commented Aug 3, 2017 via email

@gmlwns2000 (Owner)

Hi @hgffly,

I used tensorflow 1.2.0-r0 to build the library.
I'm sorry, but I'm not good at bazel builds, so I can't help you with that :(


hgffly commented Aug 3, 2017 via email
