
Android: No OpKernel was registered to support Op 'ExtractImagePatches' #10153

Closed
CHLLHC opened this issue May 24, 2017 · 11 comments
Labels
type:support Support issues

Comments

@CHLLHC

CHLLHC commented May 24, 2017

Hi, I have read (#9763 =>) #9476, #5764, #8486, #6260, and #5921 (and #1269, but that one is too old to be helpful).

I was trying to run YOLOv2 with TensorFlow on Android. I followed the exact same procedure stated in the README that successfully got the tiny-yolo-voc model running on my phones; the only difference in the code is the filename. But I got the following error message and the app died.

FATAL EXCEPTION: inference
Process: org.tensorflow.demo, PID: 31154
java.lang.IllegalArgumentException: No OpKernel was registered to support Op 'ExtractImagePatches' with these attrs.  Registered devices: [CPU], Registered kernels: <no registered kernels>
                                                                     
[[Node: ExtractImagePatches = ExtractImagePatches[T=DT_FLOAT, ksizes=[1, 2, 2, 1], padding="VALID", rates=[1, 1, 1, 1], strides=[1, 2, 2, 1]](concat)]]
at org.tensorflow.Session.run(Native Method)
at org.tensorflow.Session.access$100(Session.java:48)
at org.tensorflow.Session$Runner.runHelper(Session.java:295)
at org.tensorflow.Session$Runner.run(Session.java:245)
at org.tensorflow.contrib.android.TensorFlowInferenceInterface.run(TensorFlowInferenceInterface.java:142)
at org.tensorflow.demo.TensorFlowYoloDetector.recognizeImage(TensorFlowYoloDetector.java:165)
at org.tensorflow.demo.DetectorActivity$3.run(DetectorActivity.java:313)
at android.os.Handler.handleCallback(Handler.java:755)
at android.os.Handler.dispatchMessage(Handler.java:95)
at android.os.Looper.loop(Looper.java:156)
at android.os.HandlerThread.run(HandlerThread.java:61)

I also tried YOLO9000, and it works (it actually died because I did not provide the matching label/name set, but it did not yield the same error).

  1. I tried optimize_for_inference (https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/amp/), but the new .pb file still has the "ExtractImagePatches" op, so it didn't work for me.
$ grep "ExtractImagePatches" *
Binary file graph-yolo-voc.pb matches
Binary file opt.pb matches
  2. I generated ops_to_register.h (shown below; a sketch of how such a header can be generated follows this list) and put it in the tensorflow/tensorflow/core/framework directory (per "Invalid argument: No OpKernel was registered to support Op 'Add' with these attrs." #8486), and it didn't work for me either.
#ifndef OPS_TO_REGISTER
#define OPS_TO_REGISTER
constexpr inline bool ShouldRegisterOp(const char op[]) {
  return false
     || (strcmp(op, "BiasAdd") == 0)
     || (strcmp(op, "ConcatV2") == 0)
     || (strcmp(op, "Const") == 0)
     || (strcmp(op, "Conv2D") == 0)
     || (strcmp(op, "ExtractImagePatches") == 0)
     || (strcmp(op, "Identity") == 0)
     || (strcmp(op, "MaxPool") == 0)
     || (strcmp(op, "Maximum") == 0)
     || (strcmp(op, "Mul") == 0)
     || (strcmp(op, "NoOp") == 0)
     || (strcmp(op, "Pad") == 0)
     || (strcmp(op, "Placeholder") == 0)
     || (strcmp(op, "RealDiv") == 0)
     || (strcmp(op, "Sub") == 0)
     || (strcmp(op, "_Recv") == 0)
     || (strcmp(op, "_Send") == 0)
  ;
}
#define SHOULD_REGISTER_OP(op) ShouldRegisterOp(op)


    namespace {
      constexpr const char* skip(const char* x) {
        return (*x) ? (*x == ' ' ? skip(x + 1) : x) : x;
      }

      constexpr bool isequal(const char* x, const char* y) {
        return (*skip(x) && *skip(y))
                   ? (*skip(x) == *skip(y) && isequal(skip(x) + 1, skip(y) + 1))
                   : (!*skip(x) && !*skip(y));
      }

      template<int N>
      struct find_in {
        static constexpr bool f(const char* x, const char* const y[N]) {
          return isequal(x, y[0]) || find_in<N - 1>::f(x, y + 1);
        }
      };

      template<>
      struct find_in<0> {
        static constexpr bool f(const char* x, const char* const y[]) {
          return false;
        }
      };
    }  // end namespace
    constexpr const char* kNecessaryOpKernelClasses[] = {
"BiasOp<CPUDevice, float>",
"ConcatV2Op<CPUDevice, float>",
"ConstantOp",
"Conv2DOp<CPUDevice, float>",
"ExtractImagePatchesOp<CPUDevice, float>",
"IdentityOp",
"MaxPoolingOp<CPUDevice, float>",
"BinaryOp<CPUDevice, functor::maximum<float>>",
"BinaryOp<CPUDevice, functor::mul<float>>",
"NoOp",
"PadOp<CPUDevice, float>",
"PlaceholderOp",
"BinaryOp<CPUDevice, functor::div<float>>",
"BinaryOp<CPUDevice, functor::sub<float>>",
"RecvOp",
"SendOp",
};
#define SHOULD_REGISTER_OP_KERNEL(clz) (find_in<sizeof(kNecessaryOpKernelClasses) / sizeof(*kNecessaryOpKernelClasses)>::f(clz, kNecessaryOpKernelClasses))

#define SHOULD_REGISTER_OP_GRADIENT false
#endif

  3. I added tensorflow/core/kernels/extract_image_patches_op.cc to tf_op_files.txt (per "No OpKernel for DepthwiseConv2dNative" #5764) and ran bazel like this: $ bazel build -c opt //tensorflow/examples/android:tensorflow_demo --copt="-DSELECTIVE_REGISTRATION" --define ANDROID_TYPES=__ANDROID_TYPES_FULL__ (per "iOS: No OpKernel was registered to support Op 'Less' with these attrs." #9476). Still, it didn't work for me.
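
For reference, a header like the ops_to_register.h shown above can be generated with TensorFlow's selective registration header tool, roughly as sketched below (the exact target name and flags may differ between TF versions):

# Build the header generator, then point it at the frozen graph.
# graph-yolo-voc.pb is the model file from above; substitute your own path.
bazel build //tensorflow/python/tools:print_selective_registration_header
bazel-bin/tensorflow/python/tools/print_selective_registration_header \
    --graphs=graph-yolo-voc.pb > ops_to_register.h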

I am using Ubuntu 16.04.
TensorFlow installed from: (I don't think I actually installed it, because tensorflow does not show up in pip list; I just followed https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android, and everything works for tiny-yolo-voc.)
Bazel release 0.4.5.
My phone is running Android 7.0.

Thanks,
CHL

@jart
Contributor

jart commented May 26, 2017

This question is better asked on StackOverflow since it is not a bug or feature request. There is also a larger community that reads questions there.

@jart jart closed this as completed May 26, 2017
@jart jart added the type:support Support issues label May 26, 2017
@andrewharp
Contributor

@CHLLHC @StarRain-L Both of these ops should be included via tensorflow/core:android_op_registrations_and_gradients, which includes array_ops.cc (ExtractImagePatches) and math_ops.cc (Maximum). Can you provide more details about how you are building your app? Does it work if you use the prebuilt binaries?

And just to note, selective registration should also not be necessary.
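
For reference, the prebuilt binaries are typically pulled into an app as a Gradle dependency, something like the line below (a sketch; the artifact coordinates and available versions are from memory and may have changed):

// prebuilt TensorFlow Android inference library (coordinates assumed; check the current docs)
compile 'org.tensorflow:tensorflow-android:+'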

@CHLLHC
Author

CHLLHC commented Jun 22, 2017

I just added "extract_image_patches_op.cc" to ./tensorflow/tensorflow/core/kernels/BUILD, inside the filegroup(name = "android_core_ops", ...) block.
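
Roughly, that change looks like the sketch below (the existing entries are abbreviated and will differ between checkouts):

filegroup(
    name = "android_core_ops",
    srcs = [
        # ... existing kernel sources ...
        "extract_image_patches_op.cc",  # added so the kernel is compiled into the Android build
        # ... existing kernel sources ...
    ],
)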

@MattMcEachern

MattMcEachern commented Oct 26, 2017

@CHLLHC have you found a solution to this issue? I've made similar attempts to get ExtractImagePatches running on Android, to no avail.

@CHLLHC
Author

CHLLHC commented Oct 26, 2017

Hi MME,

I did solve this issue and got it running on my phone.

@MattMcEachern

@CHLLHC could you please share how you got it to work?

@CHLLHC
Author

CHLLHC commented Oct 27, 2017

I tried numerous methods; I have listed all I remember above. But my confession is that I didn't undo all the changes from one method before trying another. Although I got it working after adding the line "extract_image_patches_op.cc" inside the filegroup(name = "android_core_ops", ...) block of the BUILD file under ./tensorflow/tensorflow/core/kernels/, I can't say for sure that --copt="-DSELECTIVE_REGISTRATION" --define ANDROID_TYPES=__ANDROID_TYPES_FULL__, the ops_to_register.h change, or the tf_op_files.txt change are not also necessary.

The only thing I am sure of is that the first method in my original post (optimize_for_inference) does not help. Also, I could only get my app running on a phone with 4 GB of RAM.

@matt-deboer

matt-deboer commented Nov 22, 2017

Using the prebuilt binaries results in the exact error mentioned above (No OpKernel was registered to support Op 'ExtractImagePatches' with these attrs).

However, I was able to get this working with only the following steps:

  1. Modify tensorflow/core/kernels/BUILD, adding extract_image_patches_op.cc and extract_image_patches_op.h to the android_core_ops target

  2. Clean and build with the standard bazel command:

bazel clean && \
    bazel build -c opt //tensorflow/contrib/android:libtensorflow_inference.so \
    --crosstool_top=//external:android/crosstool \
    --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
    --cpu=armeabi-v7a

Although I'm rather new to bazel, I have the suspicion that I wasted many iterations by not running a bazel clean in between attempts.

@GauthierChan

GauthierChan commented Nov 24, 2017

@matt-deboer Thank you for your nice instructions! Do you mind sharing the .jar you get from your bazel command? I built mine following your comment, but I get this exception when I run my app:

Caused by: java.lang.RuntimeException: Native TF methods not found; check that the correct native libraries are present in the APK.
    at org.tensorflow.contrib.android.TensorFlowInferenceInterface.prepareNativeRuntime(TensorFlowInferenceInterface.java:534)
    at org.tensorflow.contrib.android.TensorFlowInferenceInterface.<init>(TensorFlowInferenceInterface.java:60)

Also, do you do anything special with the .jar and .so that are created? I simply add them to 'libs' and 'libs/armeabi-v7a/' respectively, and add the .jar as a dependency in my build.gradle. Should I do something else?

@matt-deboer

@GauthierChan It looks like your custom .so is not being packaged into the apk.

Putting them in the libs dir (in the structure you've mentioned) should work just fine, but you'll want to add the .so file(s) as a dependency also, similar to this:

compile fileTree(dir: 'libs', include: ['*.jar', '**/*.so'])
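
If the .so is still not packaged, another common setup is to point jniLibs at the libs directory in build.gradle, roughly as sketched below (assuming the library sits at libs/armeabi-v7a/libtensorflow_inference.so):

android {
    sourceSets {
        main {
            // package native libraries found under libs/<abi>/ into the APK
            jniLibs.srcDirs = ['libs']
        }
    }
}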

@jliamfinnie

I had trouble with these instructions for quite a while; I was still getting the 'Native TF methods not found' errors. After a while, I realized that the problem was that my Android CPU was not ARM-based but rather x86_64. So, in my case, I used the awesome instructions from @matt-deboer but replaced '--cpu=armeabi-v7a' with '--cpu=x86_64', and that resolved things for me.
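
For completeness, that is the same build command as above with only the --cpu flag changed:

bazel clean && \
    bazel build -c opt //tensorflow/contrib/android:libtensorflow_inference.so \
    --crosstool_top=//external:android/crosstool \
    --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
    --cpu=x86_64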
