[tflite][android]deeplab-v3+ runtime error on Pad Ops #21266

Closed
kismeter opened this issue Jul 31, 2018 · 7 comments
Labels: comp:lite (TF Lite related issues) · stat:awaiting tensorflower (Status - Awaiting response from tensorflower) · type:feature (Feature requests)

Comments

kismeter commented Jul 31, 2018

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 10.13.6
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: BlackBerry KEY2
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (use command below): master branch, last commit 78f5862
  • Python version: 3.6.3
  • Bazel version (if compiling from source): 0.15.2
  • GCC/Compiler version (if compiling from source): NDK r17 toolchain
  • CUDA/cuDNN version: N/A
  • GPU model and memory: N/A
  • Exact command to reproduce:

Describe the problem

I downloaded the pre-trained model with the MobileNet-v2 backbone from mobilenetv2_coco_voc_trainaug, converted the model to tflite, and loaded it into an Android application. On prepare, I see an internal error at the PAD op.

Source code / logs

The command below converts the model without any errors:

bazel run //tensorflow/contrib/lite/toco:toco -- \
  --input_file=/tmp/frozen_inference_graph.pb \
  --output_file=/tmp/optimized_graph.tflite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --input_type=QUANTIZED_UINT8 \
  --input_arrays=ImageTensor \
  --output_arrays=SemanticPredictions \
  --input_shapes=1,513,513,3

The command below builds tensorflow-lite.aar:

bazel build --cxxopt='--std=c++11' -c opt        \
  --fat_apk_cpu=x86,x86_64,arm64-v8a,armeabi-v7a   \
  //tensorflow/contrib/lite/java:tensorflow-lite

Then I load optimized_graph.tflite and tensorflow-lite.aar into the Android application project:

  private static final int DIM_PIXEL_SIZE = 3;
  static final int DIM_IMG_SIZE_X = 513;
  static final int DIM_IMG_SIZE_Y = 513;

  tflite = new Interpreter(loadModelFile(activity));
  imgData =
      ByteBuffer.allocateDirect(
          DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);
  imgData.order(ByteOrder.nativeOrder());
  outputs = new int[DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y];

  /** Memory-map the model file in Assets. */
  private MappedByteBuffer loadModelFile(Activity activity) throws IOException {
    AssetFileDescriptor fileDescriptor = activity.getAssets().openFd(MODEL_PATH);
    FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
    FileChannel fileChannel = inputStream.getChannel();
    long startOffset = fileDescriptor.getStartOffset();
    long declaredLength = fileDescriptor.getDeclaredLength();
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
  }
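
One detail worth noting in the snippet above: the buffer size assumes one byte per channel, which matches the QUANTIZED_UINT8 input type passed to toco. A float input would instead need four bytes per value; a minimal sketch of both cases:

    // QUANTIZED_UINT8 input: 1 byte per channel.
    ByteBuffer imgData = ByteBuffer.allocateDirect(
        DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);
    // A FLOAT32 input would instead need 4 bytes per value:
    // ByteBuffer.allocateDirect(4 * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);
    imgData.order(ByteOrder.nativeOrder());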

Run the interpreter:

tflite.run(imgData, outputs);
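
Note that once PAD prepares successfully, the Java API also checks the output container's dimensions against the output tensor, so a flat int[] may trip that check. A minimal sketch, assuming SemanticPredictions comes out as a [1, 513, 513] int32 tensor (both the shape and the element type are assumptions to verify against the converted model):

    // Output container sized to an assumed [1, 513, 513] int32 output;
    // verify the real shape and element type against the model first.
    int[][][] segmentation = new int[1][DIM_IMG_SIZE_Y][DIM_IMG_SIZE_X];
    tflite.run(imgData, segmentation);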

Error Logs:

07-31 16:20:36.144 25819-25974/android.example.com.tflitecamerademo E/AndroidRuntime: FATAL EXCEPTION: CameraBackground
    Process: android.example.com.tflitecamerademo, PID: 25819
    java.lang.IllegalArgumentException: Internal error: Failed to run on the given Interpreter: tensorflow/contrib/lite/kernels/pad.cc:96 op_context.dims != 4 (3 != 4)Node number 24 (PAD) failed to prepare.
    
        at org.tensorflow.lite.NativeInterpreterWrapper.run(Native Method)
        at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:130)
        at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:168)
        at org.tensorflow.lite.Interpreter.run(Interpreter.java:145)
@andrehentz (Contributor)
We will investigate. Meanwhile, could you make sure your input tensors are 4D? Are you calling resizeInput()?
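
For reference, a minimal sketch of what that call could look like on the Java Interpreter, assuming ImageTensor is input index 0 and the intended 4-D shape matches the --input_shapes flag above (both are assumptions to verify against the model):

    // Force a 4-D input shape before the first run so shape propagation
    // gives downstream ops (including PAD) 4-D operands. Input index 0
    // is assumed to be ImageTensor; check against the converted model.
    tflite.resizeInput(0, new int[] {1, DIM_IMG_SIZE_Y, DIM_IMG_SIZE_X, DIM_PIXEL_SIZE});
    tflite.run(imgData, outputs);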

@andrehentz andrehentz added the comp:lite TF Lite related issues label Jul 31, 2018
kismeter (Author) commented Aug 1, 2018

@andrehentz I used the pre-trained model from the model zoo https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md, and I think PAD is getting a 3-D tensor with shape [height, width, channels].
I noticed in pad.cc there's a TODO:

  // TODO(nupurgarg): Our current implementations rely on the inputs being 4D.
  TF_LITE_ENSURE_EQ(context, op_context.dims, 4);

It seems PAD does not support 3-D tensors right now.

@andrehentz andrehentz added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Aug 1, 2018
@andrehentz (Contributor)
Thanks @kismeter. We are tracking this and will support 3-D (and other ranks) soon.

@andrehentz andrehentz assigned gargn and unassigned achowdhery Aug 3, 2018
@andrehentz andrehentz added the type:feature Feature requests label Aug 3, 2018
@tensorflowbutler (Member)
Nagging Assignee @gargn: It has been 29 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.

gashmish commented Sep 7, 2018

Hello! I see there are still some ops (SPACE_TO_BATCH_ND, BATCH_TO_SPACE_ND) that support only 4-D tensors:

tensorflow/contrib/lite/kernels/space_to_batch_nd.cc:96 NumDimensions(op_context.input) != kInputDimensionNum (3 != 4)

3-D support for these ops is badly needed! Are you planning to implement it in the near future? Should I create a feature request?

gargn commented Sep 7, 2018

Just for clarification: do you need both BatchToSpace non-4D and SpaceToBatch non-4D, or just BatchToSpace non-4D?

BatchToSpace non-4D hasn't been prioritized yet; contributions are welcome. We are also tracking these operation requests in #21526 to help with prioritizing. For that reason, moving forward we are closing individual issues for operation requests. Feel free to file an issue if you have more info on the specific model you are trying to convert. Thanks!

gashmish commented Sep 7, 2018

In the model I'm working on, both BatchToSpace and SpaceToBatch are required. You can find some information about the TOCO converter in issue #22146.
