
TFLite got Operator not supported #133

Closed
ilous12 opened this issue Feb 7, 2019 · 53 comments
Labels
Feature Request New feature or request

Comments

@ilous12

ilous12 commented Feb 7, 2019

Hi, I tried to run DeepLabV3+.

I built DeepLab and got a frozen graph [frozen_inference_graph.pb.zip].

When I converted it, I got [deeplab_257_quantized.tflite.zip].

Finally, I ran it on Android (Samsung Note 8, which supports OpenCL + NEON) with the code below:

armnnTfLiteParser::ITfLiteParserPtr parser = armnnTfLiteParser::ITfLiteParser::Create();
armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile("test.tflite");

// Find the binding points for the input and output nodes
armnnTfLiteParser::BindingPointInfo inputBindingInfo = parser->GetNetworkInputBindingInfo(0, "ImageTensor");
armnnTfLiteParser::BindingPointInfo outputBindingInfo = parser->GetNetworkOutputBindingInfo(0, "SemanticPredictions");

armnn::IRuntime::CreationOptions options; // default options
armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);

// Optimize the parsed graph for the chosen backend
// ('device' is one of CpuRef / CpuAcc / GpuAcc, set elsewhere)
armnn::IOptimizedNetworkPtr optNet = Optimize(*network, {device}, runtime->GetDeviceSpec());

armnn::NetworkId networkIdentifier;
runtime->LoadNetwork(networkIdentifier, std::move(optNet));

// 1x257x257x3 RGB input, 257x257 label output
auto input = new float[1*257*257*3];
auto output = new float[257*257];

armnn::InputTensors inputTensor = MakeInputTensors(inputBindingInfo, &input[0]);
armnn::OutputTensors outputTensor = MakeOutputTensors(outputBindingInfo, &output[0]);

runtime->EnqueueWorkload(networkIdentifier, inputTensor, outputTensor);
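MakeInputTensors and MakeOutputTensors here are the usual small helpers from the ArmNN samples; roughly like this (a sketch, assuming BindingPointInfo is the pair of binding id and TensorInfo):

armnn::InputTensors MakeInputTensors(const armnnTfLiteParser::BindingPointInfo& input,
                                     const void* inputTensorData)
{
    // Wrap the caller's buffer; ArmNN does not take ownership of it.
    return { { input.first, armnn::ConstTensor(input.second, inputTensorData) } };
}

armnn::OutputTensors MakeOutputTensors(const armnnTfLiteParser::BindingPointInfo& output,
                                       void* outputTensorData)
{
    return { { output.first, armnn::Tensor(output.second, outputTensorData) } };
}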

and I got the error below. I understand these ops are unsupported. Is that the problem?

terminating with uncaught exception of type armnn::ParseException: Failed to parse operator #0 within subgraph #0 error: Operator not supported. subgraph:0 operator:0 opcode_index:7 opcode:53 / CAST at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #1 within subgraph #0 error: inputs.size() = 2 is not valid, not in {1}. at function ParseReshape [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:1068]
Failed to parse operator #2 within subgraph #0 error: Operator not supported. subgraph:0 operator:2 opcode_index:6 opcode:41 / SUB at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #3 within subgraph #0 error: Operator not supported. subgraph:0 operator:3 opcode_index:11 opcode:34 / PAD at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #4 within subgraph #0 error: Operator not supported. subgraph:0 operator:4 opcode_index:0 opcode:0 / ADD at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #5 within subgraph #0 error: Operator not supported. subgraph:0 operator:5 opcode_index:5 opcode:18 / MUL at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #6 within subgraph #0 error: Operator not supported. subgraph:0 operator:6 opcode_index:6 opcode:41 / SUB at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #7 within subgraph #0 error: inputs.size() = 2 is not valid, not in {1}. at function ParseReshape [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:1068]
Failed to parse operator #17 within subgraph #0 error: Operator not supported. subgraph:0 operator:17 opcode_index:0 opcode:0 / ADD at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #24 within subgraph #0 error: Operator not supported. subgraph:0 operator:24 opcode_index:0 opcode:0 / ADD at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #28 within subgraph #0 error: Operator not supported. subgraph:0 operator:28 opcode_index:0 opcode:0 / ADD at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #33 within subgraph #0 error: Operator not supported. subgraph:0 operator:33 opcode_index:9 opcode:38 / SPACE_TO_BATCH_ND at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #35 within subgraph #0 error: Operator not supported. subgraph:0 operator:35 opcode_index:10 opcode:37 / BATCH_TO_SPACE_ND at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #36 within subgraph #0 error: Operator not supported. subgraph:0 operator:36 opcode_index:5 opcode:18 / MUL at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #37 within subgraph #0 error: Operator not supported. subgraph:0 operator:37 opcode_index:0 opcode:0 / ADD at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #39 within subgraph #0 error: Operator not supported. subgraph:0 operator:39 opcode_index:0 opcode:0 / ADD at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #41 within subgraph #0 error: Operator not supported. subgraph:0 operator:41 opcode_index:9 opcode:38 / SPACE_TO_BATCH_ND at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #43 within subgraph #0 error: Operator not supported. subgraph:0 operator:43 opcode_index:10 opcode:37 / BATCH_TO_SPACE_ND at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #44 within subgraph #0 error: Operator not supported. subgraph:0 operator:44 opcode_index:5 opcode:18 / MUL at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #45 within subgraph #0 error: Operator not supported. subgraph:0 operator:45 opcode_index:0 opcode:0 / ADD at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #47 within subgraph #0 error: Operator not supported. subgraph:0 operator:47 opcode_index:0 opcode:0 / ADD at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #49 within subgraph #0 error: Operator not supported. subgraph:0 operator:49 opcode_index:9 opcode:38 / SPACE_TO_BATCH_ND at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #51 within subgraph #0 error: Operator not supported. subgraph:0 operator:51 opcode_index:10 opcode:37 / BATCH_TO_SPACE_ND at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #52 within subgraph #0 error: Operator not supported. subgraph:0 operator:52 opcode_index:5 opcode:18 / MUL at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #53 within subgraph #0 error: Operator not supported. subgraph:0 operator:53 opcode_index:0 opcode:0 / ADD at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #55 within subgraph #0 error: Operator not supported. subgraph:0 operator:55 opcode_index:0 opcode:0 / ADD at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #57 within subgraph #0 error: Operator not supported. subgraph:0 operator:57 opcode_index:9 opcode:38 / SPACE_TO_BATCH_ND at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #59 within subgraph #0 error: Operator not supported. subgraph:0 operator:59 opcode_index:10 opcode:37 / BATCH_TO_SPACE_ND at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #60 within subgraph #0 error: Operator not supported. subgraph:0 operator:60 opcode_index:5 opcode:18 / MUL at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #61 within subgraph #0 error: Operator not supported. subgraph:0 operator:61 opcode_index:0 opcode:0 / ADD at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #64 within subgraph #0 error: Operator not supported. subgraph:0 operator:64 opcode_index:9 opcode:38 / SPACE_TO_BATCH_ND at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #66 within subgraph #0 error: Operator not supported. subgraph:0 operator:66 opcode_index:10 opcode:37 / BATCH_TO_SPACE_ND at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #67 within subgraph #0 error: Operator not supported. subgraph:0 operator:67 opcode_index:5 opcode:18 / MUL at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #68 within subgraph #0 error: Operator not supported. subgraph:0 operator:68 opcode_index:0 opcode:0 / ADD at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #70 within subgraph #0 error: Operator not supported. subgraph:0 operator:70 opcode_index:0 opcode:0 / ADD at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #72 within subgraph #0 error: Operator not supported. subgraph:0 operator:72 opcode_index:9 opcode:38 / SPACE_TO_BATCH_ND at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #74 within subgraph #0 error: Operator not supported. subgraph:0 operator:74 opcode_index:10 opcode:37 / BATCH_TO_SPACE_ND at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #75 within subgraph #0 error: Operator not supported. subgraph:0 operator:75 opcode_index:5 opcode:18 / MUL at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #76 within subgraph #0 error: Operator not supported. subgraph:0 operator:76 opcode_index:0 opcode:0 / ADD at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #78 within subgraph #0 error: Operator not supported. subgraph:0 operator:78 opcode_index:0 opcode:0 / ADD at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #80 within subgraph #0 error: Operator not supported. subgraph:0 operator:80 opcode_index:9 opcode:38 / SPACE_TO_BATCH_ND at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #82 within subgraph #0 error: Operator not supported. subgraph:0 operator:82 opcode_index:10 opcode:37 / BATCH_TO_SPACE_ND at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #83 within subgraph #0 error: Operator not supported. subgraph:0 operator:83 opcode_index:5 opcode:18 / MUL at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #84 within subgraph #0 error: Operator not supported. subgraph:0 operator:84 opcode_index:0 opcode:0 / ADD at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #87 within subgraph #0 error: Operator not supported. subgraph:0 operator:87 opcode_index:9 opcode:38 / SPACE_TO_BATCH_ND at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #89 within subgraph #0 error: Operator not supported. subgraph:0 operator:89 opcode_index:10 opcode:37 / BATCH_TO_SPACE_ND at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #90 within subgraph #0 error: Operator not supported. subgraph:0 operator:90 opcode_index:5 opcode:18 / MUL at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #91 within subgraph #0 error: Operator not supported. subgraph:0 operator:91 opcode_index:0 opcode:0 / ADD at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #93 within subgraph #0 error: Operator not supported. subgraph:0 operator:93 opcode_index:0 opcode:0 / ADD at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #95 within subgraph #0 error: Operator not supported. subgraph:0 operator:95 opcode_index:9 opcode:38 / SPACE_TO_BATCH_ND at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #97 within subgraph #0 error: Operator not supported. subgraph:0 operator:97 opcode_index:10 opcode:37 / BATCH_TO_SPACE_ND at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #98 within subgraph #0 error: Operator not supported. subgraph:0 operator:98 opcode_index:5 opcode:18 / MUL at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #99 within subgraph #0 error: Operator not supported. subgraph:0 operator:99 opcode_index:0 opcode:0 / ADD at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #101 within subgraph #0 error: Operator not supported. subgraph:0 operator:101 opcode_index:0 opcode:0 / ADD at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #103 within subgraph #0 error: Operator not supported. subgraph:0 operator:103 opcode_index:9 opcode:38 / SPACE_TO_BATCH_ND at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #105 within subgraph #0 error: Operator not supported. subgraph:0 operator:105 opcode_index:10 opcode:37 / BATCH_TO_SPACE_ND at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #106 within subgraph #0 error: Operator not supported. subgraph:0 operator:106 opcode_index:5 opcode:18 / MUL at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #107 within subgraph #0 error: Operator not supported. subgraph:0 operator:107 opcode_index:0 opcode:0 / ADD at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #111 within subgraph #0 error: Operator not supported. subgraph:0 operator:111 opcode_index:8 opcode:23 / RESIZE_BILINEAR at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #116 within subgraph #0 error: Operator not supported. subgraph:0 operator:116 opcode_index:8 opcode:23 / RESIZE_BILINEAR at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #117 within subgraph #0 error: Operator not supported. subgraph:0 operator:117 opcode_index:8 opcode:23 / RESIZE_BILINEAR at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #118 within subgraph #0 error: Operator not supported. subgraph:0 operator:118 opcode_index:12 opcode:56 / ARG_MAX at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #119 within subgraph #0 error: Operator not supported. subgraph:0 operator:119 opcode_index:7 opcode:53 / CAST at function ParseUnsupportedOperator [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:624]
Failed to parse operator #120 within subgraph #0 error: inputs.size() = 2 is not valid, not in {1}. at function ParseReshape [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:1068]

@MatthewARM
Collaborator

Hi @ilous12, this is really good work, thank you. I think this is the first time someone has tried to run deeplabv3+ via TfLite on ArmNN, and actually the missing functionality is not that bad:

MUL, ADD, RESIZE_BILINEAR, BATCH_TO_SPACE_ND, SPACE_TO_BATCH_ND and SUB are all supported by ArmNN and just need to be added to the TfLite parser. @brunomorishita actually added some of these recently, and his code is available in the development branch at https://review.mlplatform.org/#/admin/projects/ml/armnn
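For context, the "Operator not supported" errors come from the parser's dispatch table: each TfLite builtin opcode maps to a parse function, and anything unregistered falls through to ParseUnsupportedOperator. A simplified sketch of the pattern (not the actual ArmNN source; ParseAdd/ParseSub stand in for the real handlers):

// In the TfLiteParser constructor: default every opcode to the
// "unsupported" handler, then register the ops the parser understands.
TfLiteParser::TfLiteParser()
: m_ParserFunctions(tflite::BuiltinOperator_MAX + 1, &TfLiteParser::ParseUnsupportedOperator)
{
    m_ParserFunctions[tflite::BuiltinOperator_ADD] = &TfLiteParser::ParseAdd;
    m_ParserFunctions[tflite::BuiltinOperator_SUB] = &TfLiteParser::ParseSub;
    // ...one entry per supported builtin; adding an op means adding a
    // handler here that creates and connects the matching ArmNN layer.
}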

MatthewARM added the Feature Request label Feb 7, 2019
@MatthewARM
Collaborator

ARG_MAX recently got added to Compute Library but hasn't been integrated into ArmNN yet.

CAST is a bit harder: it depends on what is actually happening in the model. We don't have this functionality in ArmNN yet, and it might not be present in Compute Library either.

Then we have the ParseReshape issue - I think this is a limitation just of the TfLite parser, as ArmNN can now handle all kinds of reshapes.

So some of this is "easy" to fix in ArmNN, some is harder.

deeplab v3 is a network that we are trying to support, but I can't confirm a timeline for it. Are you in a position to help add support?

Many thanks,
Matthew

@ilous12
Author

ilous12 commented Feb 7, 2019

@MatthewARM
I understand the current status.
Unfortunately, I am new to deep learning, but if you need a little help I will try.

@MatthewARM
Collaborator

Hi @ilous12, if you are willing to help, a really good first step would be to get the latest master Arm NN and Compute Library and try your test again; that will give us a good picture of the remaining work.

The links to the master branches can be found on our new developer website here: https://mlplatform.org/contributing/

@ilous12
Author

ilous12 commented Feb 7, 2019

ok. I will try tomorrow.

@ilous12
Author

ilous12 commented Feb 8, 2019

Hi @MatthewARM, I got a result; see below.

./TFLite 1
Optimisation mode: CpuAcc
terminating with uncaught exception of type armnn::ParseException: Buffer #89 has 0 bytes. For tensor: [1,33,33,256] expecting: 1115136 bytes and 278784 elements. at function CreateConstTensor [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:1808]
Aborted

134|alphaplus:/data/local/tmp $ ./TFLite 2
Optimisation mode: GpuAcc
terminating with uncaught exception of type armnn::ParseException: Buffer #89 has 0 bytes. For tensor: [1,33,33,256] expecting: 1115136 bytes and 278784 elements. at function CreateConstTensor [/Users/ilous12/armnn-devenv/armnn/src/armnnTfLiteParser/TfLiteParser.cpp:1808]
Aborted

My tflite file is attached:
test.tflite.zip

Input/output:

The input node is sub_7 and the output node is ResizeBilinear_3.

My code:
TFLite.cpp.zip

What is buffer #89? Is there a verbose mode?

@ilous12
Author

ilous12 commented Feb 12, 2019

Hi, @MatthewARM

do you have any update related to TensorFlow Lite?

@MatthewARM
Collaborator

MatthewARM commented Feb 14, 2019

Thanks @ilous12, we'll try those steps and see. Buffer #89 will be one of the intermediate values in the network, and for some reason our TensorFlow Lite parser can't handle it. We'll take a look.

@brunomorishita
Contributor

brunomorishita commented Feb 18, 2019

Hi @ilous12 ,

Recently I pushed some commits adding support for some of the operations in the DeepLab v3 tflite model.
It should now work with this model:
deeplabv3_257_mv_gpu

The operations I pushed have not been merged into the development branch yet, so you'll have to fetch my patches. They are available at:
https://review.mlplatform.org/#/q/status:open

Please let me know if this works for you.

@ilous12
Author

ilous12 commented Feb 18, 2019

thanks @brunomorishita

How can I get your patches? can you guide?

@brunomorishita
Contributor

brunomorishita commented Feb 18, 2019


git fetch https://review.mlplatform.org/ml/armnn refs/changes/04/704/1 && git checkout FETCH_HEAD

@ilous12
Author

ilous12 commented Feb 18, 2019

I downloaded your patches.
Can you share your example for DeepLab?
I want to compare your code with my test code.

@ilous12
Author

ilous12 commented Feb 18, 2019

Thanks, guys, you did it.
It finally works. I will implement a sample on Android and check the semantic labels.

@ilous12
Author

ilous12 commented Feb 20, 2019

Hi guys,
I tried to run DeepLab; see below.

Thanks @brunomorishita, I tried the next steps but I saw invalid semantic labels.
It works on TensorFlow Lite with deeplabv3_257_mv_gpu.tflite, but I think ArmNN has a problem.

@MatthewARM
Collaborator

Hi @ilous12, where you have "engineConfig_->device_ = armnn::Compute::GpuAcc;" it's probably worth a quick try with CpuRef to see if the output from our reference (non-accelerated) implementation is different. That will help track down the problem.
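That is, something like this (a sketch against your engineConfig_ naming):

// CpuRef is the reference backend: slow, but it bypasses Compute Library
// entirely, which separates graph/parser bugs from backend bugs.
engineConfig_->device_ = armnn::Compute::CpuRef;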

@ilous12
Author

ilous12 commented Feb 21, 2019

Unfortunately, it didn't work. Could you check that my code has no problem?
CpuRef : Invalid
CpuAcc : Invalid
GpuAcc : Invalid

@ilous12
Author

ilous12 commented Feb 21, 2019

I think my code looks fine, but I want to check, since most likely something is going wrong with handling the input or output buffers.

src.zip

@MatthewARM
Collaborator

MatthewARM commented Feb 21, 2019

At armnnwrapper.cc:143 you have memcpy(engineConfig_->input_, input, 1*257*257*21); which seems strange. Shouldn't that be 1*257*257*3*sizeof(float) to match the definition of ArmnnEngineConfig::input_ at line 49?

@ilous12
Author

ilous12 commented Feb 21, 2019

You were right. I fixed it and will try again.

@ilous12
Author

ilous12 commented Feb 21, 2019

We ran a test but it was not conclusive.
We will run both ArmNN and TensorFlow Lite and compare the results.
I expect the same result; is that right?

Our code: src.zip

<rgb input file: read, 1x257x257x3> -> <armnn> -> <1x257x257x21 label output array>

@MatthewARM
Collaborator

MatthewARM commented Feb 21, 2019

@ilous12 it might not be exactly the same if ArmNN rounds differently from TfLite, but it should be very close.

@MatthewARM
Collaborator

Hi @ilous12 I can't see anything else obviously wrong with the code so I don't know why you would get the wrong result.

The only thing I can suggest is that in the latest master branch (at https://review.mlplatform.org/#/admin/projects/ml/armnn) we have added a 'debug' flag to OptimizerOptions. Setting this flag will make ArmNN print the contents of all tensors to standard output, so that you can compare with similar debug output from Tensorflow Lite and figure out where it is going wrong inside the network.

Actually, if you haven't tried already, you should try with the latest master code anyway, just in case this is a bug that we've fixed without realising it.

Good luck,
Matthew

MatthewARM mentioned this issue Feb 26, 2019
@MatthewARM
Collaborator

@ilous12 this is a floating-point model, isn't it? Not quantised?

I'm just trying to figure out what could be going wrong.

@MatthewARM
Collaborator

Hi @oms1226 we should be getting something very close to Tensorflow Lite's answer - just small differences sometimes due to different arithmetic implementations.

As you are seeing something very different, it looks like there is perhaps a bug. If you can figure out which layer in the network is producing the wrong output, that would be very helpful.

Many thanks,
Matthew

@oms1226

oms1226 commented Mar 1, 2019

Thanks @MatthewARM.
I want to compare with similar debug output from TensorFlow Lite and figure out where it is going wrong inside the network.
So I changed m_Debug's default value from false to true in .../include/armnn/INetwork.hpp, like below:

OptimizerOptions()
    : m_ReduceFp32ToFp16(false)
    , m_Debug(true)
{}

How can I find the debug log?

@MatthewARM
Collaborator

Hi @oms1226 when you run with m_Debug set to true, it will print all tensor values on standard output, so you can capture them in a file for debugging.

By the way, changing the default value is overkill - it would be more usual to set it in your application with something like:

armnn::OptimizerOptions options;
options.m_Debug = true;
engineConfig_->optNet_ = Optimize(*(engineConfig_->network_), {engineConfig_->device_}, engineConfig_->runtime_->GetDeviceSpec(), options);

@oms1226

oms1226 commented Mar 1, 2019

Sorry to bother you again, @MatthewARM.
I can't find the tensor values on standard output.
I assume standard output means the adb log on an Android device.
Following your advice, I set m_Debug to true as below.
I wonder what's wrong?
Please also show me an example of printing all tensor values; I am turning to you for help.

engineConfig_->inputBindingInfo_ = engineConfig_->parser_->GetNetworkInputBindingInfo(0, "sub_7");
engineConfig_->outputBindingInfo_ = engineConfig_->parser_->GetNetworkOutputBindingInfo(0, "ResizeBilinear_3");
armnn::IRuntime::CreationOptions options; // default options
engineConfig_->options_ = options;
engineConfig_->runtime_ = armnn::IRuntime::Create(engineConfig_->options_);
engineConfig_->device_ = armnn::Compute::GpuAcc;
armnn::OptimizerOptions op_options;
op_options.m_Debug = true;
engineConfig_->optNet_ = Optimize(*(engineConfig_->network_), {engineConfig_->device_}, engineConfig_->runtime_->GetDeviceSpec(), op_options);
armnn::NetworkId networkIdentifier = 0;
engineConfig_->runtime_->LoadNetwork(networkIdentifier, std::move(engineConfig_->optNet_));
engineConfig_->networkIdentifier_ = networkIdentifier;    
engineConfig_->input_size_ = input_size;
engineConfig_->output_size_ = output_size;
engineConfig_->input_ = new float[input_size];
engineConfig_->output_ = new float[output_size];
engineConfig_->inputTensor_ = MakeInputTensors(engineConfig_->inputBindingInfo_, engineConfig_->input_);
engineConfig_->outputTensor_ = MakeOutputTensors(engineConfig_->outputBindingInfo_, engineConfig_->output_);

@ilous12
Author

ilous12 commented Mar 4, 2019

Hi, @MatthewARM
I uploaded the network output file. Can you figure out which layer in the network is producing the wrong output?

https://drive.google.com/open?id=1wJ4RGXJWllWXPj2vyawVDojApB9E3jKF

Checking the layer "sub_7", I saw that ArmNN rounds differently from TfLite, but the values are very close.

On TensorFlow Lite:
[ 0.9921875, 0.9921875, 0.9921875]

On ArmNN:
[0.992188, 0.992188, 0.992188]

Is that right?

@ilous12
Author

ilous12 commented Mar 11, 2019

Hi, @MatthewARM @brunomorishita
Please let us know the progress of the work.

@ilous12
Author

ilous12 commented Mar 19, 2019

Hi @MatthewARM
If you need help with this issue, contact us.

@KevinARM
Collaborator

Hello @ilous12,
I am a colleague of @MatthewARM and have picked up a ticket which looks related to this issue. Have you had success getting correct segmentation running the model on ArmNN?

@MatthewARM
Collaborator

@ilous12 if the debug flag isn't working on Android, can you at least print the numbers coming out of your network and compare them to TfLite? At the moment I can't figure out if what we're seeing is a bug in ArmNN or some sort of numerical precision issue.
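For example, something as simple as this on both sides (a sketch; DumpFloats is just a hypothetical helper):

#include <cstdio>

// Write one float per line so the ArmNN and TfLite runs can be diffed.
void DumpFloats(const char* path, const float* data, size_t count)
{
    FILE* f = std::fopen(path, "w");
    for (size_t i = 0; i < count; ++i)
    {
        std::fprintf(f, "%.9g\n", data[i]);
    }
    std::fclose(f);
}

// After EnqueueWorkload:
// DumpFloats("/data/local/tmp/armnn_out.txt", output, 1*257*257*21);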

@MatthewARM
Collaborator

By the way, the debug flag eventually causes the code in src/backends/reference/workloads/Debug.cpp to be called, which uses std::cout. If std::cout isn't working, maybe you could hack in something that works. I'm sorry, I don't really know much about what debug / printing features are available from an Android application.
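One common Android trick (a sketch, nothing ArmNN-specific) is to redirect stdout into logcat with a pipe and a reader thread, so output written via std::cout shows up in adb logcat:

#include <android/log.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int pfd[2];
static pthread_t logThread;

static void* LogReader(void*)
{
    char buf[1024];
    ssize_t n;
    // Forward everything written to stdout into logcat, line by line.
    while ((n = read(pfd[0], buf, sizeof(buf) - 1)) > 0)
    {
        if (buf[n - 1] == '\n') { --n; }
        buf[n] = '\0';
        __android_log_write(ANDROID_LOG_DEBUG, "ArmnnDebug", buf);
    }
    return nullptr;
}

int RedirectStdoutToLogcat()
{
    setvbuf(stdout, nullptr, _IOLBF, 0); // flush on each newline
    pipe(pfd);
    dup2(pfd[1], STDOUT_FILENO);
    return pthread_create(&logThread, nullptr, LogReader, nullptr);
}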

@ilous12
Author

ilous12 commented Mar 20, 2019

Hi @kevmay01 @MatthewARM
Unfortunately, TensorFlow Lite currently only supports printing the values of the input and output nodes. We are looking for a way; please let us know if there is a way to print all network outputs in TensorFlow Lite.

@ilous12
Author

ilous12 commented Mar 20, 2019

@MatthewARM
If we write an ArmNN app and a TensorFlow app with console output for DeepLab, can you check it?

@ilous12
Author

ilous12 commented Mar 21, 2019

Hi @MatthewARM @kevmay01,
We checked ArmNN and TfLite and compared the results. You can download them from the link below.
https://drive.google.com/open?id=1VXsSHYulFutt6uKorgWaQfX_lS43SgrV

  • I did not understand why the per-layer numbers from ArmNN and TfLite were different.
  • The outputs of most layers look different.
  • The input node "sub_7" was rounded differently from TfLite, but the values were very close.
  • TFLite has a MobilenetV2/expanded/conv/N/project/BatchNorm/FusedBatchNorm.

@KevinARM
Collaborator

Hello @ilous12
The problem appears to be that the official TensorFlow Lite model deeplabv3_257_mv_gpu.tflite includes convolution layers with dilation values of 2 and 4, which Arm NN does not support. The current Arm NN master falls back to using a dilation value of 1, and this would appear to be the cause of the drop in accuracy that you are seeing.

However, if you train and convert a model that only uses dilation values of 1, you should see the same results between Arm NN and TensorFlow Lite. I converted the frozen_inference_graph.pb which you attached here to a tflite file and was able to get the same results running on Arm NN and TfLite.

@ilous12
Author

ilous12 commented Mar 26, 2019

Thanks for the reply; I understood your comment. I will try to retrain, and after I check I will share the results.

@ilous12
Author

ilous12 commented Mar 27, 2019

Hi @kevmay01
What did you change to set the dilation values to 1? Did you use tflite_convert? Can you share it with me?

@KevinARM
Collaborator

I converted the file you attached to this ticket: frozen_inference_graph.pb.
I removed the preprocessing layers and some of the last layers.
This creates a file which I think is not what you are ultimately looking for, but I was able to use it to prove the output from TfLite and ArmNN were the same.

tflite_convert
--graph_def_file=/home/kevmay01/Downloads/frozen_inference_graph.pb
--output_file=/home/kevmay01/test_new.tflite
--input_shapes=1,129,129,3
--input_arrays=sub_7
--output_arrays=ResizeBilinear_2
--inference_type=FLOAT
--inference_input_type=FLOAT
--std_dev_values=128
--mean_value=128

I think you will need to train a new model and figure out how to convert and optimize it to make a model similar to the official deeplabv3_257_mv_gpu.tflite file, but only using default dilation values.

@KevinARM
Collaborator

@ilous12 I think the information you need regarding creating a model with dilation set to 1 is in this ticket: tensorflow/tensorflow#26474

There is also an example attached to that ticket of an identical deeplabv3_257_mv_gpu.tflite, but with dilation values set to 1.

@ilous12
Author

ilous12 commented Apr 2, 2019

Hi @kevmay01. Do you have a schedule for supporting convolution layers with dilation values of 2 and 4? Or not?

@ilous12
Author

ilous12 commented Apr 3, 2019

Hi @MatthewARM @kevmay01
I created a model with dilation values of 1 (by using output_stride=32) and I can see my shapes now. Nice work.

trained_deeplab_model.zip

But I still have some problems; see below.

Tensorflow Lite Result
tflite_result

Armnn Result
armnn_result

Composite image (tflite + armnn)
tflite_merge

I think the shape of the result is a little different; can you check?

@MatthewARM
Collaborator

Thanks @ilous12 I see what you mean about the output being different. I'll try to get someone to look at whether this indicates a bug but I'm not sure whether that will be this week or later.

Have you tried with the Arm NN CpuRef backend? That doesn't use Compute Library so it eliminates one possible source of errors.

@ilous12
Author

ilous12 commented Apr 4, 2019

Hi @MatthewARM, I got results; see below.

CpuRef
armnn_cpuref

Gpu
amrnn_gpu

We also have a crash when running CpuAcc on release 19.02; see below.

com.test.sample#00 pc 00000000004c184c /data/app/com.test.sample-IsGLQ5XgkmORLc4rtHSI7w==/lib/arm64/libarmnn.so (arm_compute::NEScaleKernel::scale_nhwc(arm_compute::Window const&)+792) <---- This point
com.test.sample#01 pc 000000000032fa28 /data/app/com.test.sample-IsGLQ5XgkmORLc4rtHSI7w==/lib/arm64/libarmnn.so
com.test.sample#02 pc 000000000032f4f8 /data/app/com.test.sample-IsGLQ5XgkmORLc4rtHSI7w==/lib/arm64/libarmnn.so (arm_compute::CPPScheduler::run_workloads(std::__ndk1::vector<std::__ndk1::function<void (arm_compute::ThreadInfo const&)>, std::__ndk1::allocatorstd::__ndk1::allocator>&)+220)
com.test.sample#03 pc 000000000032f7c8 /data/app/com.test.sample-IsGLQ5XgkmORLc4rtHSI7w==/lib/arm64/libarmnn.so (arm_compute::CPPScheduler::schedule(arm_compute::ICPPKernel*, arm_compute::IScheduler::Hints const&)+348)
com.test.sample#04 pc 000000000037a4f4 /data/app/com.test.sample-IsGLQ5XgkmORLc4rtHSI7w==/lib/arm64/libarmnn.so (arm_compute::NEScale::run()+84)
com.test.sample#05 pc 00000000002b9d00 /data/app/com.test.sample-IsGLQ5XgkmORLc4rtHSI7w==/lib/arm64/libarmnn.so (armnn::NeonResizeBilinearWorkload::Execute() const+304)
com.test.sample#06 pc 00000000002389f8 /data/app/com.test.sample-IsGLQ5XgkmORLc4rtHSI7w==/lib/arm64/libarmnn.so (armnn::LoadedNetwork::Execute()+140)
com.test.sample#07 pc 0000000000237a70 /data/app/com.test.sample-IsGLQ5XgkmORLc4rtHSI7w==/lib/arm64/libarmnn.so (armnn::LoadedNetwork::EnqueueWorkload(std::__ndk1::vector<std::__ndk1::pair<int, armnn::ConstTensor>, std::__ndk1::allocator<std::__ndk1::pair<int, armnn::ConstTensor>>> const&, std::__ndk1::vector<std::__ndk1::pair<int, armnn::Tensor>, std::__ndk1::allocator<std::__ndk1::pair<int, armnn::Tensor>>> const&)+2380)
com.test.sample#08 pc 0000000000258d1c /data/app/com.test.sample-IsGLQ5XgkmORLc4rtHSI7w==/lib/arm64/libarmnn.so (armnn::Runtime::EnqueueWorkload(int, std::__ndk1::vector<std::__ndk1::pair<int, armnn::ConstTensor>, std::__ndk1::allocator<std::__ndk1::pair<int, armnn::ConstTensor>>> const&, std::__ndk1::vector<std::__ndk1::pair<int, armnn::Tensor>, std::__ndk1::allocator<std::__ndk1::pair<int, armnn::Tensor>>> const&)+408)
com.test.sample#09 pc 0000000000003c00 /data/app/com.test.sample-IsGLQ5XgkmORLc4rtHSI7w==/lib/arm64/libarmnn_mobile_jni.so (armnn::ArmnnEngineContext::inference(float*)+64)
com.test.sample#10 pc 0000000000002bc8 /data/app/com.test.sample-IsGLQ5XgkmORLc4rtHSI7w==/lib/arm64/libarmnn_mobile_jni.so (Java_com_skt_tnn_ArmnnNativeWrapper_inference+484)

@ilous12
Author

ilous12 commented Apr 8, 2019

Hi @MatthewARM

We measured elapsed times and noticed one slow API call; see below.

parser_  = armnnTfLiteParser::ITfLiteParser::Create();  0ms
parser_->CreateNetworkFromBinaryFile(); 73ms
parser_->GetNetworkInputBindingInfo(); 0ms
optNet_ = Optimize(); 28ms
runtime_->LoadNetwork(); 2270ms <------ so slow

Can you check this problem?

@ilous12
Author

ilous12 commented Apr 8, 2019

Hi @MatthewARM

When I use "optimizerOptions.m_Debug = true" with GpuAcc on 19.02, I got a crash.

armnn/src/armnn/LoadedNetwork.cpp:192: const armnn::IWorkloadFactory &armnn::LoadedNetwork::GetWorkloadFactory(const armnn::Layer &) const: assertion "(IWorkloadFactory::IsLayerSupported(layer, {}, reasonIfUnsupported))&&("Factory does not support layer")" failed
Aborted (core dumped)

Thanks.

@ilous12
Author

ilous12 commented Apr 9, 2019

Hi @MatthewARM
we found a wrong result from the op "ResizeBilinear_2" in ArmNN when upscaling

input shape   node              output shape    status
1x9x9x320     AvgPool2D         1x1x1x320       same
1x1x1x320     ResizeBilinear    1x9x9x256       same
1x9x9x21      ResizeBilinear_1  1x9x9x21        same
1x9x9x21      ResizeBilinear_2  1x257x257x21    wrong

Can you check the upscale path of the ResizeBilinear op?

We attached output files.
resizeBilinear.zip
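For reference, TensorFlow's ResizeBilinear maps output coordinates to input coordinates differently depending on align_corners, and a mismatch there is a classic way for a large upscale like 9 -> 257 to come out visibly wrong. A standalone sketch of the two mappings (nothing ArmNN-specific):

#include <cstdio>

// Source coordinate for a destination coordinate under the two
// TensorFlow ResizeBilinear conventions.
float SourceCoord(int dst, int inSize, int outSize, bool alignCorners)
{
    if (alignCorners && outSize > 1)
    {
        return dst * static_cast<float>(inSize - 1) / (outSize - 1);
    }
    return dst * static_cast<float>(inSize) / outSize;
}

int main()
{
    // Compare the mappings for a 9 -> 257 upscale like ResizeBilinear_2.
    for (int dst : {0, 128, 256})
    {
        std::printf("dst=%3d  align_corners=%8.4f  default=%8.4f\n", dst,
                    SourceCoord(dst, 9, 257, true),
                    SourceCoord(dst, 9, 257, false));
    }
    return 0;
}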

@MatthewARM
Collaborator

Hi @ilous12, sorry for the long delay in replying. The only question I know how to answer is this one:

We measured elapsed times and I noticed delayed API. You will see belows.

parser_  = armnnTfLiteParser::ITfLiteParser::Create();  0ms
parser_->CreateNetworkFromBinaryFile(); 73ms
parser_->GetNetworkInputBindingInfo(); 0ms
optNet_ = Optimize(); 28ms
runtime_->LoadNetwork(); 2270ms <------ so slow

Can you check this problem?

I expect that the 2.3 seconds here is spent compiling OpenCL kernels for the GpuAcc backend. We're looking into various ways to cache those kernels so that it doesn't have to be done every time you run your program, but it's a hard problem.

I'm sorry but I don't yet have an answer for your other problems.

@MatthewARM
Collaborator

Hi @ilous12 the support for dilation in DepthwiseConvolution has been merged to master so hopefully your original model will now work!

On the 'upscale' issue, do you happen to know which resize method is in use? Should be one of:
BILINEAR = 0
NEAREST_NEIGHBOR = 1
BICUBIC = 2
AREA = 3

Many thanks,
Matthew

@MatthewARM
Collaborator

Many thanks to @brunomorishita for contributing the dilation support

@MatthewARM
Collaborator

Hi @ilous12 we have just fixed the crash with NEScale in this patch https://review.mlplatform.org/#/c/ml/ComputeLibrary/+/1141/ which is now on the master branch.

@ilous12
Author

ilous12 commented May 17, 2019

Thanks, guys. We will check soon.

ilous12 closed this as completed Aug 26, 2019