
Didn't find op for builtin opcode 'SUM' version '1'. #48431

Closed
javierFerreroM opened this issue Apr 9, 2021 · 6 comments
Assignees
Labels
comp:micro Related to TensorFlow Lite Microcontrollers stale This label marks the issue/pr stale - to be closed automatically if no activity stat:awaiting response Status - Awaiting response from author TF 2.3 Issues related to TF 2.3 type:bug Bug

Comments

@javierFerreroM

@tensorflow/micro

System information

  • Host OS platform and distribution: Linux Ubuntu 18.04
  • TensorFlow installed from: binary
  • TensorFlow version: 2.3
  • Target platform: ARM 64

Describe the problem
The steps followed are described below:

  1. Adapt the MobileNetV2 model. It is loaded for training with the Keras API; the original top layer is removed by setting include_top=False, and a customized final Conv2D layer is added with the Functional API. This is done in order to apply the convolutional sliding-window approach (a bigger input image than the images used for training).
  2. Training completed successfully. The model was converted from TF to TF Lite; it runs and produces a sensible matrix of results.
  3. The TFLite model was converted to a TensorFlow Lite Micro model (model.cc) using the xxd -i ... command provided in the documentation.
  4. When allocating the model, we encounter the following failure:

libraries ready
STM32 Tensorflow Lite test
Model working
Interpreter working
Didn't find op for builtin opcode 'SUM' version '1'. An older version of this builtin might be supported. Are you using an old TFLite binary with a newer model?

Failed to get registration from op code SUM

Failed starting model allocation.

Allocate working
Type input: 1
Bytes input: 1310720
Size input: 4
Dim input 0: 1
Dim input 1: 512
Dim input 2: 640
Dim input 3: 1

As seen in the message, the dims are correct, the input type is correct (float), and everything else seems to be working fine.

Prior to this issue, a similar fault appeared requesting the EXP op code. That one is already available in the TensorFlow Git repository and has now been integrated (exp.cc, exp_test.cc, exp.h, AddExp() included in the AllOps resolver file, and the makefile modified).

The same procedure was followed for the unmodified MobileNetV2 and it worked fine.

I have two main questions:

  • Why do we need extra operators if we only added one extra Conv2D layer?
  • Is SUM implemented, and should we expect further operators to be needed?

Please provide the exact sequence of commands/steps when you ran into the problem

Model configuration in python (Tensorflow + Keras API):
model = MobileNetV2(include_top=False, weights=None,
                    input_tensor=Input(shape=(512, 640, channels), dtype='float32'),
                    pooling=None, classes=len(class_labels))
last = Conv2D(filters=5, kernel_size=3, padding='valid', strides=(1, 1),
              activation='softmax',
              input_shape=model.layers[-1].output.shape)(model.layers[-1].output)
model = Model(model.input, last)

Model Conversion from Tensorflow to Tensorflow Lite:
converter = tf.lite.TFLiteConverter.from_keras_model(loaded_model)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
model_no_quant_tflite = converter.convert()
Model conversion from TensorFlow Lite to TensorFlow Lite Micro:
xxd -i model_mobilenet_sliding2.tflite > model.cc
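As a side note, the xxd -i step only serializes the flatbuffer bytes into a C array; nothing model-specific happens there. A minimal stand-in in standard C++ (the function and array names here are illustrative, not part of any TensorFlow API):

```cpp
#include <cstdio>    // snprintf
#include <cstddef>
#include <sstream>
#include <string>
#include <vector>

// Render a byte buffer as a C array definition, 12 bytes per line,
// mimicking the shape of the `xxd -i` output used for model.cc.
std::string dump_c_array(const std::vector<unsigned char>& bytes,
                         const std::string& name) {
  std::ostringstream out;
  out << "unsigned char " << name << "[] = {";
  for (std::size_t i = 0; i < bytes.size(); ++i) {
    out << (i % 12 == 0 ? "\n  " : " ");
    char hex[5];
    std::snprintf(hex, sizeof(hex), "0x%02x", bytes[i]);
    out << hex;
    if (i + 1 < bytes.size()) out << ",";
  }
  out << "\n};\nunsigned int " << name << "_len = "
      << bytes.size() << ";\n";
  return out.str();
}
```

Writing the returned string for the bytes of the .tflite file to model.cc reproduces the command above (xxd's exact line breaks differ slightly, which does not matter to the compiler).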

Model loading in TensorFlow Lite Micro (C++):

namespace {
tflite::ErrorReporter* error_reporter = nullptr;
const tflite::Model* model = nullptr;
// This pulls in all the operation implementations we need
tflite::AllOpsResolver resolver;
constexpr int kTensorArenaSize = 4 * 1024 * 1024;
uint8_t tensor_arena[kTensorArenaSize];
uint8_t* img = nullptr;
uint64_t timeImage;
uint64_t timeInvoke;
TfLiteTensor* model_input = nullptr;
TfLiteTensor* model_output = nullptr;
}  // namespace

// Set up logging (modify tensorflow/lite/micro/debug_log.cc)
static tflite::MicroErrorReporter micro_error_reporter;
error_reporter = &micro_error_reporter;

// Say something to test the error reporter
error_reporter->Report("STM32 Tensorflow Lite test");

model = ::tflite::GetModel(model_mobilenet_sliding2_tflite);
cout << "Model working" << endl;

tflite::MicroInterpreter interpreter(model, resolver, tensor_arena, kTensorArenaSize, &micro_error_reporter);

cout << "Interpreter working" << endl;

interpreter.AllocateTensors();

cout << "Allocate working" << endl;

model_input = interpreter.input(0);

// Get image from provider.
cout << "Type input: " << model_input->type << endl;
cout << "Bytes input: " << model_input->bytes << endl;
cout << "Size input: " << model_input->dims->size << endl;
cout << "Dim input 0: " << model_input->dims->data[0] << endl;
cout << "Dim input 1: " <<model_input->dims->data[1] << endl;
cout << "Dim input 2: " <<model_input->dims->data[2] << endl;
cout << "Dim input 3: " <<model_input->dims->data[3] << endl;
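One detail worth noting in the listing above: the TfLiteStatus returned by AllocateTensors() is discarded, which is why "Allocate working" is printed even though the log also says "Failed starting model allocation." A sketch of checking it, assuming the same variables as in the listing:

```cpp
// Sketch only: abort early when tensor allocation fails, instead of
// printing "Allocate working" unconditionally as in the log above.
TfLiteStatus allocate_status = interpreter.AllocateTensors();
if (allocate_status != kTfLiteOk) {
  error_reporter->Report("AllocateTensors() failed");
  return;  // input/output tensors are not valid after a failed allocation
}
cout << "Allocate working" << endl;
```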

@javierFerreroM javierFerreroM added the comp:micro Related to TensorFlow Lite Microcontrollers label Apr 9, 2021
@Saduf2019
Contributor

@javierFerreroM
Could you please try TF 2.4.1 or 2.5.0rc1 and let us know if you still face the issue?

@javierFerreroM
Author

Good morning @Saduf2019 ,

I just tested both versions, and both show the same issue. The project is generated from the hello_world example using the following command:

make -f tensorflow/lite/micro/tools/make/Makefile TARGET_ARCH=x86_64 generate_hello_world_make_project

I used the same model.cc file for the three TensorFlow Lite Micro versions (I am assuming the model is not the issue, but the missing kernels).

An interesting point I'd like to highlight: I have checked that in version 2.5.0rc1, exp.cc is already present in the kernels folder and already integrated into the makefile. The only missing part is the AddExp(); call, which I added by hand to all_ops_resolver.cc.

Thus, after integrating it, the missing-EXP error disappears, but we arrive at the same point with the missing SUM operand.

libraries ready
STM32 Tensorflow Lite test
Model working
Interpreter working
Didn't find op for builtin opcode 'SUM' version '1'. An older version of this builtin might be supported. Are you using an old TFLite binary with a newer model?

Failed to get registration from op code SUM

Failed starting model allocation.

Therefore, I guess the same procedure should be followed to include the SUM operand, and I am not sure whether new faults of the same kind may arise.

Thank you very much; I await your response.

@Saduf2019 Saduf2019 added TF 2.3 Issues related to TF 2.3 type:bug Bug labels Apr 19, 2021
@Saduf2019 Saduf2019 assigned Saduf2019 and ymodak and unassigned Saduf2019 Apr 19, 2021
@ymodak ymodak added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Apr 20, 2021
@ymodak ymodak removed their assignment Apr 20, 2021
@advaitjain advaitjain assigned petewarden and unassigned advaitjain Apr 22, 2021
@mohantym mohantym self-assigned this Nov 29, 2022
@mohantym
Contributor

mohantym commented Nov 29, 2022

Hi @javierFerreroM !
We are checking to see whether you still need help with this issue.
Earlier, the workaround was to add the required op via the OpResolver in the C++ code.
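That workaround can be sketched as follows. This is a sketch under assumptions: the op list below is a guess at what a MobileNetV2-plus-softmax model might need, and each Add…() method (including AddExp() and AddSum()) must be checked against the MicroMutableOpResolver in the tflite-micro version actually in use:

```cpp
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"

// Register only the ops the model actually uses, instead of AllOpsResolver.
// The template argument is the maximum number of registered ops.
tflite::MicroMutableOpResolver<8> resolver;
resolver.AddConv2D();
resolver.AddDepthwiseConv2D();
resolver.AddAdd();
resolver.AddMean();
resolver.AddSoftmax();
resolver.AddExp();  // only if present in your checkout
resolver.AddSum();  // the op this issue is about; same caveat

// Then pass `resolver` to the MicroInterpreter constructor exactly as in
// the code earlier in this issue.
```

Besides avoiding missing-op surprises at link time, this keeps the binary much smaller than AllOpsResolver, which matters on microcontroller targets.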

Attached similar issues for reference.

1, 2

Feel free to test the documentation from the tflite-micro repo and post on that repo for further assistance.

Thank you!

@mohantym mohantym added stat:awaiting response Status - Awaiting response from author and removed stat:awaiting tensorflower Status - Awaiting response from tensorflower labels Nov 29, 2022
@google-ml-butler

This issue has been automatically marked as stale because it has no recent activity. It will be closed if no further activity occurs. Thank you.

@google-ml-butler google-ml-butler bot added the stale This label marks the issue/pr stale - to be closed automatically if no activity label Dec 6, 2022
@google-ml-butler

Closing as stale. Please reopen if you'd like to work on this further.
