Didn't find op for builtin opcode 'SUM' version '1'. #48431
Comments
@javierFerreroM
Good morning @Saduf2019, I just tested both versions and we encounter the same issue in both. The project is generated from the hello_world example using the following command:
I used the same model.cc file for all three TF Tiny versions (I am assuming that the model is not the issue, but the missing kernels). An interesting point I'd like to highlight: I have checked that in version 2.5rc1, exp.cc is already present in the kernels folder and is already integrated in the makefile. The only missing part was the AddExp() call, which I added by hand to All_Ops_resolver.cc. Once integrated, the error about the missing EXP operand disappears, but we arrive at the same point with the missing SUM operand.
Therefore, I guess the same procedure should be followed to include the SUM operand, and I am not sure whether new faults of the same kind may arise. Thank you very much, I am awaiting your response.
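As a quick way to see which ops a resolver source file already registers (and which are still missing, such as SUM here), a small script can scan for zero-argument `Add<Op>()` calls. The file contents below are an illustrative, trimmed-down stand-in, not the real all_ops_resolver.cc:

```python
import re

def registered_ops(resolver_source: str) -> set:
    """Return builtin op names registered via Add<Op>() calls in a resolver source."""
    return set(re.findall(r"\bAdd([A-Z]\w*)\s*\(\s*\)", resolver_source))

# Illustrative resolver body (not the actual file contents).
source = """
AllOpsResolver::AllOpsResolver() {
  AddConv2D();
  AddDepthwiseConv2D();
  AddExp();
  AddSoftmax();
}
"""
missing = {"Exp", "Sum"} - registered_ops(source)
print(missing)  # → {'Sum'}
```

This only catches parameterless registrations, but it is enough to diff the model's required ops against what the resolver provides before flashing the target.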
Hi @javierFerreroM! Attached are similar issues for reference. Feel free to test the documentation from the tflite-micro repo and post on that repo for further assistance. Thank you!
This issue has been automatically marked as stale because it has no recent activity. It will be closed if no further activity occurs. Thank you.
Closing as stale. Please reopen if you'd like to work on this further. |
@tensorflow/micro
System information
Describe the problem
In the following lines I describe the steps I followed:
As seen in the message, dims are correct, type input is correct (Float) and everything else seems to be working fine.
Prior to this issue, a similar fault appeared: the op code EXP was requested. That one is already available in the TensorFlow Git repository and has already been implemented (exp.cc, exp_test.cc, exp.h, AddExp() included in the AllOps file, and the makefile modified).
The same procedure was applied to a stock MobileNetV2 with no modifications, and it worked fine.
I have two main questions:
Please provide the exact sequence of commands/steps when you ran into the problem
Model configuration in Python (TensorFlow + Keras API):

```python
model = MobileNetV2(include_top=False, weights=None,
                    input_tensor=Input(shape=(512, 640, channels), dtype='float32'),
                    pooling=None, classes=len(class_labels))
last = Conv2D(filters=5, kernel_size=3, padding='valid', strides=(1, 1),
              activation='softmax',
              input_shape=model.layers[-1].output.shape)(model.layers[-1].output)
model = Model(model.input, last)
```
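For reference, the tensor shapes this configuration implies can be sketched with a few lines of arithmetic. This assumes the standard 32x total downsampling of MobileNetV2 with include_top=False and the usual 'valid' convolution formula; exact shapes may vary by model version:

```python
def conv_out(size: int, kernel: int, stride: int = 1) -> int:
    """Output size of a 'valid' convolution: floor((size - kernel) / stride) + 1."""
    return (size - kernel) // stride + 1

# MobileNetV2 backbone reduces the 512x640 input by a factor of 32.
h, w = 512 // 32, 640 // 32                  # feature map entering the final Conv2D
out_h, out_w = conv_out(h, 3), conv_out(w, 3)
print((out_h, out_w))  # → (14, 18), spatial size of the 5-channel softmax output
```

These are the numbers one would expect the "Dim input"-style debug prints to show for the corresponding tensors on the device.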
Model Conversion from Tensorflow to Tensorflow Lite:
```python
converter = tf.lite.TFLiteConverter.from_keras_model(loaded_model)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
model_no_quant_tflite = converter.convert()
```
Model Conversion from Tensorflow Lite to Tensorflow Tiny:
```shell
xxd -i model_mobilenet_sliding2.tflite > model.cc
```
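For environments without xxd, the same conversion to a C array can be sketched in a few lines of Python. This is an illustrative stand-in (variable naming and line wrapping differ from real `xxd -i` output):

```python
def to_c_array(data: bytes, name: str) -> str:
    """Render raw bytes as a C unsigned char array, similar in spirit to `xxd -i`."""
    body = ", ".join(f"0x{b:02x}" for b in data)
    return (f"unsigned char {name}[] = {{ {body} }};\n"
            f"unsigned int {name}_len = {len(data)};\n")

# Example with a dummy 4-byte payload instead of a real .tflite file.
print(to_c_array(b"\x00\x01\x02\x03", "model_tflite"))
```

In practice one would read the .tflite file with `open(path, "rb").read()` and write the result to model.cc.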
Model loading in Tensorflow Tiny (C++):
```cpp
namespace {
tflite::ErrorReporter* error_reporter = nullptr;
const tflite::Model* model = nullptr;
// This pulls in all the operation implementations we need
tflite::AllOpsResolver resolver;
constexpr int kTensorArenaSize = 4 * 1024 * 1024;
uint8_t tensor_arena[kTensorArenaSize];
uint8_t* img = nullptr;
uint64_t timeImage;
uint64_t timeInvoke;
TfLiteTensor* model_input = nullptr;
TfLiteTensor* model_output = nullptr;
}  // namespace

// Set up logging (modify tensorflow/lite/micro/debug_log.cc)
static tflite::MicroErrorReporter micro_error_reporter;
error_reporter = &micro_error_reporter;

// Say something to test the error reporter
error_reporter->Report("STM32 Tensorflow Lite test");

model = ::tflite::GetModel(model_mobilenet_sliding2_tflite);
cout << "Model working" << endl;

tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                     kTensorArenaSize, &micro_error_reporter);
cout << "Interpreter working" << endl;

interpreter.AllocateTensors();
cout << "Allocate working" << endl;

model_input = interpreter.input(0);

// Get image from provider.
cout << "Type input: " << model_input->type << endl;
cout << "Bytes input: " << model_input->bytes << endl;
cout << "Size input: " << model_input->dims->size << endl;
cout << "Dim input 0: " << model_input->dims->data[0] << endl;
cout << "Dim input 1: " << model_input->dims->data[1] << endl;
cout << "Dim input 2: " << model_input->dims->data[2] << endl;
cout << "Dim input 3: " << model_input->dims->data[3] << endl;
```
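The "Bytes input" value printed above can be sanity-checked against the model's input shape with simple arithmetic. The channel count below is an assumption (the issue never states it):

```python
def tensor_bytes(shape, dtype_size=4):
    """Byte size of a dense tensor; 4 bytes per element for float32 (kTfLiteFloat32)."""
    n = 1
    for d in shape:
        n *= d
    return n * dtype_size

channels = 1  # assumed, not stated in the issue
print(tensor_bytes((1, 512, 640, channels)))  # → 1310720 bytes for a float32 NHWC input
```

Even with a single channel, the input tensor alone is about 1.3 MB, roughly a third of the 4 MiB tensor arena, which is worth keeping in mind once the SUM kernel issue is resolved.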