Select ops are not supported in TFlite interpreter #2133
Comments
|
ERROR: Select TensorFlow op(s), included in the given model, is(are) not supported by this interpreter. Make sure you apply/link the Flex delegate before inference. For the Android, it can be resolved by adding "org.tensorflow:tensorflow-lite-select-tf-ops" dependency. See instructions: https://www.tensorflow.org/lite/guide/ops_select
ERROR: Node number 1108 (FlexErf) failed to prepare.
This looks like a TensorFlow error, not an NNTrainer error. Does it work normally on your local computer? |
@DonghakPark Thanks for the pointers. |
I will test in my local environment ASAP and share the results with you! Are you using tf.function (with signature) on custom ops? |
@DonghakPark
We don't do anything special there. |
@DonghakPark |
@KirillP2323 |
We changed our backbone so it doesn't use the Erf function, and no custom ops are required. However, we encountered another problem while running SimpleShot with our backbone:
After some digging, I found that the error is thrown during this step of model initialization: https://github.com/nnstreamer/nntrainer/blob/main/nntrainer/graph/network_graph.cpp#L878 -> So it's getting stuck on |
As the error message says, it looks like a dimension issue.
Please check this, and if you still have a problem with the example, please let me know. |
@DonghakPark Thanks for the suggestions! |
Yes. If you have to change the input to 3x228x228, then the input_shape needs to be 228:228:3. It needs to be NHWC because tflite uses the NHWC format. If your backbone model was tested in NCHW, then you can define input_shape as 3:228:228 in NCHW format. NNTrainer compares the input dimension against the one provided by the tflite API, interpreter->inputs().
The error message is due to the mismatch between the tflite dimension and the nntrainer input dimension. What is the error message? Could you share it with us? Does it produce the same error? |
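To illustrate the NHWC mapping described above: a small hypothetical helper (the function name is my own, not part of NNTrainer) that turns a channels-first 3x228x228 shape into the colon-separated, channels-last string that NNTrainer's input_shape property expects.

```python
def chw_to_nntrainer_shape(c, h, w):
    """Map a CHW shape (e.g. 3x228x228) to NNTrainer's H:W:C string.

    Illustrative only: NNTrainer takes the shape as a colon-separated
    string in NHWC order, per the comment above.
    """
    return f"{h}:{w}:{c}"


print(chw_to_nntrainer_shape(3, 228, 228))  # → 228:228:3
```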
The error message is the same, unfortunately:
There is a log from log_nntrainer_***.out, in case it's useful:
|
We added code to produce more information about the input dimension mismatch in #2151. Could you run your application once again to see which dimension does not match? |
@jijoongmoon Great, that helped, thank you! I changed the
So the only option to match the dimension is to change the batch size of my tflite model from 16 to 1. Now I'm having another problem: the accuracy of the model is 20%, which is much lower than expected. I'll work on debugging that issue. Closing the bug, since the initial issue is resolved. |
Have you tried changing the batch size in {"batch_size=1", "epochs=1"} in task_runner.cpp? |
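To make the batch mismatch discussed above concrete, here is an illustrative pure-Python sketch (the names are my own, not NNTrainer's actual diagnostics) that compares the input shape reported by a tflite model against the shape NNTrainer was configured with, in the spirit of the extra logging added in #2151.

```python
def mismatched_axes(tflite_shape, nntrainer_shape):
    """Return (axis, tflite_dim, nntrainer_dim) for every axis that differs."""
    return [
        (i, a, b)
        for i, (a, b) in enumerate(zip(tflite_shape, nntrainer_shape))
        if a != b
    ]


# A batch-16 tflite model vs. NNTrainer configured with batch_size=1
# differs only on axis 0 (the batch axis):
print(mismatched_axes((16, 228, 228, 3), (1, 228, 228, 3)))  # → [(0, 16, 1)]
```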
@jijoongmoon Yes, I tried that, and it didn't change the |
Hello, I'm opening the issue because our model needs the |
For NNtrainer logs, you can check comment 1, and here are logs from my tflite conversion:
|
@KirillP2323 OK, then I have some questions.
|
@DonghakPark Yes, x86_64 and Ubuntu 20.04 |
@KirillP2323 |
Great, the application runs with Gelu model after following the instructions in #2193. Is it possible to add support for this to the installation process on the main branch? |
Yes. I will make some scripts with a meson option. |
I will close this issue. |
I'm running a SimpleShot app with our custom ViT-based tflite backbone. I get the following error:
Here is our code where we transform our model to tflite with select ops:
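The actual conversion code is not reproduced in this thread, but a typical Select-TF-ops export looks roughly like the sketch below. This is an assumption about the setup, not the project's code: the tiny Keras model is a stand-in for the real ViT backbone, and enabling SELECT_TF_OPS is what lets ops such as Erf be emitted as Flex ops (e.g. FlexErf) for the Flex delegate to run.

```python
import tensorflow as tf

# Stand-in for the real ViT backbone (assumption, for illustration only).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # native TFLite kernels
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF kernels via Flex
]

tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

A model exported this way only loads in an interpreter that has the Flex delegate linked in, which is exactly the error at the top of this issue.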