
Didn't find op for builtin opcode 'CAST' version '1' #48720

Closed
RJRP44 opened this issue Apr 23, 2021 · 6 comments
Assignees
Labels
comp:micro Related to TensorFlow Lite Microcontrollers stat:awaiting response Status - Awaiting response from author TF 2.4 for issues related to TF 2.4 type:bug Bug

Comments

@RJRP44

RJRP44 commented Apr 23, 2021

@tensorflow/micro

System information

  • Host OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10, Platformio
  • TensorFlow installed from (source or binary): Platformio Library
  • Tensorflow version (commit SHA if source): TensorFlowLite_ESP32 0.9.0, model done on tensorflow 2.4
  • Target platform (e.g. Arm Mbed OS, Arduino Nano 33 etc.): ESP32

Describe the problem

When I try to run a custom model I get this error:

Didn't find op for builtin opcode 'CAST' version '1'

Failed to get registration from op code  d

AllocateTensors() failed
Guru Meditation Error: Core  1 panic'ed (LoadProhibited). Exception was unhandled.
Core 1 register dump:
PC      : 0x400d17f4  PS      : 0x00060130  A0      : 0x800e5830  A1      : 0x3ffb1f80  
A2      : 0x3ffd7c70  A3      : 0x00000000  A4      : 0x000003e8  A5      : 0x3ffc03b8  
A6      : 0x00000008  A7      : 0x00000001  A8      : 0x800d17e3  A9      : 0x3ffb1f70  
A10     : 0x00000000  A11     : 0x447a0000  A12     : 0x3ffc044c  A13     : 0x3ffc1470  
A14     : 0x7f800000  A15     : 0x447a0000  SAR     : 0x0000001f  EXCCAUSE: 0x0000001c  
EXCVADDR: 0x00000004  LBEG    : 0x400014fd  LEND    : 0x4000150d  LCOUNT  : 0xffffffff  

ELF file SHA256: 0000000000000000

Backtrace: 0x400d17f4:0x3ffb1f80 0x400e582d:0x3ffb1fb0 0x4008627e:0x3ffb1fd0

Thanks

@RJRP44 RJRP44 added the comp:micro Related to TensorFlow Lite Microcontrollers label Apr 23, 2021
@RJRP44
Author

RJRP44 commented Apr 24, 2021

TfLiteRegistration Register_CAST doesn't exist. How can I do this without CAST?

Thanks

@tilakrayal tilakrayal added the TF 2.4 for issues related to TF 2.4 label Apr 26, 2021
@tilakrayal
Contributor

@RJRP44 ,

In order to reproduce the issue reported here, could you please provide the complete code and the dataset you are using? Thanks!

@tilakrayal tilakrayal added type:bug Bug stat:awaiting response Status - Awaiting response from author labels Apr 26, 2021
@RJRP44
Author

RJRP44 commented Apr 26, 2021

I used the Arduino AI tutorial.
My dataset is a custom one in 2 CSV files:
in.csv
out.csv

My code is messy, I know.
The inputs of my model are normally all 64 pixels of an AMG8833 (temperature sensor array) over 20 frames, and as output I need to get the direction of the movement (like a hand).

import tensorflow as tf
import pandas as pd
import numpy as np

print("TensorFlow version : ", tf.__version__)

DIRECTIONS = [
    "in",
    "out",
]

SAMPLES_PER_DIRECTION = 20

NUM_DIRECTIONS = len(DIRECTIONS)

ONE_HOT_ENCODED_DIRECTIONS = np.eye(NUM_DIRECTIONS)

inputs = []
outputs = []

# read each csv file and push an input and output
for direction_index in range(NUM_DIRECTIONS):
    direction = DIRECTIONS[direction_index]
    print(f"Processing index {direction_index} for direction '{direction}'.")

    output = ONE_HOT_ENCODED_DIRECTIONS[direction_index]

    df = pd.read_csv("input/" + direction + ".csv")

    # calculate the number of gesture recordings in the file
    num_recordings = int(df.shape[0] / SAMPLES_PER_DIRECTION)

    print(f"\tThere are {num_recordings} recordings of the {direction} direction.")

    print(df)

    for i in range(num_recordings):
        tensor = []
        for j in range(SAMPLES_PER_DIRECTION):
            index = i * SAMPLES_PER_DIRECTION + j

            # append all 64 pixel columns (p1..p64) for this sample
            tensor += [df[f'p{k}'][index] for k in range(1, 65)]

        inputs.append(tensor)
        outputs.append(output)

# convert the list to numpy array
inputs = np.array(inputs)
outputs = np.array(outputs)

print("Data set parsing and preparation complete.")

num_inputs = len(inputs)
randomize = np.arange(num_inputs)
np.random.shuffle(randomize)

# Swap the consecutive indexes (0, 1, 2, etc) with the randomized indexes
inputs = inputs[randomize]
outputs = outputs[randomize]

TRAIN_SPLIT = int(0.8 * num_inputs)
TEST_SPLIT = int(0.2 * num_inputs + TRAIN_SPLIT)

inputs_train, inputs_test, inputs_validate = np.split(inputs, [TRAIN_SPLIT, TEST_SPLIT])
outputs_train, outputs_test, outputs_validate = np.split(outputs, [TRAIN_SPLIT, TEST_SPLIT])

print("Data set randomization and splitting complete.")


model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(50, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(15, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(2, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

model.fit(inputs_train, outputs_train, epochs=100, validation_data=(inputs_validate, outputs_validate))

predictions = model.predict(inputs_test)

# print the predictions and the expected outputs
print("predictions =\n", np.round(predictions, decimals=3))
print("actual =\n", outputs_test)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]  # , tf.lite.OpsSet.SELECT_TF_OPS]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
tfmodel = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tfmodel)
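Not from the thread, but a host-side check that may help with this kind of error: before flashing, you can list which builtin ops the converter actually emitted, so each one can be registered on the microcontroller side. This is a sketch under assumptions: the tiny stand-in model below is hypothetical (substitute your own converted model), and _get_ops_details() is a private helper of the TF Python Interpreter, so it may change between versions.

```python
import tensorflow as tf

# Hypothetical stand-in model; replace with your own Keras model
# or load an existing .tflite file via model_path= instead.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1280,)),
    tf.keras.layers.Dense(2, activation='softmax'),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Load the converted flatbuffer in the Python interpreter and list its ops.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
op_names = sorted({d['op_name'] for d in interpreter._get_ops_details()})
print(op_names)
```

Every op printed here must have a matching Add/Register call on the ESP32 side, so this tells you up front whether something like CAST slipped in during conversion.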

I converted my model.tflite to a header file using xxd -i model.tflite > model_header.cc
In my ESP32 code I tried to add CAST with resolver.AddBuiltin(tflite::BuiltinOperator_CAST, tflite::ops::micro::Register_CAST); but tflite::ops::micro::Register_CAST doesn't exist.

#include <TensorFlowLite_ESP32.h>
#include <Arduino.h>

#include "main_functions.h"

#include "constants.h"
#include "output_handler.h"
#include "model_data.h"
#include "tensorflow/lite/experimental/micro/kernels/micro_ops.h"
#include "tensorflow/lite/experimental/micro/micro_error_reporter.h"
#include "tensorflow/lite/experimental/micro/micro_interpreter.h"
#include "tensorflow/lite/experimental/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"

#include <Melopero_AMG8833.h>

Melopero_AMG8833 sensor;

namespace
{
  tflite::ErrorReporter *error_reporter = nullptr;
  const tflite::Model *model = nullptr;
  tflite::MicroInterpreter *interpreter = nullptr;
  TfLiteTensor *input = nullptr;
  TfLiteTensor *output = nullptr;
  int inference_count = 0;

  const char *GESTURES[] = {
      "in",
      "out"};

#define NUM_GESTURES (sizeof(GESTURES) / sizeof(GESTURES[0]))

  constexpr int kTensorArenaSize = 90 * 1024;
  uint8_t tensor_arena[kTensorArenaSize];
}

void setup()
{

  Serial.begin(9600);

  static tflite::MicroErrorReporter micro_error_reporter;
  error_reporter = &micro_error_reporter;

  model = tflite::GetModel(model_data);
  if (model->version() != TFLITE_SCHEMA_VERSION)
  {
    error_reporter->Report(
        "Model provided is schema version %d not equal "
        "to supported version %d.",
        model->version(), TFLITE_SCHEMA_VERSION);
    return;
  }

  static tflite::MicroMutableOpResolver resolver;

  // resolver.AddBuiltin(tflite::BuiltinOperator_CAST ,tflite::ops::micro::Register_CAST);

  resolver.AddBuiltin(tflite::BuiltinOperator_FULLY_CONNECTED, tflite::ops::micro::Register_FULLY_CONNECTED());

  resolver.AddBuiltin(tflite::BuiltinOperator_SOFTMAX, tflite::ops::micro::Register_SOFTMAX());

  static tflite::MicroInterpreter static_interpreter(model, resolver, tensor_arena, kTensorArenaSize, error_reporter);
  interpreter = &static_interpreter;

  TfLiteStatus allocate_status = interpreter->AllocateTensors();
  if (allocate_status != kTfLiteOk)
  {
    error_reporter->Report("AllocateTensors() failed");
    return;
  }

  input = interpreter->input(0);
  output = interpreter->output(0);

  inference_count = 0;

  int statusCode = sensor.resetFlagsAndSettings();
  statusCode = sensor.setFPSMode(FPS_MODE::FPS_10);
}

const int numSamples = 20;
int samplesRead = numSamples;

void loop()
{

  int statusCode = sensor.updateThermistorTemperature();
  statusCode = sensor.updatePixelMatrix();

  for (int x = 0; x < 8; x++)
  {
    for (int y = 0; y < 8; y++)
    {
      if (sensor.pixelMatrix[y][x] > sensor.thermistorTemperature)
      {
        samplesRead = 0;
      }
    }
  }

  while (samplesRead < numSamples)
  {
    int statusCode = sensor.updateThermistorTemperature();
    statusCode = sensor.updatePixelMatrix();
    for (int x = 0; x < 8; x++)
    {
      for (int y = 0; y < 8; y++)
      {
        byte pixel = sensor.pixelMatrix[y][x] > sensor.thermistorTemperature;

        input->data.f[samplesRead * 64 + x * 8 + y] = pixel;
      }
    }
    samplesRead++;

    if (samplesRead == numSamples)
    {

      TfLiteStatus invokeStatus = interpreter->Invoke();
      if (invokeStatus != kTfLiteOk)
      {
        Serial.println("Invoke failed!");
        while (1)
          ;
        return;
      }

      for (int i = 0; i < NUM_GESTURES; i++)
      {
        Serial.print(GESTURES[i]);
        Serial.print(": ");
        Serial.println(output->data.f[i], 6);
      }
      Serial.println();
    }
  }
}

Thanks!
(Sorry, my English is bad; I am French.)

@tilakrayal tilakrayal removed the stat:awaiting response Status - Awaiting response from author label Apr 26, 2021
@tilakrayal tilakrayal assigned rmothukuru and unassigned tilakrayal Apr 26, 2021
@rmothukuru rmothukuru added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Apr 26, 2021
@rmothukuru rmothukuru assigned terryheo and unassigned rmothukuru Apr 26, 2021
@mohantym mohantym self-assigned this Sep 27, 2022
@mohantym
Contributor

mohantym commented Sep 27, 2022

Hi @RJRP44 !
Could you test with the instructions from the TFLite-micro-arduino repo?
A workaround is to add the op through the OpResolver in the C++ code.

static tflite::MicroMutableOpResolver<1> micro_op_resolver;  // template argument = number of ops registered through this resolver
micro_op_resolver.AddCast();  // register the CAST op

Reference.

Thank you!

@mohantym mohantym added stat:awaiting response Status - Awaiting response from author and removed stat:awaiting tensorflower Status - Awaiting response from tensorflower labels Sep 27, 2022
@terryheo terryheo assigned advaitjain and unassigned terryheo Sep 27, 2022
@advaitjain
Member

I'm going to close this bug since it is quite old. Please feel free to create a new one at https://github.com/tensorflow/tflite-micro

