
Breaking change of TFLite model definition in TensorFlow 2.4.0: OperatorCode.BuiltinCode #46663

Closed
zhenhuaw-me opened this issue Jan 25, 2021 · 5 comments
Labels: comp:lite (TF Lite related issues), stat:awaiting tensorflower (Status - Awaiting response from tensorflower), TF 2.4 (for issues related to TF 2.4), type:bug (Bug)

Comments

zhenhuaw-me (Contributor) commented Jan 25, 2021

(source issue: zhenhuaw-me/tflite#9)

This change breaks software stacks that depend on the generated TFLite model parser, e.g. TVM, tflite2onnx, etc.

Starting from TensorFlow 2.4.0, BuiltinOperator is stored as int32 instead of byte. As a result, OperatorCode.builtin_code is now an int32 too, and code like op_code.BuiltinCode() breaks.

// An OperatorCode can be an enum value (BuiltinOperator) if the operator is a
// builtin, or a string if the operator is custom.
table OperatorCode {
  // This field is for backward compatibility. This field will be used when
  // the value of the extended builtin_code field is less than
  // BuiltinOperator_PLACEHOLDER_FOR_GREATER_OP_CODES.
  deprecated_builtin_code:byte;
  custom_code:string;

  // The version of the operator. The version needs to be bumped whenever new
  // parameters are introduced into an op.
  version:int = 1;

  // This field is introduced to resolve the op builtin code shortage problem
  // (the original BuiltinOperator enum field was represented as a byte).
  // This field will be used when the value of the extended builtin_code field
  // is greater than BuiltinOperator_PLACEHOLDER_FOR_GREATER_OP_CODES.
  builtin_code:BuiltinOperator;
}

For any code that uses op_code.BuiltinCode(), a workaround like the one below (or this PR) is needed.

if op_code.BuiltinCode() < BuiltinOperator.PLACEHOLDER_FOR_GREATER_OP_CODES:
    opc = op_code.DeprecatedBuiltinCode()
else:
    opc = op_code.BuiltinCode()
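The workaround above can be wrapped into a small reusable helper. The sketch below is self-contained for illustration only: the OperatorCode stub and the hard-coded placeholder value (127 in the 2.4.0 schema) stand in for the flatbuffers-generated classes from the tflite Python package.

```python
# Value of BuiltinOperator.PLACEHOLDER_FOR_GREATER_OP_CODES in the 2.4.0 schema.
PLACEHOLDER_FOR_GREATER_OP_CODES = 127


class OperatorCode:
    """Minimal stand-in for the generated tflite.OperatorCode class."""

    def __init__(self, deprecated_builtin_code, builtin_code):
        self._deprecated = deprecated_builtin_code
        self._builtin = builtin_code

    def DeprecatedBuiltinCode(self):
        return self._deprecated

    def BuiltinCode(self):
        return self._builtin


def resolve_builtin_code(op_code):
    """Return the effective builtin operator code for both old and new models."""
    if op_code.BuiltinCode() < PLACEHOLDER_FOR_GREATER_OP_CODES:
        # Old-style model: the extended field is unset (defaults to 0), so
        # fall back to the deprecated byte-sized field.
        return op_code.DeprecatedBuiltinCode()
    return op_code.BuiltinCode()


# An op below the placeholder boundary resolves via the deprecated field ...
print(resolve_builtin_code(OperatorCode(deprecated_builtin_code=3, builtin_code=0)))      # 3
# ... while an extended op (>= 127) resolves via the new int32 field.
print(resolve_builtin_code(OperatorCode(deprecated_builtin_code=127, builtin_code=130)))  # 130
```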
abattery (Contributor) commented Jan 25, 2021

Sorry that you ran into this issue on your side. Since the TFLite builtin operator set keeps growing, we unfortunately had to roll out this change. We provide the following helpers to adapt to it:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema_conversion_utils.cc
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema_utils.cc
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/python/schema_util.py

Please take a look at them; I hope they help resolve your issue.

Here is a link describing the background of the behavior change: tensorflow/community#285
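The linked helpers boil down, roughly, to taking the larger of the two fields: the deprecated byte field is capped at the placeholder value, while the extended field defaults to 0 in models written before 2.4.0. A minimal sketch of that idea (the function name and signature are illustrative, not the actual schema_utils API):

```python
def get_builtin_code(deprecated_builtin_code, builtin_code):
    # The deprecated byte field is capped at the placeholder value (127),
    # while builtin_code defaults to 0 in pre-2.4.0 models, so the larger
    # of the two always carries the real operator code.
    return max(deprecated_builtin_code, builtin_code)


print(get_builtin_code(3, 0))      # old-style model: deprecated field wins -> 3
print(get_builtin_code(127, 150))  # extended op: builtin_code wins -> 150
```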

@abattery abattery added the comp:lite TF Lite related issues label Jan 26, 2021
zhenhuaw-me (Contributor, Author) commented

@abattery Thank you for the details! I understand this is hard to avoid as the system evolves... So by design, any system that relies on a TFLite model (or the model representation) should wrap its calls to BuiltinCode like this, using the example helpers you listed? If that's the case, maybe we can do something (maybe this) in tflite (a Python parser package) to avoid such changes rippling through the software stack.
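A hypothetical sketch of that suggestion: override the generated BuiltinCode() method once inside the wrapper package, so downstream callers keep the pre-2.4.0 contract unchanged. The OperatorCode stub below stands in for the flatbuffers-generated class; the patching pattern and names are illustrative, not the actual tflite package code.

```python
# Value of BuiltinOperator.PLACEHOLDER_FOR_GREATER_OP_CODES in the 2.4.0 schema.
PLACEHOLDER_FOR_GREATER_OP_CODES = 127


class OperatorCode:
    """Stand-in for the flatbuffers-generated tflite.OperatorCode class."""

    def __init__(self, deprecated, builtin):
        self._deprecated = deprecated
        self._builtin = builtin

    def DeprecatedBuiltinCode(self):
        return self._deprecated

    def BuiltinCode(self):
        return self._builtin


# Patch step: wrap the generated accessor once (e.g. at import time of the
# wrapper package) so existing callers keep working without any changes.
_generated_builtin_code = OperatorCode.BuiltinCode


def _compat_builtin_code(self):
    if _generated_builtin_code(self) < PLACEHOLDER_FOR_GREATER_OP_CODES:
        # Old-style model: the extended field is unset, use the byte field.
        return self.DeprecatedBuiltinCode()
    return _generated_builtin_code(self)


OperatorCode.BuiltinCode = _compat_builtin_code

print(OperatorCode(3, 0).BuiltinCode())    # old-style model resolves to 3
print(OperatorCode(127, 130).BuiltinCode())  # extended op resolves to 130
```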

@Saduf2019 Saduf2019 added the TF 2.4 for issues related to TF 2.4 label Jan 27, 2021
@Saduf2019 Saduf2019 assigned ymodak and unassigned Saduf2019 Jan 27, 2021
@ymodak ymodak added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Jan 27, 2021
@ymodak ymodak assigned abattery and unassigned ymodak Jan 27, 2021
abattery (Contributor) commented Jan 27, 2021

@jackwish We would like to do that, but we depend on the flatbuffer library to generate the interfaces via codegen, so it is very tricky.

Maybe we could wrap the codegen output by introducing a new layer. However, that would require a ton of refactoring, and it may not reflect the actual model structure as well.

Thanks for the suggestion!

zhenhuaw-me (Contributor, Author) commented Jan 28, 2021

@abattery Thanks for your reply. Refactoring for such a case seems avoidable; I can work around it in the Python parsing package (done in v2.4.0).

Now closing as I have got the answer. Thanks!


zhenhuaw-me added a commit to zhenhuaw-me/tflite that referenced this issue Nov 12, 2022
`tflite.OperatorCode.BuiltinCode()`: maintains API compatibility in 2.4.0.
See these threads:
* #9
* #10
* tensorflow/tensorflow#46663