
Supporting control flow in TensorFlow Lite #28485

Closed
miaout17 opened this issue May 7, 2019 · 44 comments
Labels: comp:lite TF Lite related issues

miaout17 (Contributor) commented May 7, 2019

This is a tracking ticket for supporting generalized control flow in TensorFlow Lite. See also the RFC.

At this moment, if you see missing ops like "Switch", "Merge", "Enter", "Exit", or "NextIteration" when converting a TensorFlow model to TensorFlow Lite, it means the graph contains control flow, and there is currently no way to convert it.

We're working hard to enable this feature. Updates will be posted here.
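
For context, any TF1-style conditional or loop lowers to these primitive ops. A minimal sketch (assuming TF 1.x; names are illustrative) that reproduces the missing-op failure:

import tensorflow as tf  # TF 1.x

g = tf.Graph()
with g.as_default():
    x = tf.placeholder(tf.float32, [1], name="x")
    flag = tf.placeholder(tf.bool, [], name="flag")
    # A v1 tf.cond lowers to Switch/Merge nodes in the GraphDef.
    y = tf.cond(flag, lambda: x * 2.0, lambda: x + 1.0, name="y")

with tf.Session(graph=g) as sess:
    converter = tf.lite.TFLiteConverter.from_session(sess, [x, flag], [y])
    tflite_model = converter.convert()  # fails: Switch/Merge unsupported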

Dayananda-V (Contributor)

@miaout17

When can we expect control flow support in TensorFlow Lite?

nikoliazekter commented Jul 10, 2019

Is this really hard to implement? To me it looks like a very critical issue, and the lack of attention seems quite odd. How else am I supposed to deploy my model to a mobile device? Are there any workarounds?

EDIT: I see that the while op is registered here; can I somehow use it?

nikoliazekter

@miaout17 There were several commits recently, like eedf79e, but no updates were posted in this thread. Can you tell us when all these features will be available and in which TensorFlow version (only 2.0, I assume)?

jackyLens

I met the same problem. It showed that the "Switch" and "Merge" operations inside batch normalization (BN) are not supported. However, when I changed is_training from tf.bool to a tf.Variable, the conversion went through and the tflite model was produced successfully.
I do not know the reason, but it runs well in my program.
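
A plausible explanation: when the training flag is a Python constant (or otherwise resolvable at graph-construction time) instead of a tf.bool placeholder, the batch-norm layer picks a single branch up front and never emits a tf.cond, so no Switch/Merge ops land in the graph. A minimal sketch of an inference-only graph along those lines (assuming TF 1.x):

import tensorflow as tf  # TF 1.x

inputs = tf.placeholder(tf.float32, [None, 28, 28, 3], name="inputs")
# A Python bool (rather than a tf.bool placeholder) lets the layer choose
# the training/inference branch at graph-construction time, so no
# Switch/Merge control flow ops are emitted.
net = tf.layers.batch_normalization(inputs, training=False)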

gxmdavid

@jdduke Any updates on when the other control flow items will be available? Specifically Merge.

ohjerm commented Oct 30, 2019

@miaout17 Any updates on this? It's been six months with no news.

git-hamza

Any updates?

haycuoi1007

I had the same problem. Looking forward to the update!

sunzhe09 commented Nov 4, 2019

I met the same problem

aviaisr commented Nov 27, 2019

I had the same issue.

2019-11-27 09:27:26.184856: F .\tensorflow/lite/toco/toco_tooling.h:38] Check failed: s.ok() Found StridedSlice as non-selected output from Switch, but only Merge supported. Control flow ops like Switch and Merge are not generally supported. We are working on fixing this, please see the Github issue at #28485.
Fatal Python error: Aborted

aviaisr commented Nov 28, 2019

I had the same issue when I tried to convert SSDLite MobileNet to tflite.
Any updates?

@terryheo terryheo added the comp:lite TF Lite related issues label Dec 3, 2019
jackyLens

I met the same problem

shouhu666

F ./tensorflow/lite/toco/toco_tooling.h:38] Check failed: s.ok() Found StridedSlice as non-selected output from Switch, but only Merge supported. Control flow ops like Switch and Merge are not generally supported. We are working on fixing this, please see the Github issue at #28485.
Is it fixed now?

j20232 commented Dec 24, 2019

I met the same problem. Looking forward to the update.

2696120622

I used my converted tflite model for inference with the following code:
interpreter.allocate_tensors()
Then I got the following error:
RuntimeError: Encountered unresolved custom op: Enter. Node number 8 (Enter) failed to prepare
Is this caused by unsupported control flow? If so, which layers does the Enter control flow op come from?

haozha111 (Contributor)

In reply to @2696120622's question above:

Which model are you trying to convert? Enter is a control flow v1 op, which the new TFLite converter doesn't support. Can you build the model with TF 2 so that it contains v2-style control flow?
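
For reference, a minimal sketch of v2-style control flow that the new converter is meant to handle (assuming a recent TF 2.x with the MLIR converter): inside tf.function, AutoGraph lowers a Python loop over tensors to a single functional While op rather than Enter/Exit/Switch/Merge:

import tensorflow as tf  # TF 2.x

@tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
def count_down(x):
    # AutoGraph turns this into one functional While op (control flow v2).
    while x > 0.0:
        x = x - 1.0
    return x

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [count_down.get_concrete_function()])
tflite_model = converter.convert()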

2696120622

In reply to @haozha111's question above:

@haozha111 I am trying to convert EfficientDet (https://github.com/xuannianz/EfficientDet), which is based on fizyr/keras-retinanet (https://github.com/fizyr/keras-retinanet). Does TF 2 support Enter, Exit, Switch, and Merge?
Thanks.

haozha111 (Contributor)

In reply to @2696120622's follow-up ("Sure. What is your e-mail address? And which release should I upgrade to? Thanks a lot."):

Rather than making this thread longer, can you please open a new GitHub issue and attach your current error messages, your original TF model, and the reproduction steps there? That will help us better triage the issue. Thanks.

bazako commented Jan 22, 2020

I met the same problem. I am using a batch normalization layer (tf.layers.batch_normalization).

2020-01-22 15:33:09.575787: F ./tensorflow/lite/toco/toco_tooling.h:38] Check failed: s.ok() Found Mul as non-selected output from Switch, but only Merge supported. Control flow ops like Switch and Merge are not generally supported. We are working on fixing this, please see the Github issue at #28485.

haozha111 (Contributor)

In reply to @bazako above:
You need the new MLIR converter to convert models with control flow. Please update your TensorFlow version to the latest and then enable the new converter (converter.experimental_new_converter = True).
#28485 (comment)
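
For anyone following along, a minimal sketch of that flag on a TF 1.15 frozen graph (the path and tensor names are placeholders):

import tensorflow as tf  # TF 1.15+

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "model.pb",               # placeholder: path to your frozen graph
    input_arrays=["inputs"],  # placeholder: your input tensor names
    output_arrays=["outputs"])
converter.experimental_new_converter = True  # opt in to the MLIR converter
tflite_model = converter.convert()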

bazako commented Jan 24, 2020

In reply to @haozha111 above:

Hi!
I am using TF 1.15 to train the net.
To use the new MLIR converter it is necessary to have a SavedModel (.pb file). In TF 1.15 I load the model from a checkpoint and save it as "saved_model.pb". (I don't know how to do this in TF 2.1; any ideas?)

Then in tf 2.1 I use the following:

converter = tf.lite.TFLiteConverter.from_saved_model(save_path)
converter.experimental_new_converter = True
tflite_model = converter.convert()

And the result is:

ValueError: Importing a SavedModel with tf.saved_model.load requires a 'tags=' argument if there is more than one MetaGraph. Got 'tags=None', but there are 0 MetaGraphs in the SavedModel with tag sets []. Pass a 'tags=' argument to load this SavedModel.

And if I check the tags with saved_model_cli._show_tag_sets(model_path), the result is:

The given SavedModel contains the following tag-sets:
(nothing is listed)

In another attempt, I froze the model with tensorflow.python.tools.freeze_graph in TF 2.1 and saved it as "saved_model.pb".

And convert with the same previous code I get:

ValueError: This converter can only convert a single ConcreteFunction. Converting multiple functions is under development.
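
A note on the "0 MetaGraphs" error above: it usually means the SavedModel was exported without the tag the loader looks for. A minimal sketch (assuming TF 1.x; the tiny graph is a placeholder) that exports with the standard serving tag so from_saved_model can find a MetaGraph:

import tensorflow as tf  # TF 1.x

graph = tf.Graph()
with graph.as_default():
    features = tf.placeholder(tf.float32, [None, 23], name="features")
    output = tf.identity(features * 2.0, name="output")

with tf.Session(graph=graph) as sess:
    sig = tf.saved_model.signature_def_utils.predict_signature_def(
        inputs={"features": features}, outputs={"output": output})
    builder = tf.saved_model.builder.SavedModelBuilder("export/v01")
    builder.add_meta_graph_and_variables(
        sess,
        [tf.saved_model.tag_constants.SERVING],  # standard serving tag
        signature_def_map={"serving_default": sig})
    builder.save()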

ragnariock

Any update?

haozha111 (Contributor)

In reply to @bazako's comment above:

That's a bit surprising. How did you export your SavedModel? Could you use the saved_model_cli show tool to dump info about your SavedModel? I think by default there should be a serving graph inside it.
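
For reference, the inspection commands look like this (the path is a placeholder):

saved_model_cli show --dir /path/to/saved_model        # lists tag-sets
saved_model_cli show --dir /path/to/saved_model --all  # dumps MetaGraphs, signatures, and tensor shapes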

bazako commented Feb 8, 2020

In reply to @haozha111 above:

Hi Haoliang!

I followed your steps and other recommendations in the forums, but now I get a strange error:

error: type of return operand 0 ('tensor<*xf32>') doesn't match function result type ('tensor<?x?x?x?xf32>')
error: type of return operand 0 ('tensor<*xf32>') doesn't match function result type ('tensor<?x?x?x?xf32>')
error: type of return operand 0 ('tensor<*xf32>') doesn't match function result type ('tensor<?x?x?x?xf32>')

The code that I use is:

First, to go from checkpoint to SavedModel:

import os
import tensorflow as tf

trained_checkPoint_prefix="model-32000"
trained_checkPoint_dir="nnet/"

loaded_graph = tf.Graph()
tf.compat.v1.enable_resource_variables()
with tf.Session(graph=loaded_graph) as sess:
    loader = tf.train.import_meta_graph(trained_checkPoint_dir+trained_checkPoint_prefix+'.meta')
    loader.restore(sess,trained_checkPoint_dir+trained_checkPoint_prefix)
   
    # Export Checkpoint to SavedModel
    builder = tf.saved_model.builder.SavedModelBuilder(trained_checkPoint_dir+'/v01')
   
    features = loaded_graph.get_tensor_by_name("features:0")
    is_training = loaded_graph.get_tensor_by_name("is_training:0")

    output = loaded_graph.get_tensor_by_name("tower_0/tdnn/batch_normalization_5/batchnorm/add_1:0")

    builder.add_meta_graph_and_variables(sess, ["tag"], signature_def_map= {
    "model": tf.saved_model.signature_def_utils.predict_signature_def(
        inputs= {"features": features, "is_training":is_training},
        outputs= {"finalnode": output})
    })

    builder.save()

Second, convert to TensorFlow Lite in TF 2.1 (or tf-nightly):

import os
import tensorflow as tf

trained_checkPoint_dir="nnet/"
freezeModel = trained_checkPoint_dir + "v01"

# Load the SavedModel.
saved_model_obj = tf.saved_model.load(export_dir=freezeModel,tags='tag')

# Load the specific concrete function from the SavedModel.
concrete_func = saved_model_obj.signatures['model']
print(concrete_func.inputs[0])

# Set the shape of the input in the concrete function.
if concrete_func.inputs[0].name=='features:0':
    concrete_func.inputs[0].set_shape([1,1000,23])
else:
    concrete_func.inputs[1].set_shape([1,1000,23])

# Convert the model to a TFLite model.
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                   tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()

The error that I get is:


ConverterError                            Traceback (most recent call last)
<ipython-input-14-0f34bb0a99ac> in <module>
      4 converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
      5                                        tf.lite.OpsSet.SELECT_TF_OPS]
----> 6 tflite_model = converter.convert()

~/anaconda3/envs/py36tf2.1/lib/python3.6/site-packages/tensorflow_core/lite/python/lite.py in convert(self)
    462         input_tensors=input_tensors,
    463         output_tensors=output_tensors,
--> 464         **converter_kwargs)
    465 
    466     if self._is_calibration_quantize():

~/anaconda3/envs/py36tf2.1/lib/python3.6/site-packages/tensorflow_core/lite/python/convert.py in toco_convert_impl(input_data, input_tensors, output_tensors, enable_mlir_converter, *args, **kwargs)
    455       input_data.SerializeToString(),
    456       debug_info_str=debug_info_str,
--> 457       enable_mlir_converter=enable_mlir_converter)
    458   return data
    459 

~/anaconda3/envs/py36tf2.1/lib/python3.6/site-packages/tensorflow_core/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    201       stdout = _try_convert_to_unicode(stdout)
    202       stderr = _try_convert_to_unicode(stderr)
--> 203       raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr))
    204   finally:
    205     # Must manually cleanup files.

ConverterError: See console for info.
2020-02-05 13:22:17.349230: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-10.0/lib64
2020-02-05 13:22:17.349295: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-10.0/lib64
2020-02-05 13:22:17.349304: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2020-02-05 13:22:17.812215: W tensorflow/compiler/mlir/lite/python/graphdef_to_tfl_flatbuffer.cc:89] Ignored output_format.
2020-02-05 13:22:17.812244: W tensorflow/compiler/mlir/lite/python/graphdef_to_tfl_flatbuffer.cc:95] Ignored drop_control_dependency.
2020-02-05 13:22:17.870309: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-02-05 13:22:17.896653: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3300080000 Hz
2020-02-05 13:22:17.897563: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7efcc41ffdc0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-02-05 13:22:17.897600: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-02-05 13:22:17.902733: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-02-05 13:22:17.922362: E tensorflow/stream_executor/cuda/cuda_driver.cc:351] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2020-02-05 13:22:17.922431: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (invest-rtx2080): /proc/driver/nvidia/version does not exist
error: type of return operand 0 ('tensor<*xf32>') doesn't match function result type ('tensor<?x?x?x?xf32>')
error: type of return operand 0 ('tensor<*xf32>') doesn't match function result type ('tensor<?x?x?x?xf32>')
error: type of return operand 0 ('tensor<*xf32>') doesn't match function result type ('tensor<?x?x?x?xf32>')
Traceback (most recent call last):
  File "/home/investigacion/anaconda3/envs/py36tf2.1/bin/toco_from_protos", line 8, in <module>
    sys.exit(main())
  File "/home/investigacion/anaconda3/envs/py36tf2.1/lib/python3.6/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 93, in main
    app.run(main=execute, argv=[sys.argv[0]] + unparsed)
  File "/home/investigacion/anaconda3/envs/py36tf2.1/lib/python3.6/site-packages/tensorflow_core/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/investigacion/anaconda3/envs/py36tf2.1/lib/python3.6/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/investigacion/anaconda3/envs/py36tf2.1/lib/python3.6/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/investigacion/anaconda3/envs/py36tf2.1/lib/python3.6/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 56, in execute
enable_mlir_converter)
Exception: <unknown>:0: error: type of return operand 0 ('tensor<*xf32>') doesn't match function result type ('tensor<?x?x?x?xf32>')
<unknown>:0: error: type of return operand 0 ('tensor<*xf32>') doesn't match function result type ('tensor<?x?x?x?xf32>')
<unknown>:0: error: type of return operand 0 ('tensor<*xf32>') doesn't match function result type ('tensor<?x?x?x?xf32>')

In this link I saved the checkpoint if you want to check it or reproduce the error.

I also wrote about this error on Stack Overflow, in the next link.

limsijie93

Hi

I tried to convert my model in TF 1.15 into .tflite using the following:

converter = tf.lite.TFLiteConverter.from_frozen_graph(model_pb_path, input_arrays, output_arrays,input_shapes)
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]

However, I encountered the following error:

TensorFlow Lite currently doesn't support control flow ops: Merge, Switch. We are working on supporting control flow ops, please see github issue at https://github.com/tensorflow/tensorflow/issues/28485. Some of the operators in the model are not supported by the standard TensorFlow Lite runtime and are not recognized by TensorFlow. If you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, ARG_MAX, CAST, CONCATENATION, CONV_2D, DIV, EQUAL, EXP, EXPAND_DIMS, FULLY_CONNECTED, GATHER, GATHER_ND, GREATER, GREATER_EQUAL, LESS, LOG, LOGICAL_AND, LOGISTIC, MAXIMUM, MAX_POOL_2D, MINIMUM, MUL, PACK, PAD, PADV2, RANGE, REDUCE_MAX, RESHAPE, RESIZE_NEAREST_NEIGHBOR, ROUND, SHAPE, SOFTMAX, SPARSE_TO_DENSE, SPLIT, SQRT, SQUEEZE, STRIDED_SLICE, SUB, SUM, TOPK_V2, TRANSPOSE_CONV, UNIQUE, WHERE. Here is a list of operators for which you will need custom implementations: DenseToDenseSetOperation.

I'm not calling any DenseToDenseSetOperation explicitly. Is this part of the backend engine used to convert the TF functions into .tflite?
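
DenseToDenseSetOperation is the op behind tf.sets operations (intersection/difference/union), which post-processing code sometimes pulls in indirectly; judging from the error above, it is not covered by SELECT_TF_OPS either. A minimal sketch of the custom-op escape hatch, continuing the snippet above (a matching custom kernel must still be registered with the interpreter at runtime):

# Emit unsupported ops as custom ops in the flatbuffer; the TFLite
# interpreter then needs a registered custom kernel for each of them.
converter.allow_custom_ops = True
tflite_model = converter.convert()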

2696120622 commented Mar 4, 2020

Quoting @haozha111's earlier reply (re: TensorListStack):

It seems that you are falling back to the old converter and not using the new one. This may be because the tensorflow package you installed is a bit outdated. Could you share your model file with me so I can take a look? Thanks. (Alternatively, you can upgrade your tensorflow environment to the latest stable release and then try again.)

@haozha111 I have updated my TF to 2.1. However, I still encounter the same error:

#37003 (comment)

RuntimeError: Encountered unresolved custom op: TensorListFromTensor. Node number 892 (TensorListFromTensor) failed to prepare.
According to your comment:
#28485 (comment)
Maybe I need the new MLIR-based TF Lite converter. How do I get it?
My project is here:

https://drive.google.com/open?id=15i6d9yJ-OAcPROiGEs-RWmG-v6KyQauK
You can train a model with train_test.ipynb and convert it with convert_model_test.ipynb.
Thanks a lot!

sdu2011 commented Apr 1, 2020

When will the control flow ops Merge and Switch be supported?

faizanf47

Any luck?

MichaelJayW

I had the same problem. Looking forward to your update!

MichaelJayW

In my case, I removed the tf.bool variable and upgraded TF from 1.12.0 to 1.14.0; both tflite_convert and loading the model now work!

karimnosseir (Contributor)

With the new converter now the default, TFLite supports control flow v2 ops.
If you have a model with control flow ops, please enable control flow v2:

tf.enable_control_flow_v2()

If you have problems, please file an issue with reproduction steps.

Thanks

e2r-htz commented Sep 3, 2020

tf.enable_control_flow_v2()

I am trying to convert a Tensorflow model to TFLite as follows

tf.enable_control_flow_v2()
converter = tf.lite.TFLiteConverter.from_frozen_graph('model.pb', #TensorFlow freezegraph .pb model file
                                                      input_arrays=['input'], # name of input arrays as defined in torch.onnx.export function before.
                                                      output_arrays=['output']  # name of output arrays defined in torch.onnx.export function before.
                                                      )
tf_lite_model = converter.convert()

but I still get the same error

2020-09-03 11:48:14.360640: F ./tensorflow/lite/toco/toco_tooling.h:38] Check failed: s.ok() Found ResizeBilinear as non-selected output from Switch, but only Merge supported. Control flow ops like Switch and Merge are not generally supported. We are working on fixing this, please see the Github issue at https://github.com/tensorflow/tensorflow/issues/28485. Fatal Python error: Aborted

Using TensorFlow 1.15.

karimnosseir (Contributor)

@e2r-htz You need to call tf.enable_control_flow_v2() not during conversion but while creating/freezing your model.
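
In other words, for a TF 1.x workflow the call has to come before any graph construction, roughly like this (the model code is a placeholder):

import tensorflow as tf  # TF 1.15

tf.enable_control_flow_v2()  # must run before the graph is built

x = tf.placeholder(tf.float32, [1], name="input")
flag = tf.placeholder(tf.bool, [], name="flag")
# With v2 enabled, tf.cond emits a functional If op instead of
# Switch/Merge, which the new converter can handle.
y = tf.cond(flag, lambda: x * 2.0, lambda: x + 1.0, name="output")
# ...train/freeze as usual, then convert the frozen graph.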
