
Cannot convert VGG19 #29124

Closed
marcodelmoral opened this issue May 29, 2019 · 9 comments
Labels: comp:lite (TF Lite related issues), TF 2.0 (Issues relating to TensorFlow 2.0), type:support (Support issues)

Comments

marcodelmoral commented May 29, 2019

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
    Ubuntu 18.04
  • TensorFlow installed from (source or binary): pip
  • TensorFlow version (or github SHA if from source): tf-nightly-2.0-preview 2.0.0.dev20190529

Provide the text output from tflite_convert

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: CONV_2D, FULLY_CONNECTED, MAX_POOL_2D, SOFTMAX. Here is a list of operators for which you will need custom implementations: IdentityN.
Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_2 (InputLayer)         [(None, 256, 256, 3)]     0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 256, 256, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 256, 256, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 128, 128, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 128, 128, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 128, 128, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 64, 64, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 64, 64, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 64, 64, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 64, 64, 256)       590080    
_________________________________________________________________
block3_conv4 (Conv2D)        (None, 64, 64, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 32, 32, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 32, 32, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 32, 32, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 32, 32, 512)       2359808   
_________________________________________________________________
block4_conv4 (Conv2D)        (None, 32, 32, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 16, 16, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 16, 16, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 16, 16, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 16, 16, 512)       2359808   
_________________________________________________________________
block5_conv4 (Conv2D)        (None, 16, 16, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 8, 8, 512)         0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 32768)             0         
_________________________________________________________________
dense_3 (Dense)              (None, 2048)              67110912  
_________________________________________________________________
dropout_2 (Dropout)          (None, 2048)              0         
_________________________________________________________________
dense_4 (Dense)              (None, 1024)              2098176   
_________________________________________________________________
dropout_3 (Dropout)          (None, 1024)              0         
_________________________________________________________________
dense_5 (Dense)              (None, 2)                 2050      
=================================================================
Total params: 89,235,522
Trainable params: 69,211,138
Non-trainable params: 20,024,384
_________________________________________________________________

Any other info / logs

Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.


---------------------------------------------------------------------------
ConverterError                            Traceback (most recent call last)
<ipython-input-3-7f2e78be88e7> in <module>
      1 converter = tf.compat.v1.lite.TFLiteConverter.from_saved_model(os.path.join('fold_0', 'saved_model'))
      2 converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
----> 3 tflite_quant_model = converter.convert()
      4 open("converted_model.tflite", "wb").write(tflite_quant_model)

~/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/lite/python/lite.py in convert(self)
    898           input_tensors=self._input_tensors,
    899           output_tensors=self._output_tensors,
--> 900           **converter_kwargs)
    901     else:
    902       result = _toco_convert_graph_def(

~/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/lite/python/convert.py in toco_convert_impl(input_data, input_tensors, output_tensors, *args, **kwargs)
    402   data = toco_convert_protos(model_flags.SerializeToString(),
    403                              toco_flags.SerializeToString(),
--> 404                              input_data.SerializeToString())
    405   return data
    406 

~/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str)
    170       stderr = _try_convert_to_unicode(stderr)
    171       raise ConverterError(
--> 172           "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
    173   finally:
    174     # Must manually cleanup files.

ConverterError: TOCO failed. See console for info.
2019-05-29 05:12:55.314629: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: IdentityN
2019-05-29 05:12:55.329766: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 205 operators, 248 arrays (0 quantized)
2019-05-29 05:12:55.331674: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 205 operators, 248 arrays (0 quantized)
2019-05-29 05:12:55.406663: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 28 operators, 68 arrays (0 quantized)
2019-05-29 05:12:56.019719: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 2: 27 operators, 67 arrays (0 quantized)
2019-05-29 05:12:56.019942: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 3: 26 operators, 65 arrays (0 quantized)
2019-05-29 05:12:56.020127: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Group bidirectional sequence lstm/rnn: 26 operators, 65 arrays (0 quantized)
2019-05-29 05:12:56.020257: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 26 operators, 65 arrays (0 quantized)
2019-05-29 05:12:56.020521: I tensorflow/lite/toco/allocate_transient_arrays.cc:345] Total transient array allocated size: 33554432 bytes, theoretical optimal value: 33554432 bytes.
2019-05-29 05:12:56.020581: I tensorflow/lite/toco/toco_tooling.cc:434] Estimated count of arithmetic ops: 51.1266 billion (note that a multiply-add is counted as 2 ops).
2019-05-29 05:12:56.020770: E tensorflow/lite/toco/toco_tooling.cc:462] We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
 and pasting the following:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: CONV_2D, FULLY_CONNECTED, MAX_POOL_2D, SOFTMAX. Here is a list of operators for which you will need custom implementations: IdentityN.
Traceback (most recent call last):
  File "/home/marco/anaconda3/envs/tf2/bin/toco_from_protos", line 10, in <module>
    sys.exit(main())
  File "/home/marco/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 59, in main
    app.run(main=execute, argv=[sys.argv[0]] + unparsed)
  File "/home/marco/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/marco/anaconda3/envs/tf2/lib/python3.6/site-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/home/marco/anaconda3/envs/tf2/lib/python3.6/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "/home/marco/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 33, in execute
    output_str = tensorflow_wrap_toco.TocoConvert(model_str, toco_str, input_str)
Exception: We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
 and pasting the following:
@lukasfolle (Contributor) commented:

You might consider converting the IdentityN op with TensorFlow Lite's operator select (SELECT_TF_OPS).
You can specify ops that can't be expressed as TF Lite builtins like this:

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

Hope this helps.
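For later readers: in current TF 2.x releases the selection above is expressed through the converter's target_spec.supported_ops attribute rather than target_ops. A minimal runnable sketch, using a tiny stand-in model and from_keras_model (the thread used from_saved_model, which takes the same target_spec settings):

```python
import tensorflow as tf

# Tiny stand-in model; the thread's VGG19 converts through the same path.
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), tf.keras.layers.Dense(2)])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Allow ops without TFLite builtins to fall back to selected TensorFlow ops.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # standard TFLite kernels
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to the TF (Flex) runtime
]

tflite_model = converter.convert()
with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)
```

Note that a model converted with SELECT_TF_OPS needs a TFLite runtime that includes the Flex delegate (e.g. the tensorflow-lite-select-tf-ops dependency on Android).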

@achandraa commented:

@marcojulioarg: Please have a look at @lukasfolle's suggestion and let us know if you are able to proceed further. Thanks!

@achandraa added the "stat:awaiting response" (Status - Awaiting response from author) label on May 30, 2019
@marcodelmoral (Author) commented:

Hello, I was able to convert the model now. I haven't tested loading it or running inference yet, but it converted. I'll try the model next, thank you very much!

@achandraa commented:

Sure. Let us know if you are stuck. Thanks!

@tensorflowbutler removed the "stat:awaiting response" (Status - Awaiting response from author) label on May 31, 2019
@achandraa commented:

Closing the issue since it looks to be resolved. Please feel free to open another ticket if you are stuck. Thanks!


hanjunyi commented Aug 19, 2019

You can try this:
converter.allow_custom_ops = True

For me, it works.
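A minimal runnable sketch of that flag on the TF 2.x converter (the tiny model here is a stand-in; with allow_custom_ops the converter emits unsupported ops such as IdentityN as custom ops instead of failing, so the runtime must later supply kernels for them):

```python
import tensorflow as tf

# Stand-in model; the thread's VGG19 SavedModel would be converted the same way.
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), tf.keras.layers.Dense(2)])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Emit ops without TFLite builtins as custom ops instead of raising ConverterError.
converter.allow_custom_ops = True
tflite_model = converter.convert()
```

This silences the conversion error but does not implement the missing ops; see the caveat in the later comments.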

@miaout17 (Contributor) commented:

The original reported issue should be resolved by commit aca2430.
Could you try it again with the next TF nightly build? Thanks!


hadizand commented Nov 1, 2019

> You can try this:
> converter.allow_custom_ops = True
> For me, it works.

By setting the allow_custom_ops flag to True, the converter can generate the *.tflite file. But have you tried this *.tflite in an Android app to see whether it works, or does it only shift the issue one step ahead?

@gunjanddave commented:

@hadizand You are correct, it just shifts the issue one step ahead.
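To make that concrete: the converted file loads, but Interpreter.allocate_tensors() raises a RuntimeError at inference time if the model contains a custom op with no registered kernel. A small sketch of the inference-side check (the stand-in model here converts cleanly to builtins, so allocation succeeds; a model with a real unresolved custom op such as IdentityN would hit the except branch instead):

```python
import tensorflow as tf

# Stand-in model that converts cleanly to builtins (FULLY_CONNECTED).
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), tf.keras.layers.Dense(2)])
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.allow_custom_ops = True
tflite_model = converter.convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
try:
    # Raises RuntimeError if the model contains a custom op
    # for which this runtime has no registered kernel.
    interpreter.allocate_tensors()
    print("all ops resolved")
except RuntimeError as e:
    print("unresolved custom op at inference time:", e)
```

So allow_custom_ops is only useful if you also provide implementations of the custom ops in the runtime that loads the model.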
