
No module named '_tensorflow_wrap_toco' #22617

Closed
B-C-WANG opened this issue Sep 29, 2018 · 13 comments
Assignees
Labels
comp:lite TF Lite related issues

Comments

@B-C-WANG

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
    yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
    Windows 10
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
  • TensorFlow installed from (source or binary):
    binary (through pip)
  • TensorFlow version (use command below):
    1.11.0
  • Python version:
    3.6.4
  • Bazel version (if compiling from source):
  • GCC/Compiler version (if compiling from source):
  • CUDA/cuDNN version:
    None (CPU only)
  • GPU model and memory:
    None
  • Exact command to reproduce:

Describe the problem

I saved my Keras model to a file and tried to use "lite.TFLiteConverter.from_keras_model_file(...)" followed by "tflite_model = converter.convert()" to get a Lite model, but got the error "No module named '_tensorflow_wrap_toco'". There is only a "tensorflow_wrap_toco.py" in "\tensorflow\contrib\lite\toco\python", and no compiled "_tensorflow_wrap_toco" module in that directory.
I have updated my TensorFlow with "pip install tensorflow --upgrade".

Source code / logs

model = get_testing_model(input_shape=(160, 140))
model.load_weights(keras_weights_file)
model.save("kerasModel.h5")
converter = lite.TFLiteConverter.from_keras_model_file("kerasModel.h5")
tflite_model = converter.convert()  # bug happens here
open("converted_model.tflite", "wb").write(tflite_model)

Logs:
FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
2018-09-29 21:03:55.936260: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
WARNING:tensorflow:No training configuration found in save file: the model was not compiled. Compile it manually.
Traceback (most recent call last):
  File "C:/Users/wang/Desktop/OpenPoseApp/camera-openpose-keras/demo_camera.py", line 325, in <module>
    save_tf_lite_model()
  File "C:/Users/wang/Desktop/OpenPoseApp/camera-openpose-keras/demo_camera.py", line 311, in save_tf_lite_model
    tflite_model = converter.convert()
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\contrib\lite\python\lite.py", line 453, in convert
    **converter_kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\contrib\lite\python\convert.py", line 342, in toco_convert_impl
    input_data.SerializeToString())
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\contrib\lite\python\convert.py", line 135, in toco_convert_protos
    (stdout, stderr))
RuntimeError: TOCO failed see console for info.

Subprocess output (decoded from the raw bytes):
c:\programdata\anaconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
  from ._conv import register_converters as _register_converters
Traceback (most recent call last):
  File "c:\programdata\anaconda3\lib\site-packages\tensorflow\contrib\lite\toco\python\tensorflow_wrap_toco.py", line 18, in swig_import_helper
    fp, pathname, description = imp.find_module('_tensorflow_wrap_toco', [dirname(__file__)])
  File "c:\programdata\anaconda3\lib\imp.py", line 297, in find_module
    raise ImportError(_ERR_MSG.format(name), name=name)
ImportError: No module named '_tensorflow_wrap_toco'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:\programdata\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\programdata\anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\ProgramData\Anaconda3\Scripts\toco_from_protos.exe\__main__.py", line 5, in <module>
  File "c:\programdata\anaconda3\lib\site-packages\tensorflow\contrib\lite\toco\python\toco_from_protos.py", line 22, in <module>
    from tensorflow.contrib.lite.toco.python import tensorflow_wrap_toco
  File "c:\programdata\anaconda3\lib\site-packages\tensorflow\contrib\lite\toco\python\tensorflow_wrap_toco.py", line 28, in <module>
    _tensorflow_wrap_toco = swig_import_helper()
  File "c:\programdata\anaconda3\lib\site-packages\tensorflow\contrib\lite\toco\python\tensorflow_wrap_toco.py", line 20, in swig_import_helper
    import _tensorflow_wrap_toco
ModuleNotFoundError: No module named '_tensorflow_wrap_toco'
None
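The nested, second traceback shows up inside the RuntimeError because the converter in this TF version does not call TOCO in-process: it launches the toco_from_protos executable as a child process and, on failure, raises with the child's captured stdout/stderr. A minimal stdlib sketch of that pattern (not TensorFlow's actual code; run_tool is a hypothetical helper):

```python
import subprocess
import sys

def run_tool(code):
    """Run a Python snippet in a child process, capturing its output.

    Mimics how a wrapper surfaces a child's traceback: on failure, the
    raw captured bytes end up inside the RuntimeError message, which is
    why the log above shows a b'...' blob with escaped line breaks.
    """
    proc = subprocess.run(
        [sys.executable, "-c", code],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    if proc.returncode != 0:
        raise RuntimeError("tool failed: %s" % proc.stderr)
    return proc.stdout

# The child fails exactly like toco_from_protos.exe did:
try:
    run_tool("import _tensorflow_wrap_toco")
except RuntimeError as err:
    print("ModuleNotFoundError" in str(err))  # → True
```

This is why the real fix has to happen in the child's environment (the installed package), not in the script that calls the converter.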

@ymodak ymodak added the comp:lite TF Lite related issues label Oct 2, 2018
@ymodak
Contributor

ymodak commented Oct 2, 2018

Please refer to this for exporting a tf.keras file into TensorFlow Lite.

@ymodak ymodak added the stat:awaiting response Status - Awaiting response from author label Oct 2, 2018
@B-C-WANG
Author

B-C-WANG commented Oct 3, 2018

@ymodak I've run the code from the "tf.keras file into TensorFlow Lite" example, but get the same error.

@ymodak
Contributor

ymodak commented Oct 3, 2018

Thanks for the information.
Can you please try running your model with the tf-nightly build (pip install tf-nightly) instead of 1.11 and see if you get the same results?

@B-C-WANG
Author

B-C-WANG commented Oct 3, 2018

@ymodak I've tried tf-nightly and tf-nightly-gpu, but it didn't work; I got the same error.

@ymodak ymodak assigned gargn and unassigned ymodak Oct 3, 2018
@gargn

gargn commented Oct 3, 2018

I haven't been able to replicate the results using virtualenv and pip on Python 3.6.3. I ran the following steps, and the "tf.keras file into TensorFlow Lite" example ran without error:

virtualenv -p python3 venv-tf-nightly
source venv-tf-nightly/bin/activate
pip install tf-nightly

When I run pip freeze it says I'm on tf-nightly==1.12.0.dev20180929.

Can you try running your code in a virtual environment with a fresh installation of TensorFlow (preferably through pip)? It seems like there might be some issues with your installation. Otherwise, can you provide reproducible steps outlining how you installed your version of TensorFlow?

@ymodak
Contributor

ymodak commented Oct 3, 2018

+1. I was able to build successfully using tf-nightly too.

@ymodak ymodak self-assigned this Oct 3, 2018
@B-C-WANG
Author

B-C-WANG commented Oct 4, 2018

I created a completely new Python env from PyCharm and ran "pip install tf-nightly", then got the same error.
So I tried tf.lite on Ubuntu 18.04, and it succeeded.
When I open ".../site-packages/tensorflow/contrib/lite/toco/python" on Ubuntu, I find a "_tensorflow_wrap_toco.so", but that file doesn't exist on my Windows 10 machine.
I don't know whether I failed to compile that file, or whether it's simply not supported on Windows.
In any case, I worked around the bug by using Linux.
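This observation can be checked programmatically: the SWIG shim tensorflow_wrap_toco.py needs a compiled native counterpart (_tensorflow_wrap_toco.so on Linux, a .pyd on Windows) next to it. A small stdlib sketch, assuming nothing about TensorFlow itself, that tests whether a module can be located without importing it:

```python
import importlib.util

def module_available(name):
    """Return True if `name` can be located on sys.path, without importing it."""
    try:
        return importlib.util.find_spec(name) is not None
    except (ImportError, ValueError):
        return False

# On the broken Windows install, the SWIG-generated .py exists but its
# native extension does not, so the second check would come back False:
print(module_available("json"))                   # stdlib module → True
print(module_available("_tensorflow_wrap_toco"))  # missing native ext → False
```

Running this in the failing environment confirms whether the problem is the missing binary rather than anything in the user's conversion script.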

@ywang4

ywang4 commented Oct 4, 2018

Same issue on Windows 10.

@tensorflowbutler tensorflowbutler removed the stat:awaiting response Status - Awaiting response from author label Oct 4, 2018
@Noltibus

I also have the same issue on Windows 10. I set up a new virtualenv and installed the TensorFlow nightly build with pip install tf-nightly. The code I ran to convert my model is:

import tensorflow as tf

graph_def_file = "graph_optimized.pb"
input_arrays = ["Placeholder"]
output_arrays = ["output"]

converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file, input_arrays, output_arrays, input_shapes={"Placeholder" : [1, 227, 227, 3]})
tflite_model = converter.convert()
open("save_path/converted_model.tflite", "wb").write(tflite_model)

I get the same error with:
import _tensorflow_wrap_toco
ModuleNotFoundError: No module named '_tensorflow_wrap_toco'

@ymodak
Copy link
Contributor

ymodak commented Oct 11, 2018

@Noltibus Thanks for opening a new issue and expressing your problem. I will close this issue so that we can focus on the newly created one #22897 .

@ymodak ymodak closed this as completed Oct 11, 2018
@cjr0106

cjr0106 commented Oct 12, 2018

I have the same problem when I run on Ubuntu:

Traceback (most recent call last):
  File "quant.py", line 17, in <module>
    mobilenet_tflite_file.write_bytes(converter.convert())
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/lite/python/lite.py", line 464, in convert
    **converter_kwargs)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/lite/python/convert.py", line 317, in toco_convert_graph_def
    input_data.SerializeToString())
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/lite/python/convert.py", line 135, in toco_convert_protos
    (stdout, stderr))
RuntimeError: TOCO failed see console for info.

TOCO log (decoded, repeated warnings abridged):
2018-10-12 13:08:32.732548: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1109] Converting unsupported operation: TFLite_Detection_PostProcess
2018-10-12 13:08:32.738803: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1182] Unable to determine output type for op: TFLite_Detection_PostProcess
2018-10-12 13:08:32.783094: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 900 operators, 1352 arrays (0 quantized)
2018-10-12 13:08:32.844203: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 900 operators, 1352 arrays (0 quantized)
2018-10-12 13:08:33.944841: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 111 operators, 220 arrays (0 quantized)
2018-10-12 13:08:33.948440: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 111 operators, 220 arrays (0 quantized)
2018-10-12 13:08:33.953645: I tensorflow/contrib/lite/toco/allocate_transient_arrays.cc:345] Total transient array allocated size: 11520000 bytes, theoretical optimal value: 11520000 bytes.
2018-10-12 13:08:33.954096: I tensorflow/contrib/lite/toco/toco_tooling.cc:397] Estimated count of arithmetic ops: 2.49483 billion (note that a multiply-add is counted as 2 ops).
2018-10-12 13:08:33.954530: W tensorflow/contrib/lite/toco/tflite/export.cc:423] FAKE_QUANT operation {FakeQuant operator with output FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/act_quant/FakeQuantWithMinMaxVars} was not converted. If running quantized make sure you are passing --inference_type=QUANTIZED_UINT8 and values for --std_values and --mean_values.
[identical "FAKE_QUANT operation ... was not converted" warnings repeated for each MobilenetV1 Conv2d_1 through Conv2d_13 depthwise/pointwise layer, the Conv2d_13_pointwise_* extra feature layers, and each BoxPredictor_0 through BoxPredictor_5 BoxEncodingPredictor/ClassPredictor head]
2018-10-12 13:08:33.954950: F tensorflow/contrib/lite/toco/tflite/export.cc:460] Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.contrib.lite.TFLiteConverter(). Here is a list of operators for which you will need custom implementations: TFLite_Detection_PostProcess.
Aborted (core dumped)
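Note that this failure is different from the original issue: here TOCO itself runs, but the graph contains TFLite_Detection_PostProcess, which has no standard TFLite kernel. Per the error text itself, the conversion can be allowed to proceed by enabling custom ops on the converter. A sketch only, assuming `converter` is the tf.contrib.lite.TFLiteConverter instance built earlier in quant.py:

```python
# `converter` is a tf.contrib.lite.TFLiteConverter constructed elsewhere.
# TFLite_Detection_PostProcess has no built-in TFLite implementation, so
# tell TOCO to emit it as a custom op instead of aborting; the runtime
# must then provide a custom kernel for it.
converter.allow_custom_ops = True
tflite_model = converter.convert()
```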

@gargn

gargn commented Oct 15, 2018

@cjr0106: Please file a new issue with reproducible instructions.

@homandiy

You forgot to install tf-nightly. I am using Windows 10. In CMD, please run:
pip3 install tf-nightly

Labels
comp:lite TF Lite related issues
Projects
None yet
Development

No branches or pull requests

8 participants