TFLiteConverter.from_saved_model - batchNorm is not supported? #23627
Comments
I have the same error when I use
Any ideas or suggestions would be greatly appreciated. Thanks.
@milinddeore Are you trying to convert a graph that performs model training? We can typically only convert graphs that perform eval. Can you post your code that builds the graph?
@srjoglekar246 This is the FaceNet model (the source code is here); I modified train_softmax.py and tried converting to TFLite. Here is the other thread where a similar issue is seen.
I was able to solve it; please check #19431.
@milinddeore I think this makes sense. Sorry I didn't get to your issue in time, but it seems the error occurred because you were earlier trying to convert the part of the graph that performs the training. In this attempt, you essentially add an input/output interface to the saved graph and use just that with the converter. Am I correct? Thanks for resolving this :-). Can we close the bug?
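For reference, a minimal sketch of that "input/output interface" approach (the paths, tensor names, and the 1x160x160x3 shape are assumptions based on a typical FaceNet export, not code from #19431; on TF <= 1.12 the converter lives under `tf.contrib.lite`):

```python
import tensorflow as tf

# Sketch only: convert just the inference subgraph between explicitly named
# input and output tensors, so training-only ops (e.g. batch-norm
# moving-average updates) are never visited by the converter.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    'facenet_frozen.pb',                      # assumed frozen inference graph
    input_arrays=['input'],                   # assumed input tensor name
    output_arrays=['embeddings'],             # assumed output tensor name
    input_shapes={'input': [1, 160, 160, 3]}  # TFLite needs a fixed batch size
)
tflite_model = converter.convert()
with open('facenet.lite', 'wb') as f:
    f.write(tflite_model)
```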
@srjoglekar246 That's correct! I have a question here: is there a document where we can see all the supported ops on mobile/TFLite?
Thanks for confirming! Closing this one...
@milinddeore The Compatibility Guide should be a good resource for most ops. However, there is a slight chance it might be outdated; in that case, you can look into our kernels directory.
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub.
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No, I have tried the TensorFlow example code snippet.
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Google Colab (Linux bedbba52137a 4.14.65+ #1 SMP Sun Sep 9 02:18:33 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux)
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: None
TensorFlow installed from (source or binary): pip on Google Colab, using the following command
Python version: Python 3.6.6
Bazel version (if compiling from source): None
GCC/Compiler version (if compiling from source): None
CUDA/cuDNN version: output of `nvcc --version`:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Tue_Jun_12_23:07:04_CDT_2018
Cuda compilation tools, release 9.2, V9.2.148
GPU model and memory:
You can collect some of this information using our environment capture script
You can also obtain the TensorFlow version with
python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
Describe the current behavior
I have a model here, which is exported as a SavedModel using the following code:
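A minimal sketch of such an export (the checkpoint files and tensor names are hypothetical, not from the original snippet):

```python
import tensorflow as tf

# Sketch only: restore a trained checkpoint and re-export just the
# inference interface as a SavedModel.
with tf.Session(graph=tf.Graph()) as sess:
    saver = tf.train.import_meta_graph('model.meta')  # hypothetical checkpoint
    saver.restore(sess, 'model.ckpt')
    inp = sess.graph.get_tensor_by_name('input:0')        # assumed name
    out = sess.graph.get_tensor_by_name('embeddings:0')   # assumed name
    tf.saved_model.simple_save(sess, './saved_model',
                               inputs={'input': inp},
                               outputs={'embeddings': out})
```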
When I convert this SavedModel to TFLite it gives me an error; the code snippet is as follows:
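A minimal sketch of that conversion step (assuming the './saved_model' directory from the sketch above; on TF <= 1.12 the converter lives under `tf.contrib.lite`):

```python
import tensorflow as tf

# Sketch of the conversion; './saved_model' is the hypothetical export dir.
converter = tf.contrib.lite.TFLiteConverter.from_saved_model('./saved_model')
tflite_model = converter.convert()
with open('model.lite', 'wb') as f:
    f.write(tflite_model)
```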
Following are the error logs:
Update on 10-Nov-2018
I have to give input_shapes as a dictionary, in the following way:
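A sketch of what that dictionary looks like (the tensor name 'input' and the FaceNet-style 1x160x160x3 shape are assumptions):

```python
# input_shapes maps each input tensor name to a fully defined shape.
converter = tf.contrib.lite.TFLiteConverter.from_saved_model(
    './saved_model',
    input_arrays=['input'],
    input_shapes={'input': [1, 160, 160, 3]})
```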
This fixed the earlier error, but now I see a different error; the logs are below:
Describe the expected behavior
It should create a *.lite file instead.
Code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem.
I have provided the SavedModel and the code snippet above to reproduce it.
Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.