Respect Keras layer names for output operations in Concrete Function #60289
Comments
@DLumi:
I really don't see how this is a Keras issue, since it seems like a manifestation of the default behavior for calling

Hi @DLumi, since the issue is already on the Keras repo, can we close it here so that it can be better tracked in a single repo? Thanks!

Honestly, I'd keep both issues open in case some tweaks are required from the TF team. Unless the Keras team explicitly says they will handle this themselves, that is.

@SuryanarayanaY as expected, I got bounced back to you guys.
Issue Type
Feature Request
Have you reproduced the bug with TF nightly?
Yes
Source
binary
Tensorflow Version
2.12.0
Custom Code
Yes
OS Platform and Distribution
No response
Mobile device
No response
Python version
3.8.10
Bazel version
No response
GCC/Compiler version
No response
CUDA/cuDNN version
No response
GPU model and memory
No response
Current Behaviour?
When converting a Keras model to a concrete function, you can preserve the input name by creating a named TensorSpec, but the outputs are always created for you by slapping tf.identity on top of whatever was there, even if it was a custom-named operation. Since many converters rely on concrete functions to build their own representation (TFLite, ONNX, CoreML, etc.), this behavior messes up the output operation names, often making them inconsistent with each other.

There's currently no workaround for that. You can access previous graph nodes by referring to a node named like {model_name}/{output_layer_name} when running inference on the frozen graph itself, but that doesn't help you in any way to convert the model.
So I'd be happy to see one of these things as a solution:
Standalone code to reproduce the issue
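A minimal sketch of the behavior described above. The model, layer names, and shapes are illustrative, not from the original report: a named TensorSpec preserves the input name, but the concrete function's outputs are wrapped in tf.identity, so the custom output layer name ("my_output" here) does not survive.

```python
import tensorflow as tf

# Hypothetical minimal model with explicitly named input and output layers.
inp = tf.keras.Input(shape=(4,), name="my_input")
out = tf.keras.layers.Dense(2, name="my_output")(inp)
model = tf.keras.Model(inp, out)

# Passing a named TensorSpec preserves the input name in the traced graph.
cf = tf.function(model).get_concrete_function(
    tf.TensorSpec([None, 4], tf.float32, name="my_input")
)

# The input placeholder keeps its name ("my_input:0"), skipping captured
# variable handles (resource tensors).
input_names = [t.name for t in cf.inputs if t.dtype != tf.resource]
print(input_names)

# The outputs, however, are auto-generated tf.identity ops ("Identity:0"),
# and the "my_output" layer name is lost.
output_names = [t.name for t in cf.outputs]
print(output_names)
```

Converters that walk the concrete function's graph (TFLite, ONNX, CoreML exporters) pick up these generic Identity names, which is why the resulting models end up with inconsistent output names.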
Relevant log output