Missing intermediate input node in TF Lite convert #39276
Comments
Hi @amahendrakar, thanks for responding. Here is a Jupyter notebook gist that first converts the original model (…). Both …
Further in my investigation, I have tried to find at what point we lose the … node. To that end, I got the list of nodes in the graph def during export and checked whether the node is present. In the … If the node is not present, then why is the exporter trying to find it? However, given … For completeness, I have shown the …
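The node-presence check described above can be sketched as a small helper. This is illustrative only: `find_node` and the node names are made up for the example, and the helper assumes the graph is a `tf.compat.v1.GraphDef`-like object exposing a `.node` iterable of named nodes.

```python
from types import SimpleNamespace

def find_node(graph_def, node_name):
    """Return the first node in graph_def.node whose name matches, else None.

    Works on any object exposing a `.node` iterable of objects with `.name`
    (e.g. a tf.compat.v1.GraphDef loaded from a frozen graph).
    """
    for node in graph_def.node:
        if node.name == node_name:
            return node
    return None

# Stand-in GraphDef for illustration; with TensorFlow you would pass the
# real graph_def (e.g. sess.graph_def, or one parsed from a .pb file).
fake_graph = SimpleNamespace(node=[
    SimpleNamespace(name="conv1/weights"),
    SimpleNamespace(name="bn1/FusedBatchNormV3"),
])
```

With a real `GraphDef`, the same loop answers the question above: if `find_node` returns `None` for the name the exporter complains about, the node genuinely is absent from the exported graph.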
Was able to reproduce the issue with TF v2.2 and TF-nightly. Please find the attached gist. Thanks!
@jvishnuvardhan thanks for looking at this. The batch norm layer which fails takes as input the missing … node. The … So even in inference mode, I believe this batch norm layer is needed. From my check of the working …
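For reference, this is why the layer still needs its inputs at inference time: in inference mode batch normalisation applies the stored moving statistics rather than batch statistics, so the moving mean and variance tensors cannot simply be dropped. The standard formulation is:

```latex
y = \gamma \cdot \frac{x - \mu_{\text{moving}}}{\sqrt{\sigma^2_{\text{moving}} + \epsilon}} + \beta
```

where $\mu_{\text{moving}}$ and $\sigma^2_{\text{moving}}$ are the statistics accumulated during training, and $\gamma$, $\beta$ are the learned scale and shift.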
@jvishnuvardhan so batch normalisation being removed can be discarded as a cause of the issue, since we have support for … In that case, it seems that the output node … I've not been able to query these SWIG objects to figure out which of them contains the node.
I've examined the model in netron, and it does seem strange. The alt model carried with it additional output tensors that were used in the training process. Normally these are not used in inference, but it seems that keeping these output tensors interfered with the export process; removing them manually allowed the export to work. I'll see if I can make a minimum working example to reproduce this issue.
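The manual pruning described above can be sketched as filtering the model's output tensor names to drop training-only tensors before export. This is a sketch under assumptions: the marker substrings below are hypothetical, and the real names to filter are whatever netron shows as training-only outputs for the model at hand.

```python
# Hypothetical substrings that mark training-only tensors; adjust these to
# the actual tensor names observed in netron for the specific model.
TRAINING_ONLY_MARKERS = ("moving_mean", "moving_variance", "Momentum", "save")

def prune_training_outputs(output_names):
    """Keep only output tensor names that look inference-relevant."""
    return [
        name for name in output_names
        if not any(marker in name for marker in TRAINING_ONLY_MARKERS)
    ]

# Example output list (invented names for illustration).
outputs = [
    "logits/BiasAdd",
    "bn1/moving_mean",
    "bn1/moving_variance",
]
inference_outputs = prune_training_outputs(outputs)
```

The pruned list would then be passed as the output arrays when exporting, so the converter never tries to trace the training-only tensors.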
Marking this as resolved due to inactivity. @Wheest, feel free to re-open this issue if it is still blocking you.
System information
Command used to run the converter or code if you’re using the Python API
The output from the converter invocation
Also, please include a link to the saved model or GraphDef
Saved Model GDRIVE link
Failure details
In the graph, the offending nodes are batch normalisation operations that cannot be removed, since they follow an Add operation. This part of the graph is:
This issue might suggest that Batch Norm is not supported.
However, a very similar model I'm using features Add layers followed by BatchNorm, and it exports successfully.
I'm trying to figure out the source of this issue. Is there anything I should be looking at that might help me pin down the cause?
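To help narrow things down, here is a minimal sketch of the working pattern, an Add feeding BatchNormalization, that converts cleanly and could serve as a baseline for a reproduction. It assumes TF 2.x; the shapes and layer sizes are arbitrary and not taken from the model in this issue.

```python
import tensorflow as tf  # assumes TensorFlow 2.x

# Minimal model with an Add op feeding BatchNormalization, mirroring the
# pattern described above (hypothetical shapes and filter counts).
inp = tf.keras.Input(shape=(8, 8, 4))
conv = tf.keras.layers.Conv2D(4, 3, padding="same")(inp)
added = tf.keras.layers.Add()([inp, conv])
out = tf.keras.layers.BatchNormalization()(added)
model = tf.keras.Model(inp, out)

# Convert to TF Lite; in the working case this succeeds and the batch norm
# is handled (typically folded) by the converter.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()
```

If this baseline converts but the failing model does not, the difference most likely lies in the extra training-time output tensors rather than in the Add + BatchNorm pattern itself.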