new output after Conv layer #7

Closed
tomoporo opened this issue Jun 28, 2022 · 7 comments

tomoporo commented Jun 28, 2022

Is it possible to add a new "output layer", for instance an output after a convolution layer?
Sometimes we might want to check the feature maps from a convolution layer, e.g. for image recognition.

I'd like to get such an output. I have found ways to do this for PyTorch and Keras/TensorFlow.
I'd like to do the same thing with an ONNX model in ONNX Runtime, by cutting(?) and modifying the ONNX model.

I cannot find a tutorial for this. If such examples exist, please point me to them.
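
For reference, the kind of thing I mean in PyTorch is a forward hook on the layer of interest. A rough sketch (the model and the hooked layer are only placeholders):

import torch
import torchvision

# Grab an intermediate feature map with a forward hook.
# The model and the hooked layer are placeholders for illustration.
model = torchvision.models.resnet18(weights=None).eval()

features = {}

def save_feature(module, inputs, output):
    features["conv1"] = output.detach()

model.conv1.register_forward_hook(save_feature)

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))

print(features["conv1"].shape)  # feature map after the first conv layer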

ZhangGe6 (Owner) commented

If you want to compare the output feature of a Conv layer $L$ with another existing feature $F$, you can run the ONNX model using onnxruntime, and compare the output of layer $L$ with $F$ offline.

If I am not understanding what you mean correctly, please explain it in more detail for further discussion.
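
For example, something like the following (a rough sketch; the file name, tensor names, input shape and reference feature are placeholders, and it assumes the Conv output in question is already exposed as a model output):

import numpy as np
import onnxruntime as ort

# Run the model and compare one of its outputs with a reference feature F offline.
sess = ort.InferenceSession("model.onnx")
input_name = sess.get_inputs()[0].name

x = np.random.randn(1, 3, 224, 224).astype(np.float32)
(conv_out,) = sess.run(["conv_output"], {input_name: x})  # "conv_output" must be a graph output

F = np.load("reference_feature.npy")  # the existing feature to compare against
print(np.abs(conv_out - F).max())     # maximum absolute difference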

tomoporo commented Jun 29, 2022

Thanks for the comment. I have modified my first post. What I'd like to do is cut the model in the middle and connect a new output layer at the cut point.

e.g.
input - conv - relu - pooling - conv - relu - pooling - flatten - output
=> modify to
input - conv - relu - pooling - conv - new output

ZhangGe6 (Owner) commented Jul 1, 2022

Sorry for the late response. Adding new model input/output nodes is not supported in the current version. However, it is indeed a necessary feature and has been added to the to-do list; it will come soon. For now, you may have to add the new model output node using the ONNX Python API, as sketched below.

Thanks for the feedback!
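
For reference, adding an intermediate tensor as a new output with the ONNX Python API looks roughly like this (a sketch; the file name and tensor name are placeholders, and you can find the real tensor name with Netron or by printing model.graph.node):

import onnx

# Expose an intermediate (e.g. Conv output) tensor as an extra graph output.
model = onnx.load("model.onnx")
tensor_name = "conv2_output"  # placeholder; use the actual tensor name from your graph

# Shape inference produces ValueInfo (type/shape) for intermediate tensors.
inferred = onnx.shape_inference.infer_shapes(model)
value_info = next((vi for vi in inferred.graph.value_info if vi.name == tensor_name), None)
if value_info is None:
    raise RuntimeError(f"No ValueInfo found for tensor '{tensor_name}'")

model.graph.output.append(value_info)
onnx.checker.check_model(model)
onnx.save(model, "model_with_new_output.onnx")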

ZhangGe6 (Owner) commented

Adding new model outputs is supported now. Please see the README for more details.

coolmian commented Sep 30, 2022

> Adding new model outputs is supported now. Please see the README for more details.

The new model outputs do not have a shape and data type defined, which differs from the original model's output format, so the modified model cannot be used like a normal model.

In Python, you can use the following code to see the difference between the modified model and the original model:

import onnx

# Load the modified model and print its graph outputs,
# to compare with the outputs of the original model.
model = onnx.load('efficientnet-lite4-11-int8.onnx')
graph = model.graph
output = graph.output
print(output)

ZhangGe6 (Owner) commented Oct 2, 2022

@coolmian Hi, the shape and data type of the added model outputs are inferred automatically and do not need to be defined manually.

I am not sure whether I have understood your point (especially what you mean by "the original model output format"), so feel free to continue the discussion if problems remain.
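
You can check what was inferred for each output like this (a small sketch; the file name is a placeholder):

import onnx

# Print the name, element type and shape recorded for every graph output,
# including any newly added ones.
model = onnx.load("modified_model.onnx")
for out in model.graph.output:
    tt = out.type.tensor_type
    dims = [d.dim_value if d.HasField("dim_value") else d.dim_param for d in tt.shape.dim]
    print(out.name, onnx.TensorProto.DataType.Name(tt.elem_type), dims)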

coolmian commented Nov 1, 2022

> @coolmian Hi, the shape and data type of the added model outputs are inferred automatically and do not need to be defined manually.
>
> I am not sure whether I have understood your point (especially what you mean by "the original model output format"), so feel free to continue the discussion if problems remain.

Thank you for your attention.

But can you provide sample code for using the model after adding the output node? If I do not define the output format manually, the following code reports an error:

import onnxruntime as rt

# Run the modified model and request the newly added output by name.
sess = rt.InferenceSession("efficientnet-lite4-11-int8.onnx")
results = sess.run(['efficientnet-lite4/model/head/Squeeze:0_quantized'], {"images:0": img_batch})

onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Output Name: efficientnet-lite4/model/head/Squeeze:0_quantized
