How can I use the ONNX library if my model has more than one input? Please advise what the command for that would be. #2711
Comments
What would you like to do with your model? If you're looking to run your model, you will want to use one of the inference frameworks which support ONNX. For example, with ONNX Runtime:

```python
import onnxruntime as ort
import numpy as np

ort_session = ort.InferenceSession('path/to/your/model.onnx')
outputs = ort_session.run(None, {
    "input_1": np.random.randn(10, 3, 224, 224).astype(np.float32),
    "input_2": np.random.randn(10, 3, 100).astype(np.float32),
})
print(outputs[0])
```
I want to convert a PyTorch model to an ONNX model first. But my PyTorch model has multiple inputs. How should I proceed?
PyTorch supports exporting to ONNX via their TorchScript or tracing process. You can read their documentation here. To export a model with multiple inputs, you want to take a look at the documentation for the torch.onnx.export function.
So you need to provide the inputs in a tuple, similar to how you would pass the inputs to the model when running it in PyTorch. If we pretend we had a model with multiple inputs, matching the ONNX model I gave example code for above, it would look like this:

```python
model = TwoInputModel()
dummy_input_1 = torch.randn(10, 3, 224, 224)
dummy_input_2 = torch.randn(10, 3, 100)

# This is how we would call the PyTorch model
example_output = model(dummy_input_1, dummy_input_2)

# This is how to export it with multiple inputs
torch.onnx.export(model,
                  args=(dummy_input_1, dummy_input_2),
                  f="alexnet.onnx",
                  input_names=["input_1", "input_2"],
                  output_names=["output1"])
```

You can see another example here. There is some indirection in that code, so it might look more complicated than it is, but it might help you understand dynamic axes etc.
Thank you for your help. I did the same as mentioned in the code below:

Code:

```python
# Load the trained model from file
model_path = "/home/Models/rcnn_local/models_stereo/stereo_rcnn_12_6477.pth"
stereoRCNN = resnet(('background', 'Car'), 101, pretrained=True)

# Export the trained model to ONNX
dummy_input_1 = torch.randn(1, 3, 600, 1986).cuda()
torch.onnx.export(stereoRCNN.cuda(),
```

Error:

```
File "/home/anaconda3/envs/env_stereo/lib/python3.6/site-packages/torch/nn/modules/module.py", line 492, in call
```

Please advise.
I have replied on #2679; I think this issue can be closed now @prasanthpul 🙂 |
A follow-up question: what about a model with multiple outputs?
Input using TensorFlow:

```python
# path to your model
session = onnxruntime.InferenceSession(model, None)

# Preprocess the image
np.array(input22).flatten()

prediction = session.run([], {input1: input11, input2: input22, input3: input33})
```
Only vaguely related, but in case someone with memory issues during inference stumbles upon this thread: I encountered strange behaviour (a bug?) with ONNX models using multiple inputs (i.e. `outputs = session.run(None, {inp_name1: x1, inp_name2: x2, ...})`). When running inference I had GPU OOM errors at way, way smaller batch sizes than expected. This was fixed by changing my model so that it only had one (larger) array as input (from which I then extracted `x1`, `x2`, ... inside the model). I'm using onnx 1.12.0, tf2onnx 1.14.0 and opset 17.
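For anyone wanting to try that single-packed-input workaround, here is a minimal sketch in plain PyTorch (the model, names, and sizes are made up for illustration): concatenate the inputs into one tensor before calling the model, and split them back apart inside `forward()`, so the exported ONNX graph sees only one input.

```python
import torch
import torch.nn as nn

class PackedInputModel(nn.Module):
    """Takes one packed tensor and recovers the two logical inputs inside."""
    def __init__(self, split_size):
        super().__init__()
        self.split_size = split_size
        self.fc = nn.Linear(split_size, 2)

    def forward(self, packed):
        # Recover the original inputs from the single packed tensor
        x1 = packed[:, :self.split_size]
        x2 = packed[:, self.split_size:]
        return self.fc(x1 + x2)

model = PackedInputModel(4)
x1 = torch.randn(8, 4)
x2 = torch.randn(8, 4)
packed = torch.cat([x1, x2], dim=1)  # one (8, 8) input instead of two (8, 4)s
out = model(packed)
print(out.shape)  # torch.Size([8, 2])
```

A model structured this way can then be exported with a single entry in `input_names`, as in the single-input examples earlier in the thread.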