Cannot export saved ScriptModule models to ONNX format #14869
Comments
I'm seeing a very similar error message from a different place.
To reproduce on your version, I needed to modify your script slightly.
Environment: tested both on … and on …
Bump. Can someone please take a look? The error still occurs in the latest version of PyTorch.
@singhblom Thanks for sharing. Yes, I also found an error thrown in
Tagging reviewers of PR: @suo @houseroad @dzhulgakov
I think this is because we erase shape information when saving a script module. However, we should repopulate it when ONNX export invokes tracing again. So it's a legit bug. cc @jamesr66a
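The shape erasure described above can be observed directly by printing the TorchScript graph before and after a save/load round-trip. A minimal sketch, using a hypothetical `TinyNet` as a stand-in for the models in this thread:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):  # hypothetical stand-in model
    def forward(self, x):
        return torch.relu(x)

x = torch.rand(1, 3)
traced = torch.jit.trace(TinyNet(), x)
print(traced.graph)   # the freshly traced graph records concrete tensor types

torch.jit.save(traced, 'tiny.pt')
loaded = torch.jit.load('tiny.pt')
print(loaded.graph)   # after loading, inputs are typed as plain Tensor (shapes erased)
```

Comparing the two printed graphs shows what the ONNX exporter loses when it is handed a loaded module instead of an in-memory one.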
Can I please get an ETA? @dzhulgakov @jamesr66a
@dzhulgakov: any chance of this getting fixed?
This issue is already fixed in the latest commit 30bc19d.
No longer an assert, but still errors on recent master:
Seems to work on PyTorch 1.9.0. Note that example_outputs is produced by actually invoking the model, so it's guaranteed to be a valid example.

```python
import torch
print(torch.__version__)
import torchvision

m = torchvision.models.resnet50()
x = torch.rand((1, 3, 224, 224))
traced_m = torch.jit.trace(m, x)
f = 'model.pt'
torch.jit.save(traced_m, f)
loaded_m = torch.jit.load(f)
torch.onnx._export(loaded_m, x, 'model.onnx', example_outputs=loaded_m(x))
```

Produces output:
🐛 Bug
I am unable to load a PyTorch model as a ScriptModule from a file and then export to ONNX. However, I am able to export a ScriptModule in memory directly to ONNX.
To Reproduce
Steps to reproduce the behavior:
The script yields:

AssertionError: example_outputs must be provided when exporting a ScriptModule

Replacing the last line gives me the following error:
Expected behavior
Successful export to ONNX format.
Environment
PyTorch version: 1.0.0a0+b5db6ac
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 16.04.5 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
CMake version: version 3.12.2
Python version: 3.7
Is CUDA available: No
CUDA runtime version: 9.0.176
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/local/cuda-8.0/lib64/libcudnn.so.6.0.21
/usr/local/cuda-8.0/lib64/libcudnn_static.a
/usr/local/cuda-9.0/lib64/libcudnn.so.7.3.1
/usr/local/cuda-9.0/lib64/libcudnn_static.a
/usr/local/cuda-9.1/lib64/libcudnn.so.7.0.5
/usr/local/cuda-9.1/lib64/libcudnn_static.a
/usr/local/cuda-9.2/lib64/libcudnn.so.7.3.1
/usr/local/cuda-9.2/lib64/libcudnn_static.a
Versions of relevant libraries:
[pip] numpy (1.15.4)
[pip] torch (1.0.0a0+b5db6ac)
[pip] torchvision (0.2.1)
[conda] mkl 2019.1 144
[conda] mkl-include 2019.1 144
[conda] torch 1.0.0a0+b5db6ac
[conda] torchvision 0.2.1
Additional context