
ONNX export hangs #32

Closed
genolve opened this issue Jul 25, 2022 · 1 comment


genolve commented Jul 25, 2022

I’m trying to export the model to ONNX and the export simply hangs. Before diving into debugging, I wanted to check if anyone has had success with an ONNX export.

I have torch version: 1.13.0.dev20220616

Export command:

import torch
import torch.onnx

device = torch.device('cpu')
torch.onnx.enable_log()
torch.set_num_threads(1)

# The tuple should contain model inputs such that model(*args) is a valid
# invocation of the model. Note the trailing comma: (x,) is a tuple, (x) is not.
tinput = (torch.tensor(firstbatch['input_ids']).to(device),)

model.cpu()
model.eval()
torch.onnx.export(model,                      # model being run
                  tinput,                     # model input (or a tuple for multiple inputs)
                  dataset + "_routing.onnx",  # where to save the model (a file or file-like object)
                  export_params=True,         # store the trained parameter weights inside the model file
                  opset_version=16,           # the ONNX opset version to export the model to
                  do_constant_folding=True,   # whether to execute constant folding for optimization
                  input_names=['input_ids'],  # the model's input names
                  output_names=['output'],    # the model's output names
                  verbose=True
                  # ,operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK
                  )

genolve commented Dec 12, 2022

I ended up switching to x-transformer which exports to ONNX with minimal modifications. It also hangs on the first export attempt; just interrupt the kernel and the second export works.

@genolve genolve closed this as completed Dec 12, 2022