
ValueError while converting gpytorch model to ONNX #2443

Open
supersjgk opened this issue Nov 20, 2023 · 1 comment
🐛 ValueError while converting gpytorch model to ONNX

I followed the tutorial Converting Exact GP Models to TorchScript. Everything is the same; the only difference is that instead of using torch.jit.trace, I used torch.onnx.export:

To reproduce

with torch.no_grad(), gpytorch.settings.fast_pred_var(), gpytorch.settings.trace_mode():
    model.eval()
    test_x = torch.randn(100)
    pred = model(test_x)
    wrapper = MeanVarModelWrapper(model)
    torch.onnx.export(wrapper, test_x, 'gp.onnx', verbose=True)  # this line raises the error
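For reference, the MeanVarModelWrapper above is the one from the TorchScript tutorial, roughly as sketched below. The DummyGP class is a hypothetical stand-in (not part of the original report) for the trained ExactGP model, so the snippet runs on its own without gpytorch:

```python
import torch


class MeanVarModelWrapper(torch.nn.Module):
    """Wraps a GP model so forward() returns plain (mean, variance) tensors,
    which tracing-based exporters can handle."""

    def __init__(self, gp):
        super().__init__()
        self.gp = gp

    def forward(self, x):
        output_dist = self.gp(x)
        return output_dist.mean, output_dist.variance


class DummyGP(torch.nn.Module):
    """Hypothetical stand-in for a trained GPyTorch ExactGP: returns an
    object exposing .mean and .variance, like gpytorch's MultivariateNormal."""

    def forward(self, x):
        return torch.distributions.Normal(loc=x, scale=torch.ones_like(x))


wrapper = MeanVarModelWrapper(DummyGP())
mean, var = wrapper(torch.zeros(3))  # mean: zeros(3), var: ones(3)
```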

**Stack trace/error message**

/usr/local/lib/python3.10/dist-packages/gpytorch/models/exact_prediction_strategies.py:280: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if joint_covar.size(-1) <= settings.max_eager_kernel_size.value():
/usr/local/lib/python3.10/dist-packages/gpytorch/kernels/kernel.py:502: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if not x1_.size(-1) == x2_.size(-1):
/usr/local/lib/python3.10/dist-packages/gpytorch/lazy/lazy_evaluated_kernel_tensor.py:366: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if res.shape != self.shape:
/usr/local/lib/python3.10/dist-packages/linear_operator/operators/_linear_operator.py:1409: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  elif not self.is_square:
/usr/local/lib/python3.10/dist-packages/gpytorch/distributions/multivariate_normal.py:318: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if variance.lt(min_variance).any():
/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/jit_utils.py:307: UserWarning: Constant folding in symbolic shape inference fails: zero-dimensional tensor (at position 0) cannot be concatenated (Triggered internally at ../torch/csrc/jit/passes/onnx/shape_type_inference.cpp:439.)
  _C._jit_pass_onnx_node_shape_type_inference(node, params_dict, opset_version)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-14-4f8da2da46ab> in <cell line: 1>()
      4     pred = model(test_x)
      5     wrapper = MeanVarModelWrapper(model)
----> 6     torch.onnx.export(wrapper, test_x, 'gp.onnx',verbose=True)

6 frames
/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py in export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, opset_version, do_constant_folding, dynamic_axes, keep_initializers_as_inputs, custom_opsets, export_modules_as_functions, autograd_inlining)
    514     """
    515 
--> 516     _export(
    517         model,
    518         args,

/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, opset_version, do_constant_folding, dynamic_axes, keep_initializers_as_inputs, fixed_batch_size, custom_opsets, add_node_names, onnx_shape_inference, export_modules_as_functions, autograd_inlining)
   1594             _validate_dynamic_axes(dynamic_axes, model, input_names, output_names)
   1595 
-> 1596             graph, params_dict, torch_out = _model_to_graph(
   1597                 model,
   1598                 args,

/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py in _model_to_graph(model, args, verbose, input_names, output_names, operator_export_type, do_constant_folding, _disable_torch_constant_prop, fixed_batch_size, training, dynamic_axes)
   1137 
   1138     try:
-> 1139         graph = _optimize_graph(
   1140             graph,
   1141             operator_export_type,

/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py in _optimize_graph(graph, operator_export_type, _disable_torch_constant_prop, fixed_batch_size, params_dict, dynamic_axes, input_names, module)
    675     _C._jit_pass_onnx_lint(graph)
    676 
--> 677     graph = _C._jit_pass_onnx(graph, operator_export_type)
    678     _C._jit_pass_onnx_lint(graph)
    679     _C._jit_pass_lint(graph)

/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py in _run_symbolic_function(graph, block, node, inputs, env, operator_export_type)
   1938                     k: symbolic_helper._node_get(node, k) for k in node.attributeNames()
   1939                 }
-> 1940                 return symbolic_fn(graph_context, *inputs, **attrs)
   1941 
   1942         attrs = {

/usr/local/lib/python3.10/dist-packages/torch/onnx/symbolic_helper.py in wrapper(g, *args, **kwargs)
    304                     f"{FILE_BUG_MSG}"
    305                 )
--> 306             return fn(g, *args, **kwargs)
    307 
    308         return wrapper

/usr/local/lib/python3.10/dist-packages/torch/onnx/symbolic_opset13.py in diagonal(g, self, offset, dim1, dim2)
    772     if rank is not None:
    773         axes = list(range(rank))
--> 774         axes.remove(dim1)
    775         axes.remove(dim2)
    776         self = g.op("Transpose", self, perm_i=axes + [dim1, dim2])

ValueError: list.remove(x): x not in list
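The final frame shows `axes.remove(dim1)` failing inside the ONNX symbolic for `diagonal`. A minimal pure-Python sketch of one way this can happen (an assumption about the failure mode, not a confirmed diagnosis: the traced graph may hand the symbolic an un-normalized negative dim such as -2, which cannot be found in `list(range(rank))`):

```python
# Reproduce the bare ValueError from the last traceback frame.
rank = 2
dim1 = -2                   # a negative dim, e.g. from diagonal(dim1=-2, dim2=-1)
axes = list(range(rank))    # [0, 1] -- contains only non-negative indices

try:
    axes.remove(dim1)       # -2 is not in [0, 1]
    msg = None
except ValueError as e:
    msg = str(e)

print(msg)                  # list.remove(x): x not in list
```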

How do I convert a gpytorch model to ONNX? Please help if anyone has successfully done so.

System information

Please complete the following information:

  • GPyTorch Version - 1.11
  • PyTorch Version - 2.1.0+cu118
@supersjgk supersjgk added the bug label Nov 20, 2023
@supersjgk (Author)

@gpleiss @jacobrgardner @Balandat Please help
