This repository has been archived by the owner on Nov 16, 2023. It is now read-only.
Background
We are trying to run models from silero-models via onnx.js.
We have had various issues with ONNX export, but most of them were eventually resolved by following the exporter's error messages and testing with onnxruntime (we had to heavily simplify some model parts).
Model Export
The models were ported as follows:
1. Original model in PyTorch =>
2. Fusing convolutions (w/o quantization) =>
3. Simplified model in TorchScript =>
4. ONNX (=> TensorFlow via onnx-tensorflow)
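Step 2 (fusing convolutions without quantization) can be sketched with `torch.quantization.fuse_modules`; the module below is a toy stand-in for illustration, not our actual model:

```python
# Minimal sketch of Conv+BN+ReLU fusion without quantization.
# The Block module and its layer names are illustrative only.
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, 3)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

m = Block().eval()  # fusion with BatchNorm requires eval mode
# Folds the BN statistics into the conv weights; bn/relu become Identity,
# so the exported graph contains fewer nodes and no BatchNorm ops.
fused = torch.quantization.fuse_modules(m, [["conv", "bn", "relu"]])

x = torch.randn(1, 1, 16, 16)
# Numerically, the fused module matches the original within float tolerance.
```

Fusing before export keeps the TorchScript/ONNX graph smaller and avoids exporting BatchNorm nodes separately.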
Model export script in PyTorch (step 4 above):
torch.onnx.export(
    onnx_model,                # model being run
    inputs,                    # model input (or a tuple for multiple inputs)
    "en_v1_test.onnx",         # where to save the model (can be a file or file-like object)
    export_params=True,        # store the trained parameter weights inside the model file
    opset_version=12,          # the ONNX version to export the model to
    do_constant_folding=True,  # whether to execute constant folding for optimization
    input_names=['input'],     # the model's input names
    output_names=['output'],   # the model's output names
    dynamic_axes={'input':  {0: 'batch', 1: 'samples'},
                  'output': {0: 'batch', 1: 'frames'}},
    verbose=True,
)
We have tested that the converted models work fine with:
onnxruntime
TensorFlow (on CPU and GPU)
Problem
But when we try to run the model in onnx.js, we face an issue similar to #168 .
Upon closer inspection in the Netron app we see that:
slicing operators (static and dynamic alike) store their parameters as int64 values
some constants like 0, 1, -1 are stored as int64 values
If we then inspect the PyTorch export log, we can see that Long() appears 228 times, so I believe this is not some idiosyncrasy of our models (except for the first normalization and the STFT, we mostly use off-the-shelf components) but a feature of ONNX export in general.
Question
It looks like int64 storage of some constants is an artefact of PyTorch's ONNX export.
I have seen this issue addressed in other ONNX-related projects.
Can this be done here as well?
Or does anyone have a recipe for exporting the models differently?
Alternative Solutions
Transform to TensorFlow, then to tf.js.
This looks like an inferior option, simply because you have to convert three times instead of once.
I will also raise a similar issue on PyTorch forums.