Hi, I tried running this model on iOS (using Core ML) and it works fine, but when I ran it on Android with ONNX, the inference time went up to about 180 ms, whereas it runs at around 20-30 ms on iOS.
I tried to look into the ONNX export and found there are some issues.
Has anyone faced the same problem, and how did you fix it?
@mayhemantt I had issues with the InstanceNorm nodes when I exported to ONNX. I solved them by running the exported model through onnx-simplifier, which then allowed me to build an engine with TensorRT.
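For reference, here is a minimal sketch of the onnx-simplifier workflow described above, using the `onnxsim` Python API. The filenames `model.onnx` and `model_simplified.onnx` are placeholders for your own export paths.

```python
import onnx
from onnxsim import simplify

# Load the exported ONNX graph (placeholder filename).
model = onnx.load("model.onnx")

# Simplify: folds constants and fuses subgraphs (e.g. decomposed
# InstanceNorm patterns) into single ops; `check` reports whether the
# simplified model still produces the same outputs.
model_simplified, check = simplify(model)
assert check, "Simplified ONNX model failed the validation check"

onnx.save(model_simplified, "model_simplified.onnx")
```

The same thing can also be done from the command line with `onnxsim model.onnx model_simplified.onnx`.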