Thank you for providing the source code. I deployed the M-LSD_320_tiny_fp16.tflite model on an i7 Windows PC with 12 CPU cores using the TensorFlow Lite C API, but the inference speed is only 4 fps. The paper claims the tiny model achieves real-time performance of 30.7–56.8 fps on an iPhone (A14 Bionic chipset) and an Android phone (Snapdragon 865 chipset), so I expected it to run faster on an Intel i7 CPU. Could you explain why it performs so poorly on a PC? Did you apply any additional optimization when deploying it on the smartphones?
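For context, this is roughly the kind of setup in question: a minimal sketch of TensorFlow Lite C API inference with multi-threading and the XNNPACK delegate enabled, which is one of the usual levers for CPU speed on x86. The model path, thread count, and tensor shapes below are assumptions for illustration, not confirmed settings from this repo.

```cpp
// Minimal sketch: TF Lite C API inference with multi-threading + XNNPACK.
// Thread count (8) and the 1x320x320x4 input shape are assumptions.
#include <cstdio>
#include "tensorflow/lite/c/c_api.h"
#include "tensorflow/lite/delegates/xnnpack/xnnpack_delegate.h"

int main() {
  TfLiteModel* model = TfLiteModelCreateFromFile("M-LSD_320_tiny_fp16.tflite");
  if (!model) { std::fprintf(stderr, "failed to load model\n"); return 1; }

  // Use several CPU threads and route supported ops through XNNPACK,
  // which provides optimized kernels (and fp16 handling) on desktop CPUs.
  TfLiteInterpreterOptions* options = TfLiteInterpreterOptionsCreate();
  TfLiteInterpreterOptionsSetNumThreads(options, 8);

  TfLiteXNNPackDelegateOptions xnn_opts = TfLiteXNNPackDelegateOptionsDefault();
  xnn_opts.num_threads = 8;
  TfLiteDelegate* xnn_delegate = TfLiteXNNPackDelegateCreate(&xnn_opts);
  TfLiteInterpreterOptionsAddDelegate(options, xnn_delegate);

  TfLiteInterpreter* interpreter = TfLiteInterpreterCreate(model, options);
  TfLiteInterpreterAllocateTensors(interpreter);

  // Fill the input tensor with the preprocessed image, e.g.:
  // TfLiteTensor* input = TfLiteInterpreterGetInputTensor(interpreter, 0);
  // TfLiteTensorCopyFromBuffer(input, image_data, image_bytes);

  TfLiteInterpreterInvoke(interpreter);

  // Read results back, e.g.:
  // const TfLiteTensor* output = TfLiteInterpreterGetOutputTensor(interpreter, 0);
  // TfLiteTensorCopyToBuffer(output, out_data, out_bytes);

  TfLiteInterpreterDelete(interpreter);
  TfLiteInterpreterOptionsDelete(options);
  TfLiteXNNPackDelegateDelete(xnn_delegate);
  TfLiteModelDelete(model);
  return 0;
}
```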
Solved the problem by converting the TFLite model to ONNX and running it with the ONNX Runtime C++ API, which gives speeds similar to those reported in the paper. Thanks.
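For anyone following the same route: the conversion can be done with tf2onnx (it accepts a TFLite input), and the C++ inference side might look like the sketch below. The ONNX filename, input/output tensor names, and shapes here are placeholders for illustration, not the exact names from the converted graph; check them with a viewer such as Netron or via the session metadata.

```cpp
// Sketch: running the converted ONNX model with ONNX Runtime's C++ API.
// Model filename, tensor names, and the 1x320x320x4 shape are assumptions.
#include <array>
#include <vector>
#include <onnxruntime_cxx_api.h>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "mlsd");
  Ort::SessionOptions opts;
  opts.SetIntraOpNumThreads(8);  // use several CPU threads
  opts.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_ALL);

#ifdef _WIN32
  Ort::Session session(env, L"M-LSD_320_tiny.onnx", opts);
#else
  Ort::Session session(env, "M-LSD_320_tiny.onnx", opts);
#endif

  // Input tensor: float32, assumed shape 1x320x320x4 for the tiny model.
  std::array<int64_t, 4> shape{1, 320, 320, 4};
  std::vector<float> input(1 * 320 * 320 * 4, 0.0f);  // fill with preprocessed image
  Ort::MemoryInfo mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
  Ort::Value input_tensor = Ort::Value::CreateTensor<float>(
      mem, input.data(), input.size(), shape.data(), shape.size());

  // Placeholder names; must match the converted graph.
  const char* input_names[] = {"input"};
  const char* output_names[] = {"output"};
  auto outputs = session.Run(Ort::RunOptions{nullptr},
                             input_names, &input_tensor, 1,
                             output_names, 1);
  // outputs[0] then holds the feature maps to feed into M-LSD post-processing.
  return 0;
}
```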