Does the tvm based CPU get accelerated? #54
Comments
Hi @jinfagang , Thanks for your attention here! At that time, my first plan was to make each backend compile and infer successfully, and because time was limited, I haven't carefully compared the inference time between them yet. Now, I'm rewriting and refactoring that part. BTW, we welcome contributions in any form.
And I've uploaded the notebooks for the TVM inference. This repo is also my first glance at TVM.
@zhiqwang That's weird, TVM should be faster when compared on the same CPU device? At least it should be faster than onnxruntime when CPU is chosen as the provider.
Hi @jinfagang , I agree with you on this point, and this is my goal. The current implementation on the TVM side is still rough and hasn't been tuned yet.
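For reference, here is a minimal sketch (not the repo's actual notebook code) of how the TVM path could be compiled and timed on CPU. The ONNX file name, input name, and input shape below are assumptions for illustration only:

```python
# Sketch: compile an ONNX export with TVM/Relay and time CPU inference.
import time

import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

onnx_model = onnx.load("yolov5s.onnx")           # hypothetical ONNX export
shape_dict = {"images": (1, 3, 640, 640)}        # assumed input name/shape

# Convert the ONNX graph to Relay and build it for the local CPU.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)  # plain LLVM target, no tuning

dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("images", np.random.rand(1, 3, 640, 640).astype("float32"))

# Warm up, then average a few runs.
module.run()
start = time.perf_counter()
for _ in range(20):
    module.run()
print(f"TVM CPU: {(time.perf_counter() - start) / 20 * 1000:.2f} ms / image")
```

Note that an untuned `llvm` target like this can easily be slower than onnxruntime; auto-tuning (e.g. with AutoScheduler) is usually needed before TVM shows its CPU advantage.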
Hi @jinfagang I've added a rough comparison of the inference time consumed in a Jupyter notebook (IPython).
You could check the latest updated notebook for more details. BTW, the time displayed in the onnxruntime notebook is on GPU; I just tested it locally. Although this comparison is a bit rough, we can draw a conclusion from it, so I'll close this issue. If you have more concerns please let me know.
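In the same spirit, a minimal sketch (assuming the same hypothetical ONNX export and input name as above) of timing onnxruntime explicitly on the CPU execution provider, so the numbers are comparable to the TVM CPU run rather than the GPU time shown in the notebook:

```python
# Sketch: time onnxruntime inference on CPU only.
import time

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("yolov5s.onnx", providers=["CPUExecutionProvider"])
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)  # assumed input shape

# Warm up, then average a few runs.
sess.run(None, {"images": dummy})
start = time.perf_counter()
for _ in range(20):
    sess.run(None, {"images": dummy})
print(f"onnxruntime CPU: {(time.perf_counter() - start) / 20 * 1000:.2f} ms / image")
```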
Hi, just wondering whether the TVM-based deployed model gets accelerated or not?
Compared with vanilla PyTorch on CPU, onnxruntime, or OpenVINO?
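For context, a minimal sketch of the vanilla PyTorch CPU baseline mentioned here; the torch.hub model name and input size are assumptions for illustration:

```python
# Sketch: time eager PyTorch inference on CPU as a baseline.
import time

import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True).eval()
dummy = torch.rand(1, 3, 640, 640)

with torch.no_grad():
    model(dummy)                         # warm up
    start = time.perf_counter()
    for _ in range(20):
        model(dummy)
print(f"PyTorch CPU: {(time.perf_counter() - start) / 20 * 1000:.2f} ms / image")
```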