
onnxruntime CPU acceleration #3759

Closed
chocolate-byte opened this issue Aug 20, 2021 · 0 comments
@chocolate-byte
I trained a similar MobileNet model, but one 400x300 frame takes about 100 ms on CPU. Does anyone know of a reference approach for reaching speeds like the one quoted? ONNX export and multi-threading do help with model speed, but surely they can't get it all the way down to 7 ms?

I have compared onnxruntime and PyTorch, and ORT has a clear advantage in CPU inference speed: taking CRNN as an example, with a 32*128 input it is 7 ms vs 40 ms. The exact latency still depends on your backbone size and CPU clock speed. Spend some money on a higher-clocked CPU... or check whether your current CPU supports AVX512, and whether Paddle has AVX512 optimizations.
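A simple timing harness makes latency comparisons like the 7 ms vs 40 ms figure above reproducible. The sketch below is a hedged illustration, not code from this thread: the matmul stand-in workload and the `make_ort_runner` helper name are assumptions, and `"crnn.onnx"` is a placeholder path you would replace with your own exported model.

```python
import time
import numpy as np

def benchmark(run, feed, warmup=5, iters=50):
    """Average wall-clock latency of run(feed), in milliseconds."""
    for _ in range(warmup):          # warm-up runs stabilize caches/threads
        run(feed)
    start = time.perf_counter()
    for _ in range(iters):
        run(feed)
    return (time.perf_counter() - start) / iters * 1000.0

def make_ort_runner(model_path):
    """Build an ONNX Runtime CPU runner; model_path is a placeholder here."""
    import onnxruntime as ort        # pip install onnxruntime
    sess = ort.InferenceSession(model_path,
                                providers=["CPUExecutionProvider"])
    name = sess.get_inputs()[0].name
    return lambda x: sess.run(None, {name: x})

# Demo with a stand-in workload (a matmul on a 32x128 input, matching the
# CRNN input size mentioned above) so the harness runs without a model file;
# swap in make_ort_runner("crnn.onnx") to time a real exported model.
w = np.random.rand(128, 128).astype(np.float32)
x = np.random.rand(32, 128).astype(np.float32)
ms = benchmark(lambda inp: inp @ w, x)
print(f"avg latency: {ms:.3f} ms")
```

Averaging over many iterations after a warm-up matters on CPU, since the first few runs often include one-off costs (thread-pool spin-up, memory allocation) that inflate single-shot measurements.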

OK, hello, could you provide some sample code here? I previously ran my tests following the official approach.

You can refer to RapidOCR.
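Before swapping hardware as suggested above, it can be worth tuning the session options ONNX Runtime exposes for CPU inference. The sketch below is illustrative, not taken from RapidOCR or this thread: the `make_cpu_session` helper name, the thread count of 4, and the model path are all placeholder assumptions.

```python
def make_cpu_session(model_path, threads=4):
    """Build an ONNX Runtime CPU session with commonly tuned options.

    `model_path` and `threads=4` are illustrative placeholders, not
    settings recommended in this thread.
    """
    import onnxruntime as ort  # pip install onnxruntime

    so = ort.SessionOptions()
    # Parallelism inside individual operators (e.g. a large matmul).
    so.intra_op_num_threads = threads
    # Enable all graph-level optimizations (operator fusion, etc.).
    so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
    return ort.InferenceSession(
        model_path, so, providers=["CPUExecutionProvider"]
    )
```

A reasonable starting point for `threads` is the number of physical cores; oversubscribing logical cores can hurt latency for small models.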

Originally posted by @SWHL in #2950 (comment)

@chocolate-byte changed the title from "onnxruntime CPU speed" to "onnxruntime CPU acceleration" on Aug 20, 2021