
yolov8 inference #96

Open
jhxiang opened this issue Apr 21, 2023 · 3 comments
@jhxiang

jhxiang commented Apr 21, 2023

Why is there such a large difference in yolov8 end2end inference time with and without warmup? Without warmup it takes about 700 ms; with warmup only 6 ms. Doesn't the Infer function always copy the image to the GPU, run inference, and then copy the result from the GPU back to host memory? Does the image no longer need to be copied from host memory to the GPU after warmup?
Also, if I'm processing video and need to stream video frames to the GPU in real time, warmup wouldn't be applicable, right? How should I handle that?

@Yuanlin-Zhao

Could you share your contact info? I have some questions I'd like to ask you.

@jhxiang
Author

jhxiang commented May 8, 2023

It's on my profile page.

@Linaom1214
Owner

Why is there such a large difference in yolov8 end2end inference time with and without warmup? Without warmup it takes about 700 ms; with warmup only 6 ms. Doesn't the Infer function always copy the image to the GPU, run inference, and then copy the result from the GPU back to host memory? Does the image no longer need to be copied from host memory to the GPU after warmup? Also, if I'm processing video and need to stream video frames to the GPU in real time, warmup wouldn't be applicable, right? How should I handle that?

After the model is loaded, you can warm it up: run inference a few times on dummy data. Then, once the real traffic arrives, run inference on it directly.
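The pattern above can be sketched as follows. This is a minimal, self-contained illustration, not the repository's actual API: `Predictor` here is a hypothetical stand-in for a TensorRT end2end engine wrapper, and the simulated one-time delay stands in for the real lazy-initialization costs (CUDA context creation, memory-pool allocation, kernel selection) that the first inference pays. The per-frame host-to-GPU copy still happens on every call; warmup only moves the one-time setup out of the measured path, which is why it works fine for live video too.

```python
import time
import numpy as np

class Predictor:
    """Hypothetical stand-in for a TensorRT end2end engine wrapper."""

    def __init__(self):
        self._initialized = False

    def infer(self, img):
        # The first call pays one-time costs (CUDA context, memory pools,
        # kernel selection); simulated here with a sleep.
        if not self._initialized:
            time.sleep(0.05)  # stands in for the ~700 ms cold-start cost
            self._initialized = True
        # Stands in for copy-to-GPU, inference, and copy-back.
        return float(img.mean())

predictor = Predictor()

# Warm up with dummy data of the same shape/dtype as real frames.
dummy = np.zeros((640, 640, 3), dtype=np.uint8)
for _ in range(3):
    predictor.infer(dummy)

# Real frames (e.g. read from a video stream) now hit the fast path:
# only the per-frame work remains, the one-time setup is already paid.
frame = np.random.randint(0, 255, (640, 640, 3), dtype=np.uint8)
t0 = time.perf_counter()
predictor.infer(frame)
print(f"per-frame latency after warmup: {(time.perf_counter() - t0) * 1e3:.1f} ms")
```

In a real pipeline the warmup loop runs once at startup, right after engine load, so the first live video frame is no slower than the rest.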


3 participants