Some tests about the inference time #12
@ccsvd Hi,
I did similar experiments, and the conclusion is consistent. I used onnxruntime to profile the two exported ONNX files on an RTX 3090. For hrnet_w32_384x288, the average inference time is 11.7 ms.
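For reference, here is a minimal sketch of such a profiling run with onnxruntime. The model file name, the NCHW input shape (1, 3, 384, 288), and the iteration counts are assumptions for illustration, not taken from this thread.

```python
import time
import numpy as np
import onnxruntime as ort

# Enable onnxruntime's built-in profiler (writes a JSON trace on end_profiling()).
so = ort.SessionOptions()
so.enable_profiling = True

# Assumed file name; replace with the actual exported ONNX model.
sess = ort.InferenceSession("hrnet_w32_384x288.onnx", sess_options=so,
                            providers=["CUDAExecutionProvider"])

inp = sess.get_inputs()[0]
x = np.random.randn(1, 3, 384, 288).astype(np.float32)  # assumed NCHW shape

# Warm-up runs so CUDA kernels and allocators are initialized before timing.
for _ in range(10):
    sess.run(None, {inp.name: x})

# Average wall-clock latency over repeated runs.
n = 100
t0 = time.perf_counter()
for _ in range(n):
    sess.run(None, {inp.name: x})
print(f"avg infer time: {(time.perf_counter() - t0) / n * 1000:.1f} ms")

print("profile trace:", sess.end_profiling())  # per-op timings in JSON
```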
Hi @clover978, @ccsvd
So why does Lite-HRNet take more time than HRNet? And then why do we need Lite-HRNet? @clover978
Maybe Lite-HRNet yields better efficiency on specific hardware (e.g. CPU, FPGA), since it does have fewer FLOPs in theory.
Hi @ycszen,
I tested the native HRNet and Lite-HRNet MNN models on my PC. Although their FLOPs are 309M vs. 203M, their inference times are almost the same. I think the reason is that Lite-HRNet does more memory reads and writes than native HRNet during inference. Is that right?
Here is some info:
| model | parameters | infer memory (MB) | MAdd | FLOPs | MemRead (B) | MemWrite (B) | duration |
| --- | --- | --- | --- | --- | --- | --- | --- |
| native-hrnet | | | | | | | |
| lite-hrnet | | | | | | | |
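One way to sanity-check the memory-bound explanation is to compare arithmetic intensity (FLOPs per byte of memory traffic) for the two models: if Lite-HRNet moves more bytes per FLOP, its lower FLOP count will not show up as lower latency on bandwidth-limited hardware. A minimal sketch, using the FLOPs quoted in this thread and hypothetical placeholder byte counts (replace them with the measured MemRead/MemWrite values from the table):

```python
def arithmetic_intensity(flops, mem_read_bytes, mem_write_bytes):
    """Roofline-style metric: FLOPs per byte of memory traffic."""
    return flops / (mem_read_bytes + mem_write_bytes)

# FLOPs are the 309M / 203M figures quoted above; the byte counts are
# hypothetical placeholders, NOT measured values from this thread.
models = {
    "native-hrnet": dict(flops=309e6, mem_read=200e6, mem_write=60e6),
    "lite-hrnet":   dict(flops=203e6, mem_read=260e6, mem_write=80e6),
}

for name, m in models.items():
    ai = arithmetic_intensity(m["flops"], m["mem_read"], m["mem_write"])
    print(f"{name}: {ai:.2f} FLOPs/byte")

# A lower FLOPs/byte ratio means the model is more memory-bound, so
# reducing FLOPs alone will not reduce latency proportionally.
```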