
Some tests on the inference time #12

Open
ccsvd opened this issue Apr 20, 2021 · 5 comments

Comments

@ccsvd

ccsvd commented Apr 20, 2021

Hi @ycszen,
I tested the native-hrnet and lite-hrnet MNN models on my PC. Although their FLOPs are 309M vs 203M, their inference times are almost the same. I think the reason is that lite-hrnet does more memory reads/writes than native-hrnet during inference. Is that right?
Here is some info (columns: parameters, infer memory (MB), MAdd, FLOPs, MemRead (B), MemWrite (B), duration):
native-hrnet:
[screenshot: profiler output for native-hrnet]
lite-hrnet:
[screenshot: profiler output for lite-hrnet]
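One way to sanity-check the memory-bound explanation is to compare the arithmetic intensity (FLOPs per byte of memory traffic) of the two models. A minimal sketch in Python; only the 309M/203M FLOPs come from this thread, and the byte counts are hypothetical placeholders to be replaced with the profiler's MemRead(B)/MemWrite(B) values:

```python
# Sketch: compare arithmetic intensity (FLOPs per byte moved) of two models.
# A low value suggests a memory-bound workload, where lower FLOPs alone
# will not translate into lower latency.

def arithmetic_intensity(flops: float, mem_read_b: float, mem_write_b: float) -> float:
    """FLOPs executed per byte of memory traffic (read + write)."""
    return flops / (mem_read_b + mem_write_b)

# FLOPs are from this thread (309M vs 203M); the byte counts below are
# hypothetical placeholders -- substitute the MemRead(B)/MemWrite(B)
# values from the MNN profiler output.
native = arithmetic_intensity(309e6, mem_read_b=80e6, mem_write_b=40e6)
lite = arithmetic_intensity(203e6, mem_read_b=120e6, mem_write_b=60e6)
print(f"native-hrnet: {native:.2f} FLOPs/byte")
print(f"lite-hrnet:   {lite:.2f} FLOPs/byte")
```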

@YaqiLYU

YaqiLYU commented Apr 21, 2021

@ccsvd Hi,

  1. From the info you provided, the FLOPs, MemRead(B), and MemWrite(B) of native-hrnet are all larger than those of lite-hrnet, yet you said "lite-hrnet does more memory reads/writes than native-hrnet during inference". Is there some mistake?
  2. The inference time of lite-hrnet is almost 2.5 times that of native-hrnet, yet you said "their inference times are almost the same". Is there some mistake?

@clover978

clover978 commented Apr 25, 2021

I did similar experiments, and my conclusion is consistent with the above.

I used onnxruntime to profile the two exported ONNX files on an RTX 3090. The inference code is exactly the same except for the ONNX file being tested.

For hrnet_w32_384x288, the average inference time is 11.7 ms.
For lite_hrnet_30_384x288, the average inference time is 35.3 ms.
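Not the exact script used above, but a minimal sketch of this kind of measurement with onnxruntime; the file name, input shape, and iteration counts are assumptions:

```python
# Sketch: average inference time of an exported ONNX model with onnxruntime.
# The file name and input shape are assumptions (384x288 matches the
# configs mentioned above).
import time

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("lite_hrnet_30_384x288.onnx",
                            providers=["CUDAExecutionProvider"])
input_name = sess.get_inputs()[0].name
x = np.random.randn(1, 3, 384, 288).astype(np.float32)

# Warm-up runs so one-time initialization does not skew the average.
for _ in range(10):
    sess.run(None, {input_name: x})

n = 100
start = time.perf_counter()
for _ in range(n):
    sess.run(None, {input_name: x})
print(f"avg infer time: {(time.perf_counter() - start) / n * 1000:.1f} ms")
```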

@tucachmo2202

Hi @clover978, @ccsvd,
Could you give me a file to run inference on an input image? I am not familiar with mmpose, so I don't know how to use it. Thanks!
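A minimal sketch of single-image inference with the mmpose 0.x top-down API; the config/checkpoint paths are hypothetical, a whole-image bounding box stands in for a person detector, and the exact argument format varies between 0.x releases:

```python
# Sketch: single-image top-down inference with mmpose 0.x.
# The config/checkpoint paths below are hypothetical -- point them at the
# Lite-HRNet files from the mmpose model zoo.
from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
                         vis_pose_result)

config = 'configs/top_down/litehrnet/coco/litehrnet_30_coco_384x288.py'
checkpoint = 'litehrnet_30_coco_384x288.pth'
model = init_pose_model(config, checkpoint, device='cuda:0')

img = 'demo.jpg'
# With no person detector attached, feed the whole image as one box
# (x, y, w, h); this dict format is for later mmpose 0.x releases.
person_results = [{'bbox': [0, 0, 640, 480]}]

pose_results, _ = inference_top_down_pose_model(
    model, img, person_results, format='xywh')
vis_pose_result(model, img, pose_results, out_file='vis_demo.jpg')
```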

@fire717

fire717 commented Jul 14, 2021

So why does lite-hrnet cost more time than hrnet? Then why do we need lite-hrnet? @clover978

@clover978

> So why does lite-hrnet cost more time than hrnet? Then why do we need lite-hrnet? @clover978

Maybe lite-hrnet results in better efficiency on specific hardware (e.g. CPU, FPGA), since it does have lower FLOPs in theory.
