About arm platform (ncnn model) #3
It is caused by the deconv layer weights being empty: Caffe will initialize new weights, but NCNN does not.
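A common workaround for empty deconvolution weights is to fill them with a bilinear upsampling kernel before converting the caffemodel. This is only a sketch of that standard trick, not code from this repo; `bilinear_kernel` is a hypothetical helper, and writing it into a real caffemodel would additionally need pycaffe:

```python
import numpy as np

def bilinear_kernel(size):
    """Return a size x size bilinear upsampling kernel, the usual
    initialization for deconvolution (upsampling) layer weights
    that were left empty in the trained model."""
    factor = (size + 1) // 2
    # Kernel center: integer for odd sizes, half-offset for even sizes.
    center = factor - 1 if size % 2 == 1 else factor - 0.5
    og = np.ogrid[:size, :size]
    return (1 - abs(og[0] - center) / factor) * (1 - abs(og[1] - center) / factor)

# Example: a 4x4 kernel to copy into each (in_channel, out_channel) slice
# of the empty deconv weight blob before running the ncnn converter.
k = bilinear_kernel(4)
```

With pycaffe, one would then assign this kernel into `net.params['<deconv_layer>'][0].data` for each channel and re-save the caffemodel, so the converter no longer sees an empty blob.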
Thanks a lot, I will provide it!
@Charrin Qualcomm 835, 640*480 VGA (inference only)
Thank you! I've updated your test result in my README.
@hanson-young Hi, thanks very much for your work! I think the model graph is not optimal, so you could try this ~
I have tried it; it speeds things up by about 10% on a Qualcomm 625.
@nihui Thanks, nihui! The problem is solved: CMake 3.9.2 has a bug when invoking OpenMP during NDK builds; downgrading to 3.5.1 fixed it. https://gitlab.kitware.com/cmake/cmake/issues/17351
@hanson-young Have you compared the ARM-platform inference speed of RetinaFace vs. the MTCNN model?
@pineking It's hard to say; it depends on the specific platform and use case.
@hanson-young my test time on 835 is about 20 ms slower than your inference time,could you share your NCNN lib and include files for android?thank you very much!! |
@hanjw123 I compiled it on May 29, but you can get the ncnn lib from here: https://github.com/Tencent/ncnn/releases
@hanson-young My inference results are wrong. What are your NDK version and ANDROID_PLATFORM version?
@hanjw123 Hi, I also tested the speed of the RetinaFace model; would you like to discuss it together?
@pineking I ran it on ARM aarch64, not as an Android application.
I tested the speed of the Caffe mnet model on a Raspberry Pi 4B using Alibaba's MNN inference framework. The Raspberry Pi 4B CPU is a BCM2711 (quad-core Cortex-A72 @ 1.5 GHz). Test resolution was VGA (640*480), averaged over 10 loops:
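The loop-and-average method used in these benchmarks can be sketched as follows. This is a generic timing harness, not the poster's actual test code; `run_once` stands in for the real MNN/ncnn inference call, and the warmup count is an assumption:

```python
import time

def benchmark(run_once, loops=10, warmup=2):
    """Average wall-clock latency (ms) over `loops` timed runs,
    after `warmup` untimed runs to let caches and thread pools settle."""
    for _ in range(warmup):
        run_once()
    t0 = time.perf_counter()
    for _ in range(loops):
        run_once()
    return (time.perf_counter() - t0) / loops * 1000.0  # ms per inference

# Usage with a stand-in CPU workload in place of real inference:
avg_ms = benchmark(lambda: sum(i * i for i in range(10000)))
```

Averaging over 10 loops, as done above, smooths out scheduler jitter and thermal variation, which matter on small boards like the Raspberry Pi 4B.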
Do you have any plans to port this to ncnn on the ARM platform? I failed to convert the Caffe model you provide to ncnn.