Update Intel's Inference Engine deep learning backend #11587
vpisarev merged 3 commits into opencv:3.4
Conversation
@dkurt, Hi, could you share the test method used to get the results in the table?
@pengli, This is a summary of the https://github.com/opencv/opencv/blob/3.4/modules/dnn/perf/perf_net.cpp performance tests; you may build OpenCV with these tests enabled. To download the models, run https://github.com/opencv/opencv_extra/blob/3.4/testdata/dnn/download_models.py and export the following paths to the environment. Instructions on how to enable the Inference Engine backend in OpenCV: https://github.com/opencv/opencv/wiki/Intel%27s-Deep-Learning-Inference-Engine-backend. Please note that the resulting table is formatted in GitHub's Markdown syntax. Two models, MobileNet_v1_SSD_TensorFlow and MobileNet_v2_SSD_TensorFlow, are not represented in the tests (I tested them by replacing paths at …).
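A minimal sketch of these steps in a shell session. The data path and binary location are illustrative assumptions, not from the thread; `OPENCV_TEST_DATA_PATH` is the environment variable OpenCV's test suites read:

```shell
# Sketch of preparing and running the dnn perf tests (paths are illustrative).
# 1. Fetch the test models with the script from opencv_extra:
#      python download_models.py
# 2. Point the test suite at the downloaded data:
export OPENCV_TEST_DATA_PATH=/path/to/opencv_extra/testdata
# 3. From the OpenCV build directory, run the dnn perf binary:
#      ./bin/opencv_perf_dnn
echo "test data: $OPENCV_TEST_DATA_PATH"
```

The commented-out commands assume an opencv_extra checkout and a finished OpenCV build; only the variable export is essential for the tests to locate the models.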
```diff
 set(INF_ENGINE_LIBRARIES "")
-set(ie_lib_list inference_engine)
+set(ie_lib_list inference_engine cpu_extension)
```
Do we really need to link with CPU extension directly?
Removed the cpu_extension dependency, which means we still split networks into multiple Inference Engine subgraphs at unsupported layers. By default, this creates efficiency gaps (see OpenFace, YOLOv3). In the case of several Inference Engine graphs on CPU we need to set …. On average, …
Added OpenCL target tests for the YOLOv3 model.
@dkurt, is it ready to be merged? I do not have any objections from my side.
@vpisarev, Maybe the only remaining issue is that I didn't find a way to disable active thread waiting, even with an environment variable.
@dkurt, it looks like there is no standard way to do it :( https://stackoverflow.com/questions/32970102/how-to-control-global-openmp-settings-from-c-c. Let's merge your patch in 👍
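For reference, the OpenMP runtime does expose a standard environment variable for the waiting behavior, though it is read only at runtime startup, so it cannot be changed from C/C++ once threads exist. A minimal sketch (the thread count value is an arbitrary example):

```shell
# OMP_WAIT_POLICY is the standard OpenMP environment variable controlling
# whether idle worker threads spin (ACTIVE) or sleep (PASSIVE). It is read
# once at startup, so export it before launching the program.
export OMP_WAIT_POLICY=PASSIVE
export OMP_NUM_THREADS=4
echo "wait policy: $OMP_WAIT_POLICY, threads: $OMP_NUM_THREADS"
```

This only influences the process being launched; it does not provide the in-process control the Stack Overflow discussion above is asking about.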
This pull request changes

Measured efficiency in milliseconds (median times):

- CV CPU: `DNN_BACKEND_DEFAULT`, `DNN_TARGET_CPU`
- CV GPU, fp32: `DNN_BACKEND_DEFAULT`, `DNN_TARGET_OPENCL`
- CV GPU, fp16: `DNN_BACKEND_DEFAULT`, `DNN_TARGET_OPENCL_FP16`
- IE CPU: `DNN_BACKEND_INFERENCE_ENGINE`, `DNN_TARGET_CPU`
- IE GPU, fp32: `DNN_BACKEND_INFERENCE_ENGINE`, `DNN_TARGET_OPENCL`
- IE GPU, fp16: `DNN_BACKEND_INFERENCE_ENGINE`, `DNN_TARGET_OPENCL_FP16`
- IE NCS: `DNN_BACKEND_INFERENCE_ENGINE`, `DNN_TARGET_MYRIAD`

Test hardware:

- CPU: Intel® Core™ i7-6700K CPU @ 4.00GHz x 8
- GPU: Intel® HD Graphics 530 (Skylake GT2)
- NCS: Intel® Movidius™ Neural Compute Stick