Switch to new OpenVINO API after 2022.1 release #22957
Conversation
@dkurt Thank you for the update! CI failures are related to an OOM issue (32 GB).
… consumes 17 GB with OpenVINO 2022.1 and OpenCL enabled. Could you take a look at this?
There must be a leak. Can reproduce with a single test on both CPU and GPU:
Update: found two problems: an asynchronous callback and a cycled dependency between …
@dkurt Thank you 👍
Need to fix the failed OCL tests with 2021.4; thanks for configuring. @alalek, the problem is reproduced on the 4.x branch. Appears on …
…Net. Add InferRequest callback only for async inference. Do not capture InferRequest object.
@alalek, is it too soon to try compiling with the newly released OpenVINO?
Switch to new OpenVINO API after 2022.1 release

* Pass Layer_Test_Convolution_DLDT.Accuracy/0 test
* Pass test Test_Caffe_layers.Softmax
* Failed 136 tests
* Fix Concat. Failed 120 tests
* Custom nGraph ops. 19 failed tests
* Set and get properties from Core
* Read model from buffer
* Change MaxPooling layer output names. Restore reshape
* Cosmetic changes
* Cosmetic changes
* Override getOutputsInfo
* Fixes for OpenVINO < 2022.1
* Async inference for 2021.4 and less
* Compile model with config
* Fix serialize for 2022.1
* Asynchronous inference with 2022.1
* Handle 1d outputs
* Work with model with dynamic output shape
* Fixes with 1d output for old API
* Control outputs by nGraph function for all OpenVINO versions
* Refer inputs in PrePostProcessor by indices
* Fix cycled dependency between InfEngineNgraphNode and InfEngineNgraphNet. Add InferRequest callback only for async inference. Do not capture InferRequest object.
* Fix tests thresholds
* Fix HETERO:GPU,CPU plugin issues with unsupported layer

… InferenceEngine::DataPtr … completely

Pull Request Readiness Checklist
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
Patch to opencv_extra has the same branch name.