G-API: Add synchronous execution for IE backend #22588
Conversation
cv::gimpl::ie::RequestPool::RequestPool(std::vector<InferenceEngine::InferRequest>&& requests) {
    for (size_t i = 0; i < requests.size(); ++i) {
        m_requests.emplace_back(
            std::make_shared<AsyncInferExecutor>(std::move(requests[i]),
Here the question is how to switch between AsyncInferExecutor and SyncInferExecutor. I see two options here:
- Provide the user a handle: params.cfgInferenceAPI(ParamDesc::API api) // ParamDesc::API::ASYNC by default
- Calculate it based on the number of infer requests: if nireq > 1, it must be AsyncInferExecutor, since there is no sense in inferring synchronously with multiple infer requests.

I prefer the first option since it's more flexible, and sometimes there is a difference between sync/async mode even with nireq == 1.
@dmatveev do you mind option 1)?
What if we have a different nireq each time - is that possible? Voting for the first option.
Updated, discussed locally
@smirnov-alexey Could you have a look, please?
Force-pushed from 4939d37 to 5e6f737
@TolyaTalamanov please add some tests
What is the reason for these changes? I see the synchronous path isn't used yet?
Do you plan to make it user-controllable? What are the benefits?
// RunF - function which sets blobs and runs async inference.
// CallbackF - function which obtains output blobs and posts them to the output.
// SetInputDataF - function which sets input data.
// ReadOutputDataF - function which reads output data.
struct Task {
If there's no body (the execution callback) in Task, is it still a Task?
Actually the body is set_input_data and read_output_data. I guess it can still be called a Task.
I'd be glad to have the opportunity to control it.
Force-pushed from 5e6f737 to cf5db9b
Added
@dmatveev @smirnov-alexey Addressed comments, have a look, please :)
👍
@TolyaTalamanov Please rebase the PR and fix conflicts.
Force-pushed from 16fb9cb to 5a0c85b
Done
@asmorkalov It's ready to be merged
Pull Request Readiness Checklist
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
Patch to opencv_extra has the same branch name.
Overview
The implementation is straightforward: replace IE::InferRequest with IInferExecutor, which encapsulates how execution should be started (Infer() / StartAsync()). Every kind of IInferExecutor must notify RequestPool when execution finishes by using the callback that was passed at instantiation (see m_notify()); this helps RequestPool understand which requests are working and which are idle.