
lavfi/dnn: Batch Execution in TensorFlow Backend #427

Closed
wants to merge 14 commits

Conversation


@shubhanshu02 (Contributor) commented Jul 15, 2021

TODO

This pull request will be reopened after pull request #423 gets merged.

Patch Set Description

This patchset is part of the optional deliverables of the GSoC project Async Support for TensorFlow Backend in FFmpeg.

Objective: Implement batch execution in the TensorFlow backend.

Relevant Patches in the PR

790eac3 lavfi/dnn_backend_tf: Batch Execution Support

This commit adds an async execution mechanism for common use
in the TensorFlow and Native backends.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit refactors the get-async-result function for common
use in all three backends.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit adds documentation for the typedefs and functions in
the async module for common use in the DNN backends.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit adds a function to execute a TFInferRequest and documentation
for the functions related to TFInferRequest.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit enables async execution in the TensorFlow backend
and adds a function to flush extra frames.

The async execution mechanism executes the TFInferRequests on
a detached thread. The following is a comparison of this
mechanism with the existing sync mechanism, using the TensorFlow
C API 2.5 GPU variant.

Async Mode: 0m57.064s
Sync Mode: 1m1.959s

The above was measured with the super resolution filter using the ESPCN model.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
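
As a rough illustration of the detached-thread pattern described above (a minimal sketch, not the actual dnn_backend_tf.c code; the request struct and function names are assumptions):

```c
/* Minimal sketch, assuming a request struct that carries everything one
 * session run needs. Names are illustrative, not FFmpeg symbols. */
#include <pthread.h>

typedef struct InferRequestSketch {
    void *run_args;   /* input/output tensors, completion callback, etc. */
} InferRequestSketch;

static void *infer_thread_routine(void *arg)
{
    InferRequestSketch *request = arg;
    /* ... run the session and invoke the completion callback here ... */
    (void)request;
    return NULL;
}

static int start_inference_async(InferRequestSketch *request)
{
    pthread_t thread;
    pthread_attr_t attr;
    int ret = 0;

    pthread_attr_init(&attr);
    /* detached: the caller never joins, results come back via callback */
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    if (pthread_create(&thread, &attr, infer_thread_routine, request))
        ret = -1;
    pthread_attr_destroy(&attr);
    return ret;
}
```
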
This patch adds error handling for cases where execute_model_tf
fails: it clears the used memory in the TFRequestItem and finally pushes
the item back to the request queue.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
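
A minimal sketch of that failure path, with hypothetical types and a fixed-size pool standing in for the real request queue:

```c
/* Sketch only: on an execution error, free what the request holds and
 * return the item to a free-request pool so later frames can reuse it.
 * RequestSketch/RequestPool are stand-ins, not FFmpeg structures. */
#include <stdlib.h>

typedef struct RequestSketch {
    void *run_data;                          /* per-run allocations */
} RequestSketch;

typedef struct RequestPool {
    RequestSketch *items[8];
    int count;
} RequestPool;

static int on_execute_failure(RequestSketch *req, RequestPool *pool)
{
    free(req->run_data);                     /* clear the used memory */
    req->run_data = NULL;
    if (pool->count < 8)
        pool->items[pool->count++] = req;    /* push back for reuse */
    return -1;                               /* propagate the error */
}
```
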
Since requests run in parallel, the execution status can become
inconsistent. We avoid resolving this with a mutex, as it would
allow only a single TF_Session to run at a time; instead, add a
TF_Status to each TFRequestItem.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
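
A sketch of the per-request status idea using the TensorFlow C API's TF_Status; the struct layout is assumed, not the exact TFRequestItem definition:

```c
/* Each request owns its own TF_Status, so concurrent session runs never
 * race on a shared status object and no mutex is needed for it. */
#include <tensorflow/c/c_api.h>

typedef struct RequestItemSketch {
    TF_Status *status;    /* owned by this request */
    /* ... tensors, callback, etc. ... */
} RequestItemSketch;

static int request_item_init(RequestItemSketch *item)
{
    item->status = TF_NewStatus();
    return item->status ? 0 : -1;
}

static int request_item_succeeded(const RequestItemSketch *item)
{
    return TF_GetCode(item->status) == TF_OK;
}

static void request_item_uninit(RequestItemSketch *item)
{
    TF_DeleteStatus(item->status);
    item->status = NULL;
}
```
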
Frame allocation and filling the TaskItem with execution
parameters are common to the three backends. This commit moves
this logic to dnn_backend_common.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
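
A sketch of what such a shared task-filling helper can look like; the struct fields and helper name are assumptions, not the exact dnn_backend_common API:

```c
/* Sketch: gather per-frame execution parameters once in common code
 * instead of repeating the same assignments in every backend. */
#include <stdint.h>
#include <stddef.h>

typedef struct TaskSketch {
    const char *input_name;
    const char **output_names;
    uint32_t nb_output;
    void *in_frame;
    void *out_frame;
    int async;
    int do_ioproc;
} TaskSketch;

static int fill_task_sketch(TaskSketch *task, const char *input_name,
                            const char **output_names, uint32_t nb_output,
                            void *in_frame, void *out_frame,
                            int async, int do_ioproc)
{
    if (!task || !in_frame || !out_frame)
        return -1;
    task->input_name   = input_name;
    task->output_names = output_names;
    task->nb_output    = nb_output;
    task->in_frame     = in_frame;
    task->out_frame    = out_frame;
    task->async        = async;
    task->do_ioproc    = do_ioproc;
    return 0;
}
```
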
This commit unifies the async and sync inference mechanisms
in the DNN module. For now, execution is temporarily disabled
in all three backends.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit unifies the inference functions in the TensorFlow backend
and introduces an async flag in TFOptions to switch between
the modes.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
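
For illustration, such a backend option can be exposed through an AVOption table; the structs and table below are a sketch under assumed names, not the actual TFOptions definition:

```c
/* Sketch, assuming a context struct that embeds the backend options. */
#include <stddef.h>
#include <stdint.h>
#include <libavutil/opt.h>

typedef struct TFOptionsSketch {
    uint8_t async;                  /* 1 = asynchronous execution */
} TFOptionsSketch;

typedef struct TFContextSketch {
    const AVClass *class;
    TFOptionsSketch options;
} TFContextSketch;

#define OFFSET(x) offsetof(TFContextSketch, x)
static const AVOption tf_options_sketch[] = {
    { "async", "use async execution", OFFSET(options.async),
      AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1, AV_OPT_FLAG_FILTERING_PARAM },
    { NULL }
};
```
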
This commit unifies the execution functions in the OpenVINO backend
and introduces an async flag in OVOptions to select the
execution mode.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit rearranges the code in the Native backend to use the TaskItem
for inference and enables the unified inference mechanism in the backend.
It also adds a flush function, as required by the unified mechanism.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
Remove the async flag from the filters' perspective after the unification
of async and sync modes in the DNN backend.

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
Add batch execution to the TensorFlow backend

Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
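
A rough sketch of the batching idea: several queued frames are copied into one input tensor whose leading dimension is the batch size, so a single inference call covers all of them (dimensions and names assumed):

```c
/* Sketch: pack nb_frames frames into a single N x H x W x C float tensor
 * so one session run processes the whole batch. */
#include <stddef.h>
#include <string.h>

static void fill_batch_input(float *batch_tensor,      /* N*H*W*C elements */
                             float *const frames[],    /* per-frame data   */
                             int nb_frames, size_t frame_elems)
{
    for (int i = 0; i < nb_frames; i++) {
        /* each frame occupies one slice along the batch dimension */
        memcpy(batch_tensor + (size_t)i * frame_elems,
               frames[i], frame_elems * sizeof(float));
    }
}
```
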

@uartie (Collaborator) commented Aug 12, 2021

Closing due to no activity. Feel free to reopen if needed, after fixing conflicts.

@uartie closed this Aug 12, 2021
@shubhanshu02 (Contributor, Author) commented

Hello @uartie, can you reopen this PR? I can't seem to find a reopen button here. Thank you.
