lavfi/dnn: Batch Execution in TensorFlow Backend #427
Closed
Conversation
This commit adds an async execution mechanism for common use in the TensorFlow and Native backends. Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit refactors the get-async-result function for common use in all three backends. Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit adds the documentation of typedefs and functions in the async module for common use in DNN backends. Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit adds a function for execution of TFInferRequest and documentation for functions related to TFInferRequest. Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit enables async execution in the TensorFlow backend and adds a function to flush extra frames. The async execution mechanism executes the TFInferRequests on a detached thread. The following compares this mechanism with the existing sync mechanism on the TensorFlow C API 2.5 GPU variant, measured on the super resolution filter with the ESPCN model. Async Mode: 0m57.064s. Sync Mode: 1m1.959s. Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This patch adds error handling for cases where execute_model_tf fails: it clears the memory used in the TFRequestItem and finally pushes it back to the request queue. Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
Since requests run in parallel, the shared execution status becomes inconsistent. Guarding it with a mutex would serialize execution to a single TF_Session at a time, so instead each TFRequestItem gets its own TF_Status. Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
Frame allocation and filling the TaskItem with execution parameters are common to the three backends. This commit moves that logic to dnn_backend_common. Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit unifies the async and sync inference mechanism in the DNN module. For now, the execution is disabled in all three backends temporarily. Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit unifies the inference functions in the TensorFlow backend and introduces async flag in the TFOptions to be used to switch between the modes. Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit unifies the execution functions in the OpenVINO backend and introduces async flag in the OVOptions to be used to select the execution mode. Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit rearranges the code in Native Backend to use the TaskItem for inference and enables the unified inference in the backend. It also adds flush function as required in the unified mechanism. Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
Remove the async flag from the filters' perspective after the unification of the async and sync modes in the DNN backend. Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
Add batch execution to the TensorFlow backend Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
Close. No activity. Feel free to reopen if needed, after fixing conflicts.
Hello @uartie, can you reopen this PR? I don't seem to find any reopen button here. Thank you.
TODO
This pull request will be reopened after pull request #423 gets merged.
Patch Set Description
This patchset is a part of optional deliverables in the GSoC project Async Support for TensorFlow Backend in FFmpeg.
Objective: Implement batch execution in the TensorFlow backend
Relevant Patches in the PR
790eac3 lavfi/dnn_backend_tf: Batch Execution Support