The C++ Inference Library of OpenPose-Plus v2.0.0 #249
Conversation
…l fixes required.
The documentation comments are addressed and the conflicts are resolved.
@lgarithm I saw your comments about OpenCV in the CI scripts. Is there a problem building OpenCV in Docker? Would this help? https://hub.docker.com/r/schickling/opencv
The CI is failing because CUDA has become a required dependency.
There are two options to fix this:
@lgarithm I think option 1 is the better choice, because "fake_openpose_plus" may not simulate the real situation of compiling on a GPU. Can we get this done by using
I removed
Thanks to @Gyx-One, we successfully converted the official OpenPose model to ONNX format, which enables TensorRT execution. We initially got 20 FPS when inferring a batch of 20 images.
@ganler could you make the files end with '\n'?
Features

- **Operator API**: for users to manage detailed DNN inference and post-processing (~90 FPS in video processing).
- **Stream API**: based on the `Operator API`, we built a scheduler for stream processing that provides end-to-end video processing. You can use it with simple C++ stream operators like `<<` or `>>` to set the input/output streams (~120 FPS in video processing).

The performance was tested on a GTX 1070 Ti with a 6-physical-core CPU.
WIP

- …via Breathe.

Looking for preliminary reviews and contributions.