The C++ Inference Library of OpenPose-Plus v2.0.0 #249

Merged: 73 commits into master on May 31, 2020

Conversation

@ganler (Contributor) commented May 5, 2020

Features

  • New abstraction with better performance:
    • Operator API: lets users manage DNN inference and post-processing in detail (~90 FPS in video processing).
    • Stream API: built on the Operator API, a scheduler performs end-to-end stream processing of video; the input/output streams are set with simple C++ stream operators such as << or >> (~120 FPS in video processing). A usage sketch follows below.
  • ONNX model parser support.
  • Documentation.

Performance was measured on a GTX 1070 Ti with a 6-physical-core CPU.
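
A minimal usage sketch of the stream-operator style (all type and function names below are illustrative stand-ins, not the library's actual identifiers; see the repository examples for the real API):

```cpp
#include <iostream>
#include <string>
#include <utility>

struct pose_engine {};  // would wrap the TensorRT engine + post-processing

struct video_stream {
    explicit video_stream(std::string p) : path(std::move(p)) {}
    std::string path;
};

// Feed decoded frames into the inference pipeline.
video_stream &operator>>(video_stream &in, pose_engine &) {
    std::cout << "reading frames from " << in.path << '\n';
    return in;
}

// Drain post-processed results into the output stream.
video_stream &operator<<(video_stream &out, pose_engine &) {
    std::cout << "writing results to " << out.path << '\n';
    return out;
}

int main() {
    pose_engine engine;
    video_stream input("input.mp4"), output("output.mp4");
    input >> engine;   // set the input side of the pipeline
    output << engine;  // set the output side of the pipeline
}
```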

WIP

  • Documentation.
    • Doxygen Comments.
    • Doxygen to Sphinx via Breathe.
    • Deployment.
  • Post-Processing.
    • Proposal Network (NMS); a generic sketch follows after this list.
  • Benchmark over new models.
    • OpenPose model.
    • Lightweight OpenPose.
    • Proposal Network.
  • MISC.
    • Tests on ONNX parser.
    • Add serialized TensorRT Engine file.
    • Fix merge conflicts.
    • CIs.
    • Ensure the scripts and Docker images run.
    • Provide a simple end-to-end Docker application that generates a processed video.
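
For the proposal-network NMS item above, the post-processing amounts to window-based non-maximum suppression over keypoint heatmaps: a pixel is kept as a peak only if it exceeds a confidence threshold and is the maximum within its neighborhood. A generic CPU sketch of this idea (not necessarily the PR's actual implementation):

```cpp
#include <vector>

struct Peak { int x, y; float score; };

// Keep (x, y) as a peak iff heat[y][x] >= threshold and no neighbor within
// a (2*radius+1)^2 window is strictly larger (plateau ties are all kept).
std::vector<Peak> heatmap_nms(const std::vector<float> &heat, int w, int h,
                              float threshold, int radius = 1) {
    std::vector<Peak> peaks;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            const float v = heat[y * w + x];
            if (v < threshold) continue;
            bool is_max = true;
            for (int dy = -radius; dy <= radius && is_max; ++dy)
                for (int dx = -radius; dx <= radius && is_max; ++dx) {
                    const int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (heat[ny * w + nx] > v) is_max = false;
                }
            if (is_max) peaks.push_back({x, y, v});
        }
    return peaks;
}
```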

Looking for preliminary reviews and contributions.

@ganler (Contributor, Author) commented May 10, 2020

The documentation comments are done and the merge conflicts are resolved.
Do you have time to look at the Travis CI? The failure seems to be a problem with CUDA.

@ganler (Contributor, Author) commented May 10, 2020

@lgarithm I saw your comments about OpenCV in the CI scripts. Is there a problem building OpenCV in Docker? Would this help: https://hub.docker.com/r/schickling/opencv

@lgarithm (Member) commented:

The CI is failing because CUDA has become a required dependency.
We used to have a stub (fake_openpose_plus.cpp) for testing purposes, which has now been removed.

@lgarithm (Member) commented:

There are two options to fix it:
1. Use a container with TensorRT and CUDA for CI.
2. Revert fake_openpose_plus.cpp.
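
For reference, such a stub is just a translation unit with no-op definitions of the public entry points, so CI machines without CUDA/TensorRT can still compile and link the test targets. An illustrative sketch (the original fake_openpose_plus.cpp is not shown in this thread, and these names are hypothetical):

```cpp
// fake_openpose_plus.cpp (illustrative sketch, not the original file):
// dummy definitions that satisfy the linker without requiring a GPU stack.
#include <stdexcept>
#include <vector>

namespace openpose_plus {  // hypothetical API surface; real declarations
struct human_t {};         // would live in the library headers

std::vector<human_t> estimate(const unsigned char * /*bgr*/,
                              int /*width*/, int /*height*/) {
    // The stub never runs real inference; it exists only so that the
    // build and link steps succeed on machines without CUDA/TensorRT.
    throw std::runtime_error("stub build: compiled without CUDA/TensorRT");
}
}  // namespace openpose_plus
```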

@ganler (Contributor, Author) commented May 11, 2020

@lgarithm I think option 1 is the better choice, because fake_openpose_plus may not simulate the real situation of compiling for the GPU. Can we get this done using nvidia-docker? (I'm not very familiar with Docker.)

@ganler (Contributor, Author) commented May 11, 2020

I removed fake_openpose_plus because the older APIs are deprecated.

@ganler (Contributor, Author) commented May 21, 2020

Thanks to @Gyx-One, we successfully converted the official OpenPose model to ONNX format, which enables TensorRT execution. As a first result, we got 20 FPS when inferring a batch of 20 images.
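
For context, parsing an ONNX model into a TensorRT engine uses the standard nvonnxparser C++ API, roughly as below (TensorRT 7-era calls; the model path is a placeholder and the PR's actual loader may differ in detail):

```cpp
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <cstdint>
#include <iostream>

// Minimal logger required by the TensorRT builder and parser.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char *msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cerr << msg << '\n';
    }
};

int main() {
    Logger logger;
    auto *builder = nvinfer1::createInferBuilder(logger);
    // ONNX models require an explicit-batch network definition.
    const auto flags = 1U << static_cast<std::uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto *network = builder->createNetworkV2(flags);
    auto *parser = nvonnxparser::createParser(*network, logger);

    if (!parser->parseFromFile("openpose.onnx",  // placeholder model path
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING))) {
        std::cerr << "failed to parse the ONNX model\n";
        return 1;
    }

    auto *config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1u << 30);  // 1 GiB of builder scratch space
    auto *engine = builder->buildEngineWithConfig(*network, *config);
    if (!engine) return 1;
    // engine->serialize() yields a blob that can be cached on disk so the
    // (slow) engine build is only done once. Cleanup omitted for brevity.
    return 0;
}
```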

@ganler changed the title from "(WIP) The C++ Inference Library of OpenPose-Plus v2.0.0" to "The C++ Inference Library of OpenPose-Plus v2.0.0" on May 31, 2020
@lgarithm (Member) commented:

@ganler could you make the files end with '\n'?

@ganler merged commit 6fb31bf into master on May 31, 2020