
Compatibility of TensorRT optimized engine with deepstream-app #48

Closed
ahnjw72 opened this issue Jan 23, 2020 · 1 comment

Comments


ahnjw72 commented Jan 23, 2020

Thanks to Demo #3: SSD, I've successfully built a TensorRT optimized 'engine' for SSD.
However, I get the following error message when I try to use the engine with NVIDIA's deepstream-app (which can be configured to run a prebuilt TensorRT engine in its pipeline):

deepstream-app: nvdsiplugin_ssd.cpp:72: FlattenConcat::FlattenConcat(const void*, size_t): Assertion `mConcatAxisID == 1 || mConcatAxisID == 2 || mConcatAxisID == 3' failed.
Aborted (core dumped)

I'm not familiar with the deepstream-app plugin structure, and I hope an expert can explain the main cause of this problem and what I should do to use a TensorRT optimized engine in the DeepStream pipeline.
(I had expected that DeepStream could run inference with a user-supplied TensorRT optimized engine without any extra coding or plugin-library building.)

jkjung-avt (Owner) commented

@ahnjw72 Thanks for your comment, but I'm sorry: I don't currently use the DeepStream SDK, and I don't have time to look into this. I suggest you post the issue on the NVIDIA Developer Forum and seek help from NVIDIA.
