
TRT inference error #15

Closed
rnekk2 opened this issue Oct 9, 2020 · 3 comments
rnekk2 commented Oct 9, 2020

```
[TensorRT] ERROR: INVALID_ARGUMENT: Cannot deserialize with an empty memory buffer.
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
  File "test_video.py", line 33, in
    detector = TensorRTRetinaFace(input_imshape,inference_imshape)
  File "/data/DSFD-Pytorch-Inference/face_detection/retinaface/tensorrt_wrap.py", line 38, in init
    self.context = self.engine.create_execution_context()
AttributeError: 'NoneType' object has no attribute 'create_execution_context'
```

I'm seeing these errors when running inference with TensorRT. How can I fix this?

TensorRT version: 7.1.3
Torch version: 1.4.0
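For context, the `AttributeError` in the traceback means `deserialize_cuda_engine` returned `None` (the serialized engine buffer was empty because the build failed), and the wrapper then called a method on it. A minimal sketch of a defensive loader using the standard TensorRT 7 Python API; the `engine_path` argument and error messages are illustrative, not code from this repository:

```python
# Sketch: load a serialized TensorRT engine and fail loudly instead of
# crashing later with "'NoneType' object has no attribute ...".
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_context(engine_path):
    with open(engine_path, "rb") as f:
        data = f.read()
    if not data:
        # An empty buffer produces the INVALID_ARGUMENT error seen above.
        raise RuntimeError(f"{engine_path} is empty; the engine build likely failed.")
    runtime = trt.Runtime(TRT_LOGGER)
    engine = runtime.deserialize_cuda_engine(data)
    if engine is None:
        # deserialize_cuda_engine returns None on failure rather than raising.
        raise RuntimeError("Engine deserialization failed; rebuild the engine.")
    return engine.create_execution_context()
```

This does not fix the underlying build failure, but it surfaces the real problem at the point where it occurs.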

@hukkelas
Owner

Hi, you were not able to build an engine.

Sadly, I do not have the capacity to help you debug this TensorRT issue, as it is not something I'm very experienced with.
If you are not familiar with TensorRT, I recommend using the default PyTorch version.

@JohannesTK

@hukkelas which TensorRT version are you using?

@rnekk2
Author

rnekk2 commented Oct 14, 2020

The issue was fixed by increasing the workspace size. I am using TRT 7.
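For anyone hitting the same error: in the TensorRT 7 Python API, the workspace size is set on the builder config before building the engine. A minimal sketch, assuming an ONNX-based build; the `onnx_path` name and the 2 GB size are illustrative, not values from this repository:

```python
# Sketch: build a TensorRT 7 engine with a larger builder workspace,
# which is the fix reported in this issue.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, workspace_gb=2):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            return None  # build failed; serializing this would give an empty buffer
    config = builder.create_builder_config()
    # Give tactic selection more scratch memory; too small a workspace can
    # make the build fail and lead to the deserialization errors above.
    config.max_workspace_size = workspace_gb << 30  # in bytes
    return builder.build_engine(network, config)
```

Note that in TensorRT 8+ this moved to `config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, ...)`, so the snippet above applies to the TRT 7 API discussed in this thread.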
