
Serialization Error in verifyHeader: 0 (Magic tag does not match) #3156

Closed
HoangTienDuc opened this issue Jul 16, 2021 · 7 comments
Labels
question Further information is requested

Comments

@HoangTienDuc

I successfully deployed my DALI model on Triton.
After that, I switched to deploying the same DALI model on deepstream-triton.
I created the DALI model inside my deepstream-triton 5.1.21.02-triton container, but it still fails with a magic-tag mismatch.

E0705 08:34:25.996654 18559 logging.cc:43] coreReadArchive.cpp (32) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
E0705 08:34:25.996738 18559 logging.cc:43] INVALID_STATE: std::exception
E0705 08:34:25.996744 18559 logging.cc:43] INVALID_CONFIG: Deserialize the cuda engine failed.
W0705 08:34:25.996751 18559 autofill.cc:225] Autofiller failed to detect the platform for retinaface_preprocess (verify contents of model directory or use --log-verbose=1 for more details)
W0705 08:34:25.996756 18559 autofill.cc:248] Proceeding with simple config for now
I0705 08:34:25.997093 18559 model_repository_manager.cc:810] loading: retinaface_preprocess:1
E0705 08:34:26.007827 18559 model_repository_manager.cc:986] failed to load 'retinaface_preprocess' version 1: Not found: unable to load backend library: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: cannot allocate memory in static TLS block
ERROR: infer_trtis_server.cpp:1044 Triton: failed to load model retinaface_preprocess, triton_err_str:Invalid argument, err_msg:load failed for model 'retinaface_preprocess': version 1: Not found: unable to load backend library: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: cannot allocate memory in static TLS block;

ERROR: infer_trtis_backend.cpp:45 failed to load model: retinaface_preprocess, nvinfer error:NVDSINFER_TRTIS_ERROR
ERROR: infer_trtis_backend.cpp:184 failed to initialize backend while ensuring model:retinaface_preprocess ready, nvinfer error:NVDSINFER_TRTIS_ERROR
0:00:09.895479387 18559      0x408dd20 ERROR          nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<primary-inference> nvinferserver[UID 5]: Error in createNNBackend() <infer_trtis_context.cpp:246> [UID = 5]: failed to initialize trtis backend for model:retinaface_preprocess, nvinfer error:NVDSINFER_TRTIS_ERROR
I0705 08:34:26.008037 18559 server.cc:280] Waiting for in-flight requests to complete.
I0705 08:34:26.008055 18559 server.cc:295] Timeout 30: Found 0 live models and 0 in-flight non-inference requests
0:00:09.895599365 18559      0x408dd20 ERROR          nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<primary-inference> nvinferserver[UID 5]: Error in initialize() <infer_base_context.cpp:81> [UID = 5]: create nn-backend failed, check config file settings, nvinfer error:NVDSINFER_TRTIS_ERROR
0:00:09.895611735 18559      0x408dd20 WARN           nvinferserver gstnvinferserver_impl.cpp:439:start:<primary-inference> error: Failed to initialize InferTrtIsContext
0:00:09.895617385 18559      0x408dd20 WARN           nvinferserver gstnvinferserver_impl.cpp:439:start:<primary-inference> error: Config file path: /data/deepstream-retinaface/dstest_ssd_nopostprocess.txt
0:00:09.895959470 18559      0x408dd20 WARN           nvinferserver gstnvinferserver.cpp:460:gst_nvinfer_server_start:<primary-inference> error: gstnvinferserver_impl start failed
Error: gst-resource-error-quark: Failed to initialize InferTrtIsContext (1): gstnvinferserver_impl.cpp(439): start (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference:
Config file path: /data/deepstream-retinaface/dstest_ssd_nopostprocess.txt
@klecki
Contributor

klecki commented Jul 16, 2021

Hi @HoangTienDuc,
I am not sure the error is related to DALI.
This Stack Overflow thread suggests it is related to TensorRT instead.

I suggest checking that the TensorRT version used to create your model and the one present in deepstream-triton are compatible.
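One low-tech way to compare the engines built in the two environments is to dump the leading bytes of each serialized plan file. This is only a sketch: the header layout is internal to TensorRT and the model path below is hypothetical, so this shows *whether* the files differ, not why.

```python
# Dump the leading bytes of a serialized TensorRT engine (.plan/.engine).
# Engines serialized by different TensorRT versions carry different headers,
# which is what triggers "Magic tag does not match" on deserialization.
# Comparing the dumps produced in each container is a quick sanity check.
import binascii

def dump_header(path: str, n: int = 16) -> str:
    """Return the first `n` bytes of `path` as a hex string."""
    with open(path, "rb") as f:
        return binascii.hexlify(f.read(n)).decode()

# Example (hypothetical model path):
# print(dump_header("/models/retinaface/1/model.plan"))
```

If the dumps differ between the container that built the engine and the one trying to load it, rebuild the engine with the TensorRT version shipped in deepstream-triton.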

@HoangTienDuc
Author

I tested my DALI model on Triton Inference Server and it ran successfully, so I believe there are no errors in my DALI model.
I also use the same version for Triton Inference Server and deepstream-triton (I mean the Triton 20.11-py3 tag).
I am wondering: does deepstream-triton support DALI?

@klecki klecki added the question Further information is requested label Jul 16, 2021
@klecki
Contributor

klecki commented Jul 17, 2021

I checked with the DeepStream and Triton developers.
The DeepStream-Triton build (x86 only) is built on top of the Triton image, so it supports the DALI backend as well.

deepstream-triton:5.1.21.02-triton is based on Triton 20.11, which is quite old; that is probably the reason for the TensorRT version mismatch. (The error you pasted is not related to DALI.)

If you want a more recent Triton, you can try the DS 6.0 EA version, which is built on top of Triton 21.02.

@HoangTienDuc
Author

Thanks for your guidance. Is DS 6.0 EA available?
I checked NVIDIA NGC and there is no recent version of DeepStream there.

@klecki
Contributor

klecki commented Jul 19, 2021

I suspect that "EA" may mean early access; I will ask when it will be available and get back to you.

@klecki
Contributor

klecki commented Jul 19, 2021

Yes, this is an early access version; I don't know the official release date for the final DS 6.

What you can do in the meantime is install the DeepStream package on top of Triton Server yourself. Triton from 20.11 to 21.02 should be compatible, as far as I'm told.

@HoangTienDuc
Author

HoangTienDuc commented Jul 20, 2021

Thank you very much.
