multiple inference problem #17
`config_infer_primary.txt`:

```ini
[property]
custom-network-config=yolov3_person.cfg
batch-size=1

[class-attrs-all]
```

`config_infer_secondary1.txt`:

```ini
[property]
custom-network-config=custom_yolov4_helmet.cfg
batch-size=16

[class-attrs-all]
```

`deepstream_app_config.txt` (section headers as posted):

```ini
[application]
[tiled-display]
[source0]
num-sources=1
[sink0]
[osd]
[streammux]
[primary-gie]
[secondary-gie0]
[tests]
```
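For reference, a minimal sketch of how the pgie/sgie sections of `deepstream_app_config.txt` are typically wired together in the deepstream-app config format. The ids and file names below are assumptions for illustration, not values taken from this thread:

```ini
[primary-gie]
enable=1
gie-unique-id=1
config-file=config_infer_primary.txt

[secondary-gie0]
enable=1
gie-unique-id=2
# run the secondary model on objects produced by the primary gie
operate-on-gie-id=1
config-file=config_infer_secondary1.txt
```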
I will test this today.
Hi @XiangjiBU, sorry for the delay. I found the problem and updated the repo. Thanks.
I tried the new repo, and my config follows `multipleInferences.md` exactly, but it still doesn't work.
Thanks, I tried again, but I still see two issues:
`cluster-mode` selects which NMS (non-maximum suppression) mode DeepStream uses. In my code, an NMS function is already added to `nvdsparsebbox_Yolo.cpp` for the YOLOv3 and YOLOv4 models. With `cluster-mode=2` you would add another NMS on top of the coded one, so it is better to use `cluster-mode=4`. To decrease the number of bboxes, increase `pre-cluster-threshold`, where 0 is 0% and 1.0 is 100% confidence required to show a bbox.
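To make the two steps in that comment concrete, here is a plain-Python sketch of confidence filtering (what raising `pre-cluster-threshold` does) followed by greedy NMS. This is an illustrative analogue, not the actual code in `nvdsparsebbox_Yolo.cpp`; the function names and default thresholds are my own:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(detections, conf_threshold=0.25, iou_threshold=0.45):
    """detections: list of (box, score). Returns the kept detections."""
    # Step 1: drop low-confidence boxes (the pre-cluster-threshold step).
    dets = [d for d in detections if d[1] >= conf_threshold]
    # Step 2: greedy NMS, highest score first; a box survives only if it
    # does not overlap an already-kept box above the IoU threshold.
    dets.sort(key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in dets:
        if all(iou(box, k[0]) < iou_threshold for k in kept):
            kept.append((box, score))
    return kept
```

Running a second NMS on boxes that already went through this (which is what `cluster-mode=2` would do on top of the coded NMS) is redundant work at best, which is why the comment recommends `cluster-mode=4`.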
I believe it will work because the code is the same for all models; it differs only in the kernel, which calls different functions for each model. I tested only with YOLOv4, but I will do further tests with the other models.
Hi @XiangjiBU, please see my `multipleInferences.md` again. I reverted the files and updated them. Now you can use different versions/models with separate gie folders without errors (in particular, see the "Editing yoloPlugin.h" section).
it works, THX !!!
Hi, thanks for sharing!
I ran multiple inference on a Jetson Xavier (JetPack 4.4), but no results are detected. The terminal prints the following.
I tested the two models individually, and each works well standalone.
```
Using winsys: x11
Deserialize yoloLayer plugin: yolo_99
Deserialize yoloLayer plugin: yolo_108
Deserialize yoloLayer plugin: yolo_117
0:00:03.522306324 30756 0x7f3c002380 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 2]: deserialized trt engine from :/home/admin123/deepstream/DeepStream-Yolo/native/model_b16_gpu0_fp16_helmet.engine
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT data 3x416x416
1 OUTPUT kFLOAT yolo_99 24x52x52
2 OUTPUT kFLOAT yolo_108 24x26x26
3 OUTPUT kFLOAT yolo_117 24x13x13
0:00:03.522553823 30756 0x7f3c002380 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 2]: Use deserialized engine model: /home/admin123/deepstream/DeepStream-Yolo/native/model_b16_gpu0_fp16_helmet.engine
0:00:03.533651338 30756 0x7f3c002380 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary_gie_0> [UID 2]: Load new model:/home/admin123/deepstream/DeepStream-Yolo/examples/multiple_inferences/sgie1/config_infer_secondary1.txt sucessfully
Deserialize yoloLayer plugin: yolo_51
Deserialize yoloLayer plugin: yolo_59
0:00:03.886455896 30756 0x7f3c002380 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/home/admin123/deepstream/DeepStream-Yolo/native/model_b1_gpu0_fp16_personv3.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT data 3x416x416
1 OUTPUT kFLOAT yolo_51 18x13x13
2 OUTPUT kFLOAT yolo_59 18x26x26
0:00:03.886608479 30756 0x7f3c002380 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /home/admin123/deepstream/DeepStream-Yolo/native/model_b1_gpu0_fp16_personv3.engine
0:00:03.888024542 30756 0x7f3c002380 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/admin123/deepstream/DeepStream-Yolo/examples/multiple_inferences/pgie/config_infer_primary.txt sucessfully
Runtime commands:
h: Print this help
q: Quit
NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.
** INFO: <bus_callback:181>: Pipeline ready
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:167>: Pipeline running
WARNING: Num classes mismatch. Configured: 1, detected by network: 0
WARNING: Num classes mismatch. Configured: 1, detected by network: 0
WARNING: Num classes mismatch. Configured: 1, detected by network: 0
WARNING: Num classes mismatch. Configured: 1, detected by network: 0
WARNING: Num classes mismatch. Configured: 1, detected by network: 0
WARNING: Num classes mismatch. Configured: 1, detected by network: 0
WARNING: Num classes mismatch. Configured: 1, detected by network: 0
WARNING: Num classes mismatch. Configured: 1, detected by network: 0
```
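One plausible reading of the `Num classes mismatch` warning, from the engine info in the log itself: a YOLO detection head's output channel count per grid cell is `num_anchors * (num_classes + 5)`. The helmet model's `yolo_99/108/117` layers report 24 channels and the person model's `yolo_51/59` report 18, so, assuming the usual 3 anchors per scale, the helmet model was trained with 3 classes while the config apparently sets `num-detected-classes=1`:

```python
def classes_from_channels(channels, num_anchors=3):
    """Invert channels = num_anchors * (num_classes + 5),
    assuming the standard 3 YOLO anchors per scale."""
    per_anchor = channels // num_anchors  # num_classes + 5
    return per_anchor - 5

print(classes_from_channels(24))  # helmet model: 3 classes
print(classes_from_channels(18))  # person model: 1 class
```

If that arithmetic matches your training setup, `num-detected-classes` in the secondary config (and the matching `labels.txt`) would need to be 3, not 1. This is an inference from the log, not a confirmed fix from the thread.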