
How to do inference for multiple images? #11

Closed
Justin-king-de opened this issue Nov 3, 2020 · 27 comments

@Justin-king-de

Hello @marcoslucianops, thank you for sharing your work. In the MULTIPLE-INFERENCES.MD file, what is meant by primary inference and secondary inference? I mean, what's the difference between them? Also, I want to run my tiny-yolov4 model on multiple images. How do I do that? Thanks in advance.

@marcoslucianops
Owner


Hi,

Multiple inference means more than one model doing inference on the source (for example, two pre-trained YOLO models running inference simultaneously or in cascade).
I think DeepStream can't read an image as a source file; it needs to be a video file, camera, or stream.

@Justin-king-de
Author

Justin-king-de commented Nov 4, 2020

Thanks @marcoslucianops for your reply. I read that there is a way to run DeepStream inference on multiple images from a directory using multifilesrc. However, I am unable to implement it. Below are the links to the relevant articles:

https://forums.developer.nvidia.com/t/deepstream-image-decode-test-for-multiple-images-dynamically/108913

https://forums.developer.nvidia.com/t/deepstream-run-it-on-a-single-image/124946

Could you please look at the above links, where this is discussed, and guide me on how to do inference for multiple images? It would be of great help to me. There is also something called deepstream-image-decode-test; could you please look into it? Thank you in advance.
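For context on the multifilesrc approach from those forum threads: multifilesrc reads frames from a printf-style filename pattern (e.g. img_%04d.jpg), so arbitrarily named images usually have to be copied or linked into a sequential series first. A minimal helper sketch (the file names and pattern here are illustrative, not from the thread):

```python
import os
import shutil
import tempfile

def sequence_images(src_dir, dst_dir, pattern="img_%04d.jpg"):
    """Copy every .jpg in src_dir into dst_dir under sequential names,
    so multifilesrc can read them via a printf-style location pattern."""
    os.makedirs(dst_dir, exist_ok=True)
    names = sorted(n for n in os.listdir(src_dir) if n.lower().endswith(".jpg"))
    for idx, name in enumerate(names):
        shutil.copy(os.path.join(src_dir, name), os.path.join(dst_dir, pattern % idx))
    return len(names)

# tiny self-demo with throwaway files
src, dst = tempfile.mkdtemp(), tempfile.mkdtemp()
for name in ("b.jpg", "a.jpg", "notes.txt"):
    open(os.path.join(src, name), "w").close()
count = sequence_images(src, dst)
print(count, sorted(os.listdir(dst)))  # → 2 ['img_0000.jpg', 'img_0001.jpg']
```

With the renamed files, a pipeline along the lines of `multifilesrc location=seq/img_%04d.jpg ! jpegdec ! ...` (as suggested in the NVIDIA forum threads) can iterate over them.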

@marcoslucianops
Owner

To run the demo, you need to put your model files (yolo.weights, yolo.cfg and config.txt) and your images in the deepstream-image-decode-test folder, then run these commands:

sudo apt-get install libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev libgstrtspserver-1.0-dev libx11-dev
cd /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-image-decode-test/
make
mv dstest_image_decode_pgie_config.txt dstest_image_decode_pgie_config.bak
mv config.txt dstest_image_decode_pgie_config.txt
./deepstream-image-decode-app image1.jpg image2.jpg

If DeepStream can't create the YOLO engine, create the engine using deepstream-app and move it to the deepstream-image-decode-test folder.

@Justin-king-de
Author

Thank you very much @marcoslucianops for your reply. I have some queries. You mentioned moving the model files into the deepstream-image-decode-test folder, including config.txt. Does that mean labels.txt?
Next, I am doing this for a yolov4-tiny model. If I put the yolov4-tiny model files there, will it work? I think DeepStream still doesn't have support for yolov4-tiny.
Also, I have followed the instructions in your repo for yolov4-tiny and created a yolov4-tiny TensorRT engine. If I put this engine file in that folder, will it work in case DeepStream fails to create the YOLO engine? In that case, what should I do about deepstream_app_config_yoloV4_tiny.txt and config_infer_primary_yoloV4_tiny.txt?

@marcoslucianops
Owner

marcoslucianops commented Nov 6, 2020

I forgot labels.txt and the nvdsinfer_custom_impl_Yolo folder; you need to put them in the deepstream-image-decode-test folder too. The config.txt file is config_infer_primary_yolo.txt edited for your YOLO files.

To use yolov4-tiny, see my tutorial.

If you have the engine, you won't need the weights and cfg files (for yolov4-tiny); you only need to set the engine path and labels path in the config.txt file.
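In config.txt, that comes down to keys roughly like the following (file names are placeholders; the key names follow the nvinfer configuration format):

```ini
[property]
model-engine-file=yolov4-tiny_fp16.engine
labelfile-path=labels.txt
batch-size=1
```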

@Justin-king-de
Author

Justin-king-de commented Nov 6, 2020

Now, if I simply put this engine file along with labels.txt, config_infer_primary_yoloV4_tiny.txt and nvdsinfer_custom_impl_Yolo in the deepstream-image-decode-test folder, will it be okay?

Yes, but the config_infer_primary_yoloV4_tiny.txt file needs to be modified for your files.

Is deepstream_app_config_yoloV4_tiny.txt not needed?

No, because all the DeepStream configuration is in the deepstream_image_decode_app.c file. If you want, you can edit it as needed.

And do I need to compile nvdsinfer_custom_impl_Yolo again in that folder?

If it was already compiled, you don't need to compile it again.

@Justin-king-de
Author

@marcoslucianops Thanks for your patience and time. I will try the way you said. I'll drop a message here if I face any issue.

@Justin-king-de
Author

Hey @marcoslucianops. I followed all your steps and ran into the error below. I put my TensorRT engine file along with labels.txt, config.txt and nvdsinfer_custom_impl_Yolo into the folder and followed your commands. The problem starts at the point where it says it can't define the YOLO type from the config file name.

Unknown or legacy key specified 'is-classifier' for group [property]
WARNING: Overriding infer-config batch-size (1) with number of sources (2)
Now playing: image1.jpg, image2.jpg,

Using winsys: x11 
Opening in BLOCKING MODE 
Opening in BLOCKING MODE 
0:00:04.977990604 15788   0x5566e76e70 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/home/nano/pytorch-YOLOv4/deepstream-image-decode-test/yolov4-tiny_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input           3x416x416       
1   OUTPUT kFLOAT boxes           2535x1x4        
2   OUTPUT kFLOAT confs           2535x16         

0:00:04.978145139 15788   0x5566e76e70 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1642> [UID = 1]: Backend has maxBatchSize 1 whereas 2 has been requested
0:00:04.978180868 15788   0x5566e76e70 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1813> [UID = 1]: deserialized backend context :/home/nano/pytorch-YOLOv4/deepstream-image-decode-test/yolov4-tiny_fp16.engine failed to match config params, trying rebuild
0:00:05.021603059 15788   0x5566e76e70 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
Yolo type is not defined from config file name:
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:05.022705374 15788   0x5566e76e70 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
0:00:05.022748604 15788   0x5566e76e70 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1821> [UID = 1]: build backend context failed
0:00:05.022778866 15788   0x5566e76e70 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1148> [UID = 1]: generate backend failed, check config file settings
0:00:05.023137519 15788   0x5566e76e70 WARN                 nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<primary-nvinference-engine> error: Failed to create NvDsInferContext instance
0:00:05.023171374 15788   0x5566e76e70 WARN                 nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<primary-nvinference-engine> error: Config file path: dstest_image_decode_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running...
ERROR from element primary-nvinference-engine: Failed to create NvDsInferContext instance
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(809): gst_nvinfer_start (): /GstPipeline:dstest-image-decode-pipeline/GstNvInfer:primary-nvinference-engine:
Config file path: dstest_image_decode_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Returned, stopping playback
Deleting pipeline

@marcoslucianops
Owner

Rename the config.txt file to dstest_image_decode_pgie_config.txt.

@Justin-king-de
Author

@marcoslucianops I already did that, but it's still not working. I followed all the steps you mentioned.

@marcoslucianops
Owner

Can you send me the DeepStream log (from when you try to run it)?

@Justin-king-de
Author

Yeah @marcoslucianops. Below is the log for 2 images. I used the command ./deepstream-image-decode-app image1.jpg image2.jpg and followed all the steps you specified. I'm running it on a Nano. Does it have anything to do with the source and sink type?

Unknown or legacy key specified 'is-classifier' for group [property]
WARNING: Overriding infer-config batch-size (1) with number of sources (2)
Now playing: image1.jpg, image2.jpg,

Using winsys: x11 
Opening in BLOCKING MODE 
Opening in BLOCKING MODE 
0:00:04.977990604 15788   0x5566e76e70 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/home/nano/pytorch-YOLOv4/deepstream-image-decode-test/yolov4-tiny_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input           3x416x416       
1   OUTPUT kFLOAT boxes           2535x1x4        
2   OUTPUT kFLOAT confs           2535x16         

0:00:04.978145139 15788   0x5566e76e70 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1642> [UID = 1]: Backend has maxBatchSize 1 whereas 2 has been requested
0:00:04.978180868 15788   0x5566e76e70 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1813> [UID = 1]: deserialized backend context :/home/nano/pytorch-YOLOv4/deepstream-image-decode-test/yolov4-tiny_fp16.engine failed to match config params, trying rebuild
0:00:05.021603059 15788   0x5566e76e70 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
Yolo type is not defined from config file name:
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:05.022705374 15788   0x5566e76e70 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
0:00:05.022748604 15788   0x5566e76e70 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1821> [UID = 1]: build backend context failed
0:00:05.022778866 15788   0x5566e76e70 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1148> [UID = 1]: generate backend failed, check config file settings
0:00:05.023137519 15788   0x5566e76e70 WARN                 nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<primary-nvinference-engine> error: Failed to create NvDsInferContext instance
0:00:05.023171374 15788   0x5566e76e70 WARN                 nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<primary-nvinference-engine> error: Config file path: dstest_image_decode_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running...
ERROR from element primary-nvinference-engine: Failed to create NvDsInferContext instance
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(809): gst_nvinfer_start (): /GstPipeline:dstest-image-decode-pipeline/GstNvInfer:primary-nvinference-engine:
Config file path: dstest_image_decode_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Returned, stopping playback
Deleting pipeline

@Justin-king-de
Author

Justin-king-de commented Nov 12, 2020

@marcoslucianops This is the log when I use one image instead of 2 images for inference. The command I used is ./deepstream-image-decode-app image1.jpg. The log in the comment above is from when I try 2 images. Both logs are different.
Also, how do I set the source and sink, since we don't have the deepstream_app_config_yoloV4_tiny.txt file here? And how do I save the output? I think we need to make changes in the deepstream_image_decode_app.c file; maybe this error arises from that. Could you look at that, please?

Unknown or legacy key specified 'is-classifier' for group [property]
Now playing: image1.jpg,

Using winsys: x11 
Opening in BLOCKING MODE 
0:00:05.019799990  4448   0x5598ef7520 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/home/mtx/pytorch-YOLOv4/deepstream/deepstream-image-decode-test/yolov4-tiny_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input           3x416x416       
1   OUTPUT kFLOAT boxes           2535x1x4        
2   OUTPUT kFLOAT confs           2535x16         

0:00:05.019951087  4448   0x5598ef7520 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /home/mtx/pytorch-YOLOv4/deepstream/deepstream-image-decode-test/yolov4-tiny_fp16.engine
0:00:05.204766868  4448   0x5598ef7520 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:dstest_image_decode_pgie_config.txt sucessfully
Running...
NvMMLiteBlockCreate : Block : BlockType = 256 
[JPEG Decode] BeginSequence Display WidthxHeight 2176x1384
Frame Number = 0 Number of objects = 0 Vehicle Count = 0 Person Count = 0
0:00:05.713651531  4448   0x559899b400 WARN                 nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop:<primary-nvinference-engine> error: Internal data stream error.
0:00:05.713703198  4448   0x559899b400 WARN                 nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop:<primary-nvinference-engine> error: streaming stopped, reason error (-5)
ERROR from element primary-nvinference-engine: Internal data stream error.
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1975): gst_nvinfer_output_loop (): /GstPipeline:dstest-image-decode-pipeline/GstNvInfer:primary-nvinference-engine:
streaming stopped, reason error (-5)
Returned, stopping playback
[JPEG Decode] NvMMLiteJPEGDecBlockPrivateClose done
[JPEG Decode] NvMMLiteJPEGDecBlockClose done
Deleting pipeline

@marcoslucianops
Owner

This is the log if I use one image instead of 2 images for inference. The command I used is ./deepstream-image-decode-app image1.jpg.

The error in the 2-image case occurs because you need to build the yolov4 engine with batch-size equal to the number of images. For 2 images, you need batch-size=2.
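This matches nvinfer's own warning earlier in the log ("Overriding infer-config batch-size (1) with number of sources (2)"). A quick pre-flight sanity check can be scripted; this is a sketch, assuming the pgie config uses the standard key=value format:

```python
import configparser

def check_batch_size(config_text, num_sources):
    """Return True if the nvinfer config's batch-size can cover the
    number of sources. nvinfer overrides a smaller batch-size at
    runtime, but a serialized engine built with a smaller max batch
    then fails to match and DeepStream tries to rebuild it."""
    cfg = configparser.ConfigParser()
    cfg.read_string(config_text)
    batch = cfg.getint("property", "batch-size", fallback=1)
    return batch >= num_sources

# illustrative config fragment, mirroring the thread's setup
pgie = """
[property]
batch-size=1
"""
print(check_batch_size(pgie, 2))  # → False: a b1 engine cannot serve 2 sources
```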

And in this case, how do I set the source and sink, and how do I save the output?

For the single image, it seems to be a sink error. But editing that requires more DeepStream skills. I will test on my board to see what's needed to run it.

@Justin-king-de
Author

Hi @marcoslucianops. As you rightly mentioned, my yolov4-tiny engine was created with batch size 1, which is why that error appears for 2 images. But for one image it should have worked. As you said, for one image there seems to be some problem with the sink; it would be very helpful if you could solve this. Also, how do I save an output image after inference? In the case of a video, we can specify the output in the config file, but here there is no such config file.

@marcoslucianops
Owner

marcoslucianops commented Nov 17, 2020

Hi @Justin-king-de.

I tested the deepstream-app and deepstream_image_decode_app today for your case. Based on your needs, you should use deepstream_image_decode_app with a dsexample implementation to process multiple images and save the processed images. I can't teach you how to use/add dsexample here because it's a long process, but this link will help you (I recommend looking at the NVIDIA developer forum for how to implement the dsexample pipeline and edit custom apps).

https://forums.developer.nvidia.com/t/how-to-crop-the-image-and-save/80174#5375174

For simple usage, use deepstream-app. You can set this configuration for the source and sink in the deepstream_app_config_yoloV4_tiny.txt file (but it only works for one image at a time).

[source0]
enable=1
type=2
gpu-id=0
cudadec-memtype=0

[sink0]
enable=1
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

Then run this command:

deepstream-app -c deepstream_app_config_yoloV4_tiny.txt -i /path/of/your/image/image.jpg

Processing and saving processed videos/streams is easier than processing images (in DeepStream).

@Justin-king-de
Author

Ok @marcoslucianops . I will look at it. Thanks for your time.

@marcoslucianops
Owner

marcoslucianops commented Nov 23, 2020

@Justin-king-de, this post may help you.

#12 (comment)

@Justin-king-de
Author

ok @marcoslucianops . Thank you.

@huytranvan2010

I can run inference on a PC but not on a Jetson Xavier with the same config. I got this error:

(deepstream-app:60096): GLib-GObject-WARNING **: 18:19:50.534: g_object_set_is_valid_property: object class 'GstNvJpegDec' has no property named 'DeepStream'
** INFO: <bus_callback:225>: Pipeline running

ERROR from typefind: Internal data stream error.
Debug info: gsttypefindelement.c(1228): gst_type_find_element_loop (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstURIDecodeBin:src_elem/GstDecodeBin:decodebin0/GstTypeFindElement:typefind:
streaming stopped, reason not-negotiated (-4)
nvstreammux: Successfully handled EOS for source_id=0
** INFO: <bus_callback:262>: Received EOS. Exiting ...

Quitting
App run failed

I tried JetPack 5.1 and JetPack 5.1.1, but the error was still there. @marcoslucianops, could you please give me some advice? Thanks.

Here are my configs.
For deeepstream.txt:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
gie-kitti-output-dir=/home/jetson/old_repo/DeepStream-Yolo/output_data

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
#type=3 - video
type=2
#uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
#uri=file:///home/jetson/old_repo/DeepStream-Yolo/0.jpg
#num-sources=1
gpu-id=0
cudadec-memtype=0

[sink0]
enable=1
#type=2 - display 
type=1
sync=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=5
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV7_1_image.txt

[tests]
file-loop=0

For config_infer_primary_yoloV7_1_image.txt:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=yolov7.cfg
model-file=yolov7.wts
model-engine-file=model_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300

The engine model model_b1_gpu0_fp32.engine was generated beforehand, so it is deserialized immediately (not built again).
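The b1 / gpu0 / fp32 parts of that file name encode the build parameters (batch size, GPU id, precision). A small sketch parsing the model_b{batch}_gpu{id}_{precision}.engine naming convention, assuming the generated engine names follow it:

```python
import re

def engine_params(filename):
    """Extract batch size, GPU id and precision from an engine file
    name of the form model_b{batch}_gpu{id}_{precision}.engine.
    Returns None for names that do not follow the convention."""
    m = re.match(r"model_b(\d+)_gpu(\d+)_(fp32|fp16|int8)\.engine$", filename)
    if not m:
        return None
    return {"batch": int(m.group(1)), "gpu": int(m.group(2)), "precision": m.group(3)}

print(engine_params("model_b1_gpu0_fp32.engine"))
```

Checking the b value against the streammux/nvinfer batch-size before launching avoids the "failed to match config params, trying rebuild" path seen earlier in this thread.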

@marcoslucianops
Owner

In the deeepstream.txt file, both of the uri lines are commented out. You need to set one of them.

#uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
#uri=file:///home/jetson/old_repo/DeepStream-Yolo/0.jpg

Your config_infer_primary_yoloV7_1_image.txt file is also incorrect. The updated files from the repo don't work with the wts and cfg files.

@marcoslucianops
Owner

Another thing: deepstream-app doesn't work well with images. You need to create your own code with the correct plugins to get better support.

@huytranvan2010

In the deeepstream.txt file, both of the uri lines are commented out. You need to set one of them.

#uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
#uri=file:///home/jetson/old_repo/DeepStream-Yolo/0.jpg

Your config_infer_primary_yoloV7_1_image.txt file is also incorrect. The updated files from the repo don't work with the wts and cfg files.

Sorry, I uncommented the uri line for the image, but it doesn't work. I use an old version of the repo. I run inference on 1 image because I want to save the predictions, so that I can generate a COCO file from many images to calculate mAP.

@marcoslucianops
Owner

I run inference on 1 image because I want to save the predictions, so that I can generate a COCO file from many images to calculate mAP.

To run inference on images, it's better to use the jpegdec plugin (GStreamer) instead of the nvjpegdec plugin (NVIDIA), but you can't set it in deepstream-app. nvjpegdec has issues in some cases depending on the width and height of the image.

@huytranvan2010


Thanks. It turns out that my images have a format that is not supported by the current GStreamer. NVIDIA will fix it in the next release.

@huytranvan2010


@marcoslucianops Could you share a pipeline in DeepStream Python Apps to run inference on images from COCO to evaluate the engine model? Any suggestion would be valuable to me. Thanks.

@marcoslucianops
Owner

filesrc/multifilesrc -> jpegdec -> nvvideoconvert -> nvstreammux -> nvinfer -> fakesink
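For a quick test outside a custom app, that element chain can be tried as a gst-launch-1.0 line. Below is a sketch that assembles one; the caps string, pad name and property values are illustrative assumptions, not a verified command:

```python
def build_launch(pattern, pgie_config, width=1920, height=1080):
    """Assemble the suggested element chain into a gst-launch-1.0
    command string: multifilesrc -> jpegdec -> nvvideoconvert ->
    nvstreammux -> nvinfer -> fakesink. nvstreammux uses request
    pads, so the chain links into its named sink_0 pad."""
    elements = [
        f'multifilesrc location="{pattern}"',
        "jpegdec",
        "nvvideoconvert",
        '"video/x-raw(memory:NVMM)"',
        f"m.sink_0 nvstreammux name=m batch-size=1 width={width} height={height}",
        f'nvinfer config-file-path="{pgie_config}"',
        "fakesink",
    ]
    return "gst-launch-1.0 " + " ! ".join(elements)

cmd = build_launch("img_%04d.jpg", "dstest_image_decode_pgie_config.txt")
print(cmd)
```

For batch evaluation (the mAP use case above), swapping fakesink for a probe or appsink in a Python (pyds) app would let you collect the detections per frame.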
