How to do inference for multiple images? #11
Hi. Multiple inferences means more than one model doing inference on the source (for example, two pre-trained YOLO models running inference simultaneously or in cascade).
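As a rough illustration of the "in cascade" case, this is the shape it takes in a deepstream-app configuration file (the section names and the operate-on-gie-id key are the standard deepstream-app ones; the config file names are placeholders):

```ini
[primary-gie]
enable=1
# first model: detects objects on the full frame
config-file=config_infer_primary.txt

[secondary-gie0]
enable=1
# second model: runs only on objects produced by the primary GIE
operate-on-gie-id=1
config-file=config_infer_secondary.txt
```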
Thanks @marcoslucianops for your reply. I read that there is a way to run DeepStream inference on multiple images from a directory using multifilesrc, but I am unable to implement it. Here is the link to that article: https://forums.developer.nvidia.com/t/deepstream-run-it-on-a-single-image/124946 Could you please look at the above link, where this is discussed, and guide me on how to do inference for multiple images? It would be a great help. There is also something called deepstream-image-decode-test; could you please look into it? Thank you in advance.
To run the demo, you need to put your model files (yolo.weights, yolo.cfg and config.txt) and your images in the deepstream-image-decode-test folder and run the commands.
If DeepStream can't create the YOLO engine, create the engine using deepstream-app and move it to the deepstream-image-decode-test folder.
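A hedged sketch of those commands, assuming DeepStream's standard sample layout (the CUDA_VER value and image file names vary by install and platform):

```sh
cd deepstream-image-decode-test
# build the sample app; CUDA_VER must match your JetPack/CUDA install
CUDA_VER=11.4 make
# run inference, passing one or more JPEG images as arguments
./deepstream-image-decode-app image1.jpg image2.jpg
```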
Thank you very much @marcoslucianops for your reply. I have some queries. You mentioned moving the model files into the deepstream-image-decode-test folder, including config.txt. Does config.txt mean labels.txt?
I forgot labels.txt and the nvdsinfer_custom_impl_Yolo folder; you need to put them in the deepstream-image-decode-test folder too. The config.txt file is config_infer_primary_yolo.txt edited for your YOLO files. To use yolov4-tiny, see my tutorial. If you already have an engine, you will not need the weights and cfg files (for yolov4-tiny); you only need to set the engine path and labels path in the config.txt file.
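For illustration, the engine-only setup described above could look like this in config.txt (the file names are placeholders; model-engine-file and labelfile-path are standard nvinfer keys, and the other [property] keys from config_infer_primary_yolo.txt stay as they are):

```ini
[property]
# pre-built TensorRT engine; weights/cfg are not needed when this exists
model-engine-file=yolov4-tiny.engine
# class names, one per line
labelfile-path=labels.txt
```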
Yes, but the config_infer_primary_yoloV4_tiny.txt file needs to be modified for your files.
No, because all the DeepStream configurations are in the deepstream_image_decode_app.c file. If you want, you can edit it as needed.
If it was already compiled, you don't need to compile it again.
@marcoslucianops Thanks for your patience and time. I will try it the way you said and will drop a message here if I face any issues.
Hey @marcoslucianops, I followed all your steps and ran into the error below. I put my TensorRT engine file along with labels.txt, config.txt and nvdsinfer_custom_impl_Yolo into the folder and followed your commands. The problem starts from the point where it says it can't deduce the YOLO type from the config file name.
Rename the config.txt file to dstest_image_decode_pgie_config.txt.
@marcoslucianops I had already done that, and it's still not working. I followed all the steps you mentioned.
Can you send me the DeepStream log (from when you try to run it)?
Yeah @marcoslucianops. Below is the log for 2 images. I used this command:
@marcoslucianops This is the log if I use one image instead of two for inference. The command I used is:
The error in this case is because you need to build yolov4.engine with batch-size equal to the number of images. For 2 images, you need to use batch-size=2.
In the one-image case, it seems to be a sink error. Fixing it needs more DeepStream experience; I will test on my board to see what's needed to run it.
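In nvinfer config terms, the batch-size fix described above amounts to this sketch (batch-size is the standard nvinfer [property] key; the engine must be rebuilt after changing it):

```ini
[property]
# must match the number of images fed in one run (2 images here)
batch-size=2
```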
Hi @marcoslucianops. As you rightly mentioned, my yolov4-tiny engine was created with batch-size=1 for 2 images, which is why that error occurred. But for one image it should have worked; as you said, there seems to be a problem with the sink. It would be helpful if you could solve this. Also, how do I save an output image after inference? For a video we can specify the output in the config file, but here there is no such config file.
Hi @Justin-king-de. I tested the deepstream-app and deepstream_image_decode_app for your case today. Based on your needs, you should use deepstream_image_decode_app with a dsexample implementation to process multiple images and save the processed images. I can't teach you how to use/add dsexample here, because it's a long process, but this link will help you (I recommend searching the NVIDIA developer forum for how to implement the dsexample pipeline and edit custom apps): https://forums.developer.nvidia.com/t/how-to-crop-the-image-and-save/80174#5375174 For simple usage, use deepstream-app. You can set this configuration for the source and sink in the deepstream_app_config_yoloV4_tiny.txt file (but it only works for one image at a time).
And run this command.
Processing and saving processed videos/streams is easier than processing images (in DeepStream).
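As a sketch of what a one-image-at-a-time source/sink configuration could look like in deepstream_app_config_yoloV4_tiny.txt (the type values follow the deepstream-app reference, where source type=2 is a single URI and sink type=3 writes the rendered output to a file; the paths are placeholders):

```ini
[source0]
enable=1
# type=2: single URI source; points at one image at a time
type=2
uri=file:///path/to/image.jpg
num-sources=1

[sink0]
enable=1
# type=3: encode the rendered output to a file
type=3
container=1
codec=1
output-file=output.mp4
```

It would then be run with `deepstream-app -c deepstream_app_config_yoloV4_tiny.txt`.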
Ok @marcoslucianops . I will look at it. Thanks for your time. |
@Justin-king-de, this post may help you.
ok @marcoslucianops . Thank you. |
I can run inference on my PC but not on a Jetson Xavier with the same config; I get an error.
I tried JetPack 5.1 and JetPack 5.1.1, but the error persists. @marcoslucianops, can you give me any advice? Thanks. Here is my config.
For
The engine model is generated beforehand.
In the
Your
Another thing is: the
Sorry, I uncommented the line for the URI of the images, but it doesn't work; I use an old version of the repo. I run inference on 1 image at a time because I want to save the predictions, so that I can generate a COCO file from many images to calculate mAP.
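Since the end goal here is mAP via a COCO results file, a minimal hedged sketch of that last step in Python may help. The `to_coco_results` name and the `detections` structure are assumptions for illustration, not part of DeepStream; in practice, the per-image detections would come from parsing nvinfer's output metadata:

```python
import json

def to_coco_results(detections):
    """Convert per-image detections into the COCO "results" format.
    `detections` is assumed to be a list of dicts with the keys
    image_id, category_id, bbox and score (hypothetical structure)."""
    results = []
    for det in detections:
        results.append({
            "image_id": det["image_id"],
            "category_id": det["category_id"],
            # COCO bboxes are [x, y, width, height] in absolute pixels
            "bbox": [round(float(v), 2) for v in det["bbox"]],
            "score": round(float(det["score"]), 4),
        })
    return results

# Example input: one made-up detection for one image
dets = [{"image_id": 1, "category_id": 0,
         "bbox": [10.0, 20.0, 30.0, 40.0], "score": 0.9}]
with open("coco_results.json", "w") as f:
    json.dump(to_coco_results(dets), f)
```

The resulting JSON can then be evaluated against the ground-truth annotations with pycocotools' COCOeval.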
To run inference on images it's better to use
Thanks. It turns out that my images have a format that is not supported by the current GStreamer; NVIDIA will fix it in the next release.
@marcoslucianops Could you share a pipeline in DeepStream Python Apps to run inference on images from COCO to evaluate the engine model? Any suggestion is valuable to me. Thanks.
filesrc/multifilesrc -> jpegdec -> nvvideoconvert -> nvstreammux -> nvinfer -> fakesink |
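That pipeline, written out as a gst-launch sketch (a starting point only: the resolution, the image path pattern, the caps, and the nvinfer config path are all assumptions, and the DeepStream Python apps build this same chain programmatically):

```sh
# Sketch only: paths, caps and dimensions below are assumptions
gst-launch-1.0 \
  nvstreammux name=m batch-size=1 width=1920 height=1080 \
    ! nvinfer config-file-path=config_infer_primary.txt \
    ! fakesink \
  multifilesrc location="images/%06d.jpg" caps="image/jpeg,framerate=1/1" \
    ! jpegdec ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! m.sink_0
```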
Hello @marcoslucianops. Thank you for sharing your work. In the MULTIPLE-INFERENCES.md file, what is meant by primary inference and secondary inference? What's the difference between them? Also, I want to run my tiny-yolov4 model on multiple images; how do I do that? Thanks in advance.