
use camera with yolo and deepstream #2

Closed
elementdl opened this issue Apr 3, 2020 · 3 comments

@elementdl

Hi @marcoslucianops,

I want to test YOLOv3-tiny with a plug-and-play camera. In deepstream_app_config I changed the source type to 1, but I got an error saying it failed to create_camera_source_bin.

Do you know how to use YOLOv3-tiny with a simple USB camera?

I've only just started using DeepStream. Do you know of any tutorials for getting started with it?

Sincerely,

@marcoslucianops
Owner

Do you know how to use YOLOv3-tiny with a simple USB camera?

Try the USB camera by changing the [source0] section to:

[source0]
enable=1
type=1
# Resolution
camera-width=1280
camera-height=720
# FPS
camera-fps-n=30
camera-fps-d=1
# /dev/video0
camera-v4l2-dev-node=0
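
A note on those settings: camera-v4l2-dev-node=0 corresponds to /dev/video0, so if your camera shows up as /dev/video1 you would set it to 1. After saving the config, the app is launched as usual with deepstream-app; the config filename below is just an example, use whichever file you edited:

deepstream-app -c deepstream_app_config_yoloV3_tiny.txt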

I've only just started using DeepStream. Do you know of any tutorials for getting started with it?

I don't know of any tutorials. To adapt DeepStream to your needs, you need to edit the C/C++ or Python DeepStream app code (deepstream-test1, deepstream-test2, etc.) or edit gst-dsexample (C/C++).

@elementdl
Author

Thanks a lot, it works!

For example, how did you know that I had to make these modifications?
Is there code somewhere that explains how to manipulate DeepStream like this (in deepstream-test1, deepstream-test2, ...)?

Also, the program runs really fast, but the unmodified YOLOv3-tiny doesn't seem to work so well: when I show myself to the webcam, the bounding box that recognizes a person doesn't stay on the display, as if the program doesn't really recognize me. Is that normal? I followed all your steps on GitHub; I just added the yolov3-tiny weights taken directly from Darknet.

Also, I'm a beginner in computer vision and object recognition. What would you recommend for a beginner to practice and understand this domain?

@marcoslucianops
Owner

For example, how did you know that I had to make these modifications?

I read through several topics on the NVIDIA forum to learn.

Is there code somewhere that explains how to manipulate DeepStream like this (in deepstream-test1, deepstream-test2, ...)?

You need to modify the tiler_src_pad_buffer_probe function to do what you need. This function receives the Frame Meta, Object Meta, etc.
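
As a rough sketch of what that probe looks like (modeled on the deepstream-test3 sample app; the field names come from gstnvdsmeta.h and may vary slightly between DeepStream versions):

/* Sketch: iterate the batch metadata attached to each buffer.
 * Modeled on the deepstream-test3 sample; verify the fields against
 * your DeepStream version. */
static GstPadProbeReturn
tiler_src_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

  /* One NvDsFrameMeta per frame in the batch */
  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    /* One NvDsObjectMeta per detected object in that frame */
    for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj != NULL;
        l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      g_print ("frame %d: class %d at (%.0f, %.0f)\n",
          frame_meta->frame_num, obj_meta->class_id,
          obj_meta->rect_params.left, obj_meta->rect_params.top);
    }
  }
  return GST_PAD_PROBE_OK;
}

The probe is attached to the tiler's src pad with gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, tiler_src_pad_buffer_probe, NULL, NULL), so it runs once per buffer and can read or modify the detection metadata before the OSD draws it.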

Also, the program runs really fast, but the unmodified YOLOv3-tiny doesn't seem to work so well: when I show myself to the webcam, the bounding box that recognizes a person doesn't stay on the display, as if the program doesn't really recognize me. Is that normal? I followed all your steps on GitHub; I just added the yolov3-tiny weights taken directly from Darknet.

Try to edit nvdsinfer_custom_impl_Yolo/nvdsparsebbox_Yolo.cpp (lines 376-377) from:

        //{0, 1, 2}}; // as per output result, select {1,2,3}
        {1, 2, 3}};

To:

        {0, 1, 2}};

And see if it makes any difference.
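
For context, those lines sit inside the NvDsInferParseCustomYoloV3Tiny parser; in the NVIDIA sample that block looks roughly like this (exact line numbers vary between versions):

    // Anchors and masks for yolov3-tiny; kMASKS selects which anchor
    // triplet each of the two YOLO output layers uses when decoding boxes.
    static const std::vector<float> kANCHORS = {
        10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319};
    static const std::vector<std::vector<int>> kMASKS = {
        {3, 4, 5},
        {0, 1, 2}};  // try {0, 1, 2} here if {1, 2, 3} gives unstable boxes

So the change only swaps which anchor indices the second output layer is decoded with.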

In my custom model, I have good accuracy, but I haven't properly tested the yolov3-tiny darknet model.

Also, I'm a beginner in computer vision and object recognition. What would you recommend for a beginner to practice and understand this domain?

I'm a beginner in computer vision too. I recommend checking the NVIDIA forum and docs to learn more.

(Sorry for any English errors; it's not my native language.)
