Play video with Jetson-inference #104
Comments
Hi, we would need to replace the gstCamera class with a similar GStreamer class that reads a video file instead of the camera...
On Jul 12, 2017 4:45 AM, curiouser001 <notifications@github.com> wrote:
Hi.
How can we use the video file instead of the image we took from the camera to implement DetectNet?
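A minimal sketch of the kind of launch string that suggestion implies, assuming standard GStreamer elements (filesrc, decodebin) and the Jetson nvvidconv converter; the helper name and exact caps are illustrative guesses, not jetson-inference's actual API:

```cpp
#include <sstream>
#include <string>

// Hypothetical helper: builds a GStreamer launch string that reads a video
// file instead of the onboard camera. filesrc/decodebin/appsink are stock
// GStreamer elements; the caps jetson-inference actually expects may differ.
std::string buildFileLaunchStr(const std::string& path, int width, int height)
{
    std::ostringstream ss;
    ss << "filesrc location=" << path
       << " ! decodebin ! nvvidconv"
       << " ! video/x-raw, width=" << width << ", height=" << height
       << ", format=NV12 ! appsink name=mysink";
    return ss.str();
}
```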
Thank you, I will try.
I was able to do this by passing in an additional command-line argument with a path to a video file and then using a filesrc instead of the nvcamerasrc. You'll have to modify the Create, init, and buildLaunchStr functions in the gstCamera class to accept a string in addition to the other parameters. Bit of a hack, but it works.
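Since the actual modified code isn't posted in the thread, here is only a rough sketch of what that change might look like: branch on whether a file path was supplied, falling back to the camera source otherwise. The signature and element properties are assumptions, not the real gstCamera code:

```cpp
#include <sstream>
#include <string>

// Illustrative version of a modified buildLaunchStr: a non-empty videoPath
// switches the pipeline head from nvcamerasrc to filesrc + decodebin.
std::string buildLaunchStr(int width, int height, const std::string& videoPath)
{
    std::ostringstream ss;
    if (!videoPath.empty())
        ss << "filesrc location=" << videoPath << " ! decodebin ! ";
    else
        ss << "nvcamerasrc fpsRange=\"30 30\" ! ";

    ss << "nvvidconv ! video/x-raw, width=" << width
       << ", height=" << height << ", format=NV12 ! appsink name=mysink";
    return ss.str();
}
```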
@bmaione Would you be so kind as to send me that modified code?
Thank you for your answer. If you could post the modified code, it would be appreciated.
@bmaione - can you share your launch string? I have been pulling my hair out on this one. I am trying to use a raw video file (more details on the file at the bottom).

This is my GStreamer pipeline (the modification I put in gstCamera::buildLaunchStr()). Unfortunately, this pipeline finds frames, but it drops a whole lot of them:

I was guessing at blocksize = 3 * height * width. Without the blocksize, the screen is green. Other googling led me to the videoparse GStreamer plugin, but I can't seem to install it. It should be a "bad" plugin, and I installed them on the TX2 (JetPack 3.1) with:

I took some video with my video camera. I converted it to a raw video file with this command:

It plays fine with:

Any help would really be appreciated!
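For what it's worth, the blocksize = 3 * height * width guess above matches raw packed RGB (3 bytes per pixel); planar 4:2:0 formats like I420/NV12 need only 1.5 bytes per pixel, so the right blocksize depends on which raw format the file actually contains. A small sketch (the helper name is hypothetical):

```cpp
#include <cstddef>

// Size in bytes of one raw video frame, so filesrc's blocksize property can
// be set to exactly one frame. RGB packs 3 bytes per pixel; I420/NV12 are
// planar 4:2:0, i.e. 12 bits (1.5 bytes) per pixel.
std::size_t rawFrameBytes(std::size_t width, std::size_t height, bool rgb)
{
    return rgb ? width * height * 3        // packed RGB: 3 bytes/pixel
               : width * height * 3 / 2;   // I420/NV12: 1.5 bytes/pixel
}
```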
I finally found a solution using raw video in a file. The GStreamer "bad" plugins need to be installed to use videoparse. The pipeline that worked for me is:
NOTE: to install all the GStreamer plugins, use this command:
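The exact pipeline and install command from that comment aren't preserved in this thread, so the following is only a plausible reconstruction of the shape such a pipeline takes: videoparse (from the "bad" plugin set) tells GStreamer the geometry and format of a headerless raw file so the filesrc output can be split into frames. Every property value here is illustrative:

```cpp
#include <sstream>
#include <string>

// Hypothetical launch string for a headerless raw I420 file. videoparse
// supplies the width/height/format/framerate that the file itself lacks.
std::string buildVideoparseLaunchStr(const std::string& path,
                                     int width, int height, int fps)
{
    std::ostringstream ss;
    ss << "filesrc location=" << path
       << " ! videoparse width=" << width << " height=" << height
       << " format=i420 framerate=" << fps << "/1"
       << " ! nvvidconv ! video/x-raw, format=NV12 ! appsink name=mysink";
    return ss.str();
}
```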
@bmaione Hi,
Hi @dawnhowe-sync
@curiouser001 - there should be 2 sets of mallocs for the video frames: one for the incoming NV12 frames (in gstCamera::checkBuffer) and one for the RGBA frames (in gstCamera::ConvertRGBA). I don't see a malloc in the log for the incoming frames. It should be 640 x 480 x 1.5 = 460800 bytes (the 1.5 is because NV12 has a depth of 12 bits per pixel, which is 1.5 bytes). The RGBA frames need 16 bytes per pixel, because you need 4 floats per pixel, so that allocation should be 640 x 480 x 16 = 4915200 bytes. Your log says that you allocated 345600 bytes for the incoming frames. It is impossible for me to say what the problem is because you have modified the code so much; I can't figure out how you could possibly get 345600.
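The arithmetic above can be written out as a quick sanity check (the function names are illustrative, not from gstCamera.cpp):

```cpp
#include <cstddef>

// NV12 input frames: planar 4:2:0, 12 bits = 1.5 bytes per pixel.
std::size_t nv12Bytes(std::size_t w, std::size_t h)
{
    return w * h * 3 / 2;
}

// Converted RGBA frames: 4 float channels per pixel = 16 bytes per pixel.
std::size_t rgbaFloatBytes(std::size_t w, std::size_t h)
{
    return w * h * 4 * sizeof(float);
}
```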
Please attach a copy of a modified gstCamera.cpp that works. I have been trying different options all day with no luck. Thank you. |
Full code: gstCamera.cpp. You need to change the location property to your video filename, and paste the code after: