
Can one access both frames from the OPQV-dual stream and output the difference in real time? #715

Open
leoshmu opened this issue Feb 1, 2022 · 2 comments


@leoshmu

leoshmu commented Feb 1, 2022

In the Read the Docs documentation there is mention of Output 1, which stores pointers to two subsequent frames and is used to estimate motion: https://picamera.readthedocs.io/en/release-1.13/api_mmalobj.html#opaque-format

I'm wondering if it's possible to access these two frames in real time and output the difference between them? I have been able to use the hardware pulse to create a workable LED strobe effect where one frame has LED on and the next has LED off, and I'd love to be able to efficiently take the difference between those frames. OpenCV certainly is one way to do this, but naively it seems that if I had access to the 2 frames already stored by the dual-frame format then I'd be able to much more quickly obtain the signal I need.

I haven't found any examples of explicitly using the 2 frames from the dual-frame format, any help is appreciated!

@6by9
Collaborator

6by9 commented Feb 1, 2022

It's not subsequent frames, it's different resolutions of the same frame.

The H264 codec needs to do a motion search, so it does a coarse search against a lower resolution version of the image to get roughly the right candidate, and then a fine search to refine that.
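To illustrate the idea (this is not the firmware's implementation, just a minimal NumPy sketch of a generic coarse-to-fine block-matching search; all function names here are hypothetical):

```python
import numpy as np

def downscale(img, s):
    """Downscale by integer factor s using simple block averaging."""
    h, w = img.shape[0] // s * s, img.shape[1] // s * s
    return img[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

def search(ref, cur, top, left, size, center, radius):
    """Exhaustive block match: find the (dy, dx) within `radius` of
    `center` whose block in `ref` best matches cur's block at (top, left)."""
    block = cur[top:top + size, left:left + size]
    best_cost, best_mv = None, center
    for dy in range(center[0] - radius, center[0] + radius + 1):
        for dx in range(center[1] - radius, center[1] + radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > ref.shape[0] or x + size > ref.shape[1]:
                continue
            cost = sad(ref[y:y + size, x:x + size], block)
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv

def coarse_to_fine(ref, cur, top, left, size=16, scale=4):
    """Stage 1: coarse search on downscaled frames to get a rough candidate.
    Stage 2: refine the scaled-up candidate at full resolution."""
    cdy, cdx = search(downscale(ref, scale), downscale(cur, scale),
                      top // scale, left // scale, size // scale,
                      (0, 0), radius=4)
    return search(ref, cur, top, left, size,
                  (cdy * scale, cdx * scale), radius=scale)
```

The coarse stage covers a large displacement cheaply because each step at the low resolution corresponds to `scale` pixels at full resolution; the fine stage then only has to examine a small window around the candidate.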

And no, there is no way to unpack MMAL_ENCODING_OPAQUE buffers from the ARM side. They include a handle to the buffer, but resolving that handle uses a lookup table which exists only within the firmware.

@leoshmu
Author

leoshmu commented Feb 1, 2022

Thank you so much - that makes sense.
I am doing something simple with python and cv2, keeping track of the prior frame and doing an absolute value subtraction for each frame.
I wonder if a custom encoder would be a better solution. I'd greatly value your input, but I understand if this is beyond the scope of the current question and you don't have time to consider it!

# capture and prior_frame are initialized as numpy arrays the size of the frame
for frame in camera.capture_continuous(capture, format='bgra', use_video_port=True):
    frame_diff = cv2.absdiff(frame, prior_frame)
    # frame_diff can be displayed or processed further here
    prior_frame = frame.copy()
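One pitfall if you swap the `cv2.absdiff` call for plain NumPy arithmetic: uint8 subtraction wraps around modulo 256. A minimal NumPy-only sketch of the same frame difference, using small hypothetical stand-in frames:

```python
import numpy as np

# Hypothetical stand-in frames; with picamera these would be two successive
# captures (e.g. LED-on and LED-off) as uint8 arrays.
frame = np.array([[10, 250], [100, 0]], dtype=np.uint8)
prior_frame = np.array([[200, 5], [90, 255]], dtype=np.uint8)

# Plain uint8 subtraction wraps (10 - 200 -> 66), so widen to int16
# before taking the absolute difference, then cast back.
frame_diff = np.abs(frame.astype(np.int16) - prior_frame.astype(np.int16)).astype(np.uint8)
```

`cv2.absdiff` handles the widening internally, which is why it is the usual choice here.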
