
Making sure the last recorded frame is actually the last one I get in the output #513

Closed
RobertLucian opened this issue Oct 4, 2018 · 5 comments

@RobertLucian (Contributor)

I've got a time-sensitive mechanism that needs to be sure that the last frame it received is actually the most recent one recorded up to that moment. I'm using the start_recording method and passing it an output object of a custom class I wrote that implements write and flush methods.

The write method resembles this (it's pseudocode):

def write(self, frame):
    # Record the state the scene was in when this frame arrived.
    metadata = get_state_of_current_frame()
    self.queue.put((metadata, frame))

    # Change what the camera should see in the next frame.
    change_targeted_state_of_next_frame()

Basically, I'm changing what the camera sees (like showing a unicorn instead of a bear) for the next frame it records and passes to the write method. What I need is the assurance that the next frame delivered will show whatever change_targeted_state_of_next_frame() set up. Is this possible in this setting?
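
To make it concrete, the pieces fit together roughly like this (just a sketch; the two helper functions are the placeholders from the pseudocode above, stubbed out here so it runs):

    import queue
    from picamera import PiCamera

    def get_state_of_current_frame():
        # Placeholder: return whatever describes the scene right now (e.g. LED colours).
        return {}

    def change_targeted_state_of_next_frame():
        # Placeholder: change the scene for the next frame (e.g. set new LED colours).
        pass

    class StateTaggedOutput:
        """Custom output with the write()/flush() interface start_recording expects."""

        def __init__(self):
            self.frames = queue.Queue()

        def write(self, buf):
            metadata = get_state_of_current_frame()
            self.frames.put((metadata, buf))
            change_targeted_state_of_next_frame()
            return len(buf)

        def flush(self):
            # Called once when recording stops; nothing to clean up here.
            pass

    camera = PiCamera()
    camera.start_recording(StateTaggedOutput(), format='h264')
    camera.wait_recording(10)
    camera.stop_recording()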

Hopefully, I've been as clear as possible 😃

Thank you!

@waveform80 (Owner)

Sorry, I don't think that's possible from the layer you're in.

In the recording pipeline that picamera sets up there's a set of buffers that each component uses to ensure things run smoothly without stalling. The firmware's defaults are sufficiently large to ensure that a write callback can take a little longer (not uncommon for things like SD card output) without preventing the camera from having a buffer available to capture another frame. You don't have control over this from the picamera interface, unless you go down to the mmalobj layer and start allocating all the components and buffers yourself (I don't think even picamera specifies the buffer counts - just uses the firmware's defaults).

Anyway, all this means that by the time a buffer reaches your write method it's pretty much guaranteed not to be the latest one captured: for that to be the case there couldn't be any spare buffers in the pipeline, and then any write method that took too long would stall the camera and stop it capturing the next frame. Sorry!

@RobertLucian (Contributor, Author)

I also tried getting a signal from the camera in an attempt to synchronize the frames (that was before you replied here) and I gotta say you were right. This is what I tried:
https://www.raspberrypi.org/forums/viewtopic.php?t=190314

Anyway, as a final option, I resorted to using the capture_sequence method (with use_video_port=True) and increasing the framerate to reduce the time it takes to return the output. The maximum I'm able to get while doing the "synchronization" is ~6.7 FPS. Of course, if I just capture frames one after another it's much faster, but when I need to be sure of what I'm getting, this is the rate I end up with. Luckily, 6.7 is pretty much enough for the application I'm working on, but it would have been cool to do it at 20 or 30. (Long story short, I'm switching some LEDs to different colors and using the camera to validate that their response matches what they should show.)
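
Roughly, the structure looks like this (a sketch rather than my exact code; set_leds_to_next_colour and validate_frame stand in for the LED-switching and validation logic):

    import io
    from picamera import PiCamera

    def set_leds_to_next_colour():
        pass  # placeholder: switch the LEDs to the next colour under test

    def validate_frame(stream):
        pass  # placeholder: check the JPEG in `stream` against the expected colours

    def outputs(n_frames):
        # capture_sequence advances this generator between captures, so the LED
        # change and the validation of the previous frame happen between frames.
        for _ in range(n_frames):
            set_leds_to_next_colour()
            stream = io.BytesIO()
            yield stream
            stream.seek(0)
            validate_frame(stream)

    with PiCamera(resolution=(480, 272), framerate=30) as camera:
        camera.capture_sequence(outputs(100), format='jpeg', use_video_port=True)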

Thanks for your help, Dave. I think it's all clear for me now.

@6by9 (Collaborator) commented Oct 8, 2018

As waveform80 says, you're at the end of a pipeline, so it's a touch tricky to judge the latency of a buffer through the system.

As a rough estimate for 1080P, the frame takes whatever exposure time is programmed on the sensor, and then ~31ms to read out.
As long as you aren't transposing, the ISP starts processing the frame as soon as it has some lines to work on, so it completes a few milliseconds after the last lines are in from the sensor.
For either H264 or MJPEG, the encoder only starts on a frame once the whole frame is available. H264 takes about 40ms to process (two frames can be in the pipe at a time, as CABAC/CAVLC is independent of the motion estimation phase).
Delivering the frame to the application is then fairly minimal.
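
To put very rough illustrative numbers on that (assuming a 30ms exposure at 1080P; these aren't measurements, just the figures above added up):

    30ms exposure + ~31ms readout + ~5ms ISP + ~40ms encode ≈ ~106ms

from the start of exposure to the encoded frame reaching the application, before any safety margin.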

I recall there was a forum thread where I actually measured the numbers - ah, https://www.raspberrypi.org/forums/viewtopic.php?t=153410&p=1027417#p1004792. You don't state your resolution, but your 6.7fps works out to ~149ms per frame, which would be about right for 1080P with a 30ms exposure time and a small safety margin.

Can you work the other way around? Every buffer has a timestamp, and you can retrieve the current system time from the GPU (I forget the call). So at the point you make your change (point X) you can read the current system time; all buffers with timestamps up to that value will be from before your change.
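
If I remember the picamera side correctly, the GPU clock is exposed as camera.timestamp and each frame's capture time as camera.frame.timestamp; a rough sketch of the idea (apply_change is a placeholder for whatever alters the scene):

    from picamera import PiCamera

    def apply_change():
        pass  # placeholder: switch the LEDs (or otherwise change the scene)

    class TimestampedOutput:
        """Stores each encoded frame together with the camera's timestamp for it."""

        def __init__(self, camera):
            self.camera = camera
            self.frames = []

        def write(self, buf):
            # camera.frame.timestamp is the STC time (in microseconds) of the frame
            # being written; it can be None for things like SPS/PPS headers.
            self.frames.append((self.camera.frame.timestamp, buf))
            return len(buf)

        def flush(self):
            pass

    camera = PiCamera()
    output = TimestampedOutput(camera)
    camera.start_recording(output, format='h264')

    apply_change()                   # the change made at "point X"
    change_stc = camera.timestamp    # GPU clock (microseconds) at that moment

    camera.wait_recording(1)
    camera.stop_recording()

    # Every stored frame whose timestamp is below change_stc was captured before
    # the change; frames at or above it were captured afterwards.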

@RobertLucian (Contributor, Author)

So right now the resolution I'm using is 480x272 and the exposure time is set to 3ms - the LEDs I'm testing are bright enough to compensate for the short exposure, which also has the advantage of filtering out nearby lights. So theoretically the latency could be estimated at ~(31 + 3)ms.
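
For reference, setting that up in picamera looks roughly like this (a sketch using the values above plus the usual picamera recipe for fixing exposure; the ISO value and settling delay are my assumptions):

    from time import sleep
    from picamera import PiCamera

    camera = PiCamera(resolution=(480, 272), framerate=30)
    camera.iso = 100               # keep the gain low; the LEDs are bright anyway
    sleep(2)                       # let the auto-gain settle before locking it
    camera.shutter_speed = 3000    # exposure time in microseconds (3 ms)
    camera.exposure_mode = 'off'   # lock the gains so AE can't override the setting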

For the time being, 6.7 frames/s is enough. As for retrieving the timestamps from the buffers, I'm going to leave that for later, when getting more out of it becomes necessary - it's a good idea that's worth trying out anyway.

Thank you 6by9.

@waveform80 (Owner)

Closing for now; do feel free to re-open if you've further questions about this!
