
Take into account displayable framebuffers when flushing #32

Merged (1 commit) on Feb 9, 2020

Conversation

dougnazar
Contributor

Without keeping proper track of the available framebuffers,
num_used_framebuffers can wrap, causing the library to always think
it can decode a frame, but the VPU fails, eventually leading to a
pipeline abort.

Signed-off-by: Doug Nazar <nazard@nazar.ca>

@dougnazar
Contributor Author

I'd been seeing this issue for a long time but assumed it was due to the crappy format most of our files were in. I recently started re-encoding everything and began looking into it. The failure mode was several thousand repetitions of

imxvpuapi imxvpuapi_vpulib.c:2182:imx_vpu_dec_decode: not enough output framebuffers were available

followed by

imxvpuapi imxvpuapi_vpulib.c:1744:imx_vpu_dec_push_input_data: could not update bitstream buffer with new data: invalid parameters
imxvpudecoder decoder.c:605:gst_imx_vpu_decoder_handle_frame:<imxvpudecoder49> failed to decode: invalid params

which would eventually cause the demuxer to throw an "Internal data stream error". I spent too much time thinking there was an issue in the demuxer.

Looks like this is also in v2, but I haven't had a chance to test it.

@dv1
Collaborator

dv1 commented Aug 4, 2019

Good catch, though I think that it makes more sense to also flush the displayable frames. After all, we are talking about a "flush" function. It is supposed to flush everything there is. So, maybe, instead of adding a check for displayable frames, it would be better to change the existing if-block to check that the mode isn't FrameMode_Free. That way, the rest of the block will clear the frame properly, though I am currently not sure if vpu_DecGetOutputInfo() also needs to be called if there's a displayable frame.
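(For illustration only, a minimal sketch of what that alternative flush could look like, assuming a per-framebuffer mode array and a FrameMode_Free value as mentioned above. All type and field names here are stand-ins, not the actual definitions from imxvpuapi_vpulib.c.)

```c
#include <stddef.h>

/* Toy model of the alternative flush suggested above: clear every
 * framebuffer whose mode is not FrameMode_Free, instead of only the
 * decoded-but-not-displayed ones. Names are illustrative. */
typedef enum
{
	FrameMode_Free,
	FrameMode_ReservedForDecoding,
	FrameMode_ContainsDisplayableFrame
}
FrameMode;

typedef struct
{
	FrameMode *frame_modes;
	size_t num_framebuffers;
	size_t num_used_framebuffers;
}
Decoder;

static void flush_all_framebuffers(Decoder *decoder)
{
	for (size_t i = 0; i < decoder->num_framebuffers; ++i)
	{
		if (decoder->frame_modes[i] == FrameMode_Free)
			continue;

		/* Release the framebuffer and mark it free again. Whether
		 * vpu_DecGetOutputInfo() would also have to be called for a
		 * displayable frame is the open question in the comment above. */
		decoder->frame_modes[i] = FrameMode_Free;
	}
	decoder->num_used_framebuffers = 0;
}
```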

@dougnazar
Contributor Author

Unless I'm missing something, with the exception of available_decoded_frame_idx, the displayable frames are "owned" by the user until imx_vpu_dec_mark_framebuffer_as_displayed() is called. In the case of gstreamer they are still in the pipeline awaiting display.

Usually the issue is masked since, 99% of the time, there was only a single frame in flight: the decoder would grab a frame after the flush but before the first was released, so the counter would never wrap. However, if there was ever more than one frame in flight, it could wrap.

The flush probably should handle the available_decoded_frame_idx case, just for robustness. I didn't at the time, since gstreamer always calls imx_vpu_dec_get_decoded_frame() after imx_vpu_dec_decode().
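(To make the wrap concrete, here is a standalone toy, not the library's actual bookkeeping, showing what can happen when a flush resets the used-framebuffer count while two decoded frames are still held downstream and only get marked as displayed later. The numbers are made up.)

```c
#include <stdio.h>

int main(void)
{
	unsigned int num_framebuffers      = 6;
	unsigned int num_used_framebuffers = 2; /* two frames still held by the pipeline */

	/* The flush forgets about the in-flight frames... */
	num_used_framebuffers = 0;

	/* ...but downstream later marks both as displayed, decrementing twice. */
	num_used_framebuffers--;
	num_used_framebuffers--;                /* 0 - 2 wraps on an unsigned type */

	/* Both values are now meaningless; the derived "free" count even
	 * exceeds the total number of framebuffers. */
	printf("used = %u, 'free' = %u\n",
	       num_used_framebuffers,
	       num_framebuffers - num_used_framebuffers);
	return 0;
}
```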

@dv1
Collaborator

dv1 commented Aug 4, 2019

Hm, good point. Then the best approach is to do it your way, and add a note to the flush documentation stating that framebuffers which haven't been marked as displayed yet remain unmarked after the flush.

Also, this is not an issue in v2 with the i.MX6 (since there, it no longer exports the framebuffers from the CODA VPU pool), but it may be one with the i.MX8. I will look into that.

@dougnazar
Contributor Author

Yeah, I see the copy in imx_vpu_api_dec_get_decoded_frame(). I'd only briefly checked the flush function before. Oops ;-)

Added the clearing of any available_decoded_frame_idx frame, an assert to make sure we don't wrap, and a note about the flushing behavior.
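(A hypothetical sketch of the kind of guard described here; the actual change lives in this pull request's commit and may be structured differently.)

```c
#include <assert.h>

/* Hypothetical guard against the counter wrap this PR fixes:
 * never decrement the used-framebuffer count below zero. */
static void release_one_framebuffer(unsigned int *num_used_framebuffers)
{
	assert(*num_used_framebuffers > 0);
	(*num_used_framebuffers)--;
}
```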
