
Streaming from the net introduces latency or requires separate thread #1

phoboslab opened this issue Aug 8, 2019 · 1 comment



@phoboslab phoboslab commented Aug 8, 2019

When feeding a plm_buffer() from the network (or slow media), there's no way to tell the buffer that no data is available at this time but may become available later.

This forces you to either decode a frame only when you are sure that the data for the whole frame is available (the video decoder can't pause in the middle of a frame), or to run the decoder on a separate thread and busy-wait in the plm_buffer_callback until more data is available.

Making sure that enough data is available for decoding a full frame is not straightforward, because we don't know the size of a frame until it has been fully decoded, or until we find the PICTURE_START code of the next frame. This introduces unnecessary latency for streaming.

The problem is described in more detail in this blog post towards the end.

This issue is meant for discussion of the problem and possible solutions.


@andyjpb andyjpb commented Aug 8, 2019

Perhaps a non-blocking I/O pattern would be useful here. The demuxer can return either a packet or EAGAIN. If the decoder gets EAGAIN from the demuxer, it does nothing and waits to be called again.
This would probably involve changing the decoder's interface as well, so that the user can either ask for a frame if one is ready, or ask the decoder to block until the next frame is ready.

I guess that the wire format doesn't include the data size because doing so would force a frame of latency at the encoder, whereas this way the decoder has more implementation options.

