I want to render pictures decoded from a video stream via FFmpeg using SDL.
If I use a software decoder (by default), then I can upload the planes via SDL_UpdateYUVTexture(), and the texture contains the video frame. However, if I use a hardware decoder, then the picture data is not available in main memory.
I could use av_hwframe_transfer_data() to get the frame data (like the FFmpeg sample hw_decode.c does to write raw data to a file), but this is very costly: the data is downloaded then re-uploaded, and may take tens of milliseconds.
Does/could SDL provide a way to render the decoded hardware picture directly?
For example, in VLC, the hardware picture is sent to the video output. For the OpenGL video output, an "interop" uploads the hardware picture directly to OpenGL textures, which are then used from a shader to render with the correct chroma. Several interop implementations are provided (for example VAAPI, VDPAU…).
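To illustrate the "render with the correct chroma" part: once the interop has bound the hardware picture's planes as GL textures (NV12 in this sketch), the fragment shader only has to sample them and apply a YUV-to-RGB matrix. This is a hypothetical minimal shader (BT.601, full-range, for brevity), not VLC's actual code:

```c
/* Hypothetical fragment shader an NV12 interop path might use:
 * sample the luma and interleaved chroma planes, convert to RGB. */
static const char *nv12_frag_src =
    "#version 330 core\n"
    "uniform sampler2D tex_y;\n"   /* R8 luma plane */
    "uniform sampler2D tex_uv;\n"  /* RG8 interleaved chroma plane */
    "in vec2 uv;\n"
    "out vec4 color;\n"
    "void main() {\n"
    "    float y = texture(tex_y, uv).r;\n"
    "    vec2 c = texture(tex_uv, uv).rg - 0.5;\n"
    "    color = vec4(y + 1.402 * c.y,\n"              /* R */
    "                 y - 0.344 * c.x - 0.714 * c.y,\n" /* G */
    "                 y + 1.772 * c.x,\n"               /* B */
    "                 1.0);\n"
    "}\n";
```

A real implementation would also pick the matrix (BT.601/BT.709, limited/full range) from the frame's color properties.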
However, the "hardware picture" contains some private "hardware context", which works in VLC because it handles both the decoding and the rendering. I don't know how SDL could receive a "hardware picture" in a generic way.
This feature would allow very fast decoding + rendering: basically a constant ~1 ms between when the encoded H.264 packet is received and when SDL_RenderPresent() returns. Currently, I use software decoding then SDL_UpdateYUVTexture(), and while decoding time is acceptable on average (~4 ms), it is highly variable, with spikes up to 20 ms (and it consumes more CPU).