Running 2 EGL contexts at once #465
Comments
And now it looks like the funky results are a different issue (maybe?). But the eglSwapBuffers failure still happens: when the 2nd EGL overlay (used for subtitles) is destroyed, the 1st overlay also stops working. When the eglSwapBuffers call fails, eglGetError returns EGL_NOT_INITIALIZED.
Could it be that using 2 EGL contexts at the same time uses a LOT of energy? The funky behavior I'm seeing might be due to the HDMI signal dropping in some way, and when this happens the power LED doesn't even blink.
What do you mean by different screens?
Sorry, different threads. On one screen.
All dispmanx elements are fullscreen, both source and destination. From background to foreground:
Results seemed better than with dispmanx, but when the UI was added on top, everything got trashed, unfortunately.
Which refers to trying to render subtitles with various dispmanx elements.
The HVS (hardware video scaler) is a real-time resource that needs to fetch from memory, format convert, scale and composite the layers. If you keep adding layers then eventually you reach a limit in context memory (used for vertical resizing), display list memory (a linked list of all elements), SDRAM bandwidth or pixel processing bandwidth. If you add
What's the ideal way to render video, subtitles, and UI? It would be nice if there was a way to map an MMAL surface as a texture. Then everything could be done with a single EGL context (it wouldn't have to be transparent either, so the system would have only 1 dispmanx window). But apparently no such support exists.
It helps a lot. While I'm not sure about performance, it removes the funkiness and the HDMI instability. But now I get tearing.
(Also, I guess the HDMI funkiness happens because the scaler process misses its deadline...)
I'd do it like Kodi. One MMAL video layer with one EGL layer on top (largely transparent). You can update the video and EGL layers asynchronously, so subtitles may only cause the overlay to change every couple of seconds, while the video runs at 24 fps.
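The layering above can be sketched roughly as follows. This is a minimal, untested sketch assuming `bcm_host_init()` has already been called; `VIDEO_LAYER` is an assumed constant for whichever dispmanx layer the MMAL video renderer uses, and the overlay is placed one layer above it with source alpha enabled so transparent pixels show the video below:

```c
// Sketch: a transparent EGL overlay one dispmanx layer above the video.
#include <bcm_host.h>
#include <EGL/egl.h>
#include <EGL/eglplatform.h>

#define VIDEO_LAYER 1  /* assumption: the MMAL video render lives here */

static EGL_DISPMANX_WINDOW_T make_overlay_window(int width, int height)
{
    DISPMANX_DISPLAY_HANDLE_T dpy = vc_dispmanx_display_open(0);
    DISPMANX_UPDATE_HANDLE_T update = vc_dispmanx_update_start(0);

    /* Source rect is in 16.16 fixed point; dest rect is in pixels. */
    VC_RECT_T src = { 0, 0, width << 16, height << 16 };
    VC_RECT_T dst = { 0, 0, width, height };

    /* Use the overlay's own alpha channel for compositing, so pixels
     * rendered with alpha 0 let the video layer below show through. */
    VC_DISPMANX_ALPHA_T alpha = {
        DISPMANX_FLAGS_ALPHA_FROM_SOURCE, 255, 0
    };

    DISPMANX_ELEMENT_HANDLE_T elem = vc_dispmanx_element_add(
        update, dpy, VIDEO_LAYER + 1,   /* one layer above the video */
        &dst, 0, &src, DISPMANX_PROTECTION_NONE,
        &alpha, NULL, DISPMANX_NO_ROTATE);
    vc_dispmanx_update_submit_sync(update);

    EGL_DISPMANX_WINDOW_T win = { elem, width, height };
    return win;
}
```

The returned `EGL_DISPMANX_WINDOW_T` is then passed to `eglCreateWindowSurface`; the EGLConfig must include an alpha channel (`EGL_ALPHA_SIZE 8`) for the transparency to actually work.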
With the background layer avoided, that's a difference of just 1 dispmanx layer. Why does one additional layer kill it? Both EGL layers are mostly static too. Does an EGL layer have an additional intrinsic cost to the compositor?

Making the subtitle and UI renderer share the same context would be somewhat involved for me, because the UI and the video renderer live in separate threads. Only the OpenGL renderer has a special API that allows it to be used from a different thread. Even if I used the OpenGL renderer and added a special hack that disables the actual video rendering and uses an MMAL overlay instead, eglSwapBuffers would still have to be called every frame due to the way the API works. The UI API this was mostly developed for (Qt) has the same requirement: you can't just "skip" rendering. This would mean subtitles would be re-rendered on every frame, which doesn't sound ideal.

Also, why does the other EGL context apparently become unusable when one EGL context is destroyed? This could very well be my own fault, so a confirmation that this does not normally happen would be nice.
Vertically resized layers require a number of lines of full-width context. You can have a maximum of 3 such layers at 1920 pixels wide. Kodi does use a second EGL context for submitting jpeg textures off the main thread (https://github.com/xbmc/xbmc/blob/master/xbmc/cores/omxplayer/OMXImage.cpp#L389) and I've not had an issue doing this.
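For reference, the off-thread pattern mentioned above boils down to creating a second context that shares objects with the main one. A hedged sketch (the function name and the choice of a 1x1 pbuffer are illustrative, not taken from Kodi's code; `display`, `config`, and `main_ctx` are assumed to be the same ones used by the main rendering context):

```c
// Sketch: a second EGL context sharing the texture namespace with the
// main context, for uploading textures from a worker thread.
#include <EGL/egl.h>

EGLContext make_upload_context(EGLDisplay display, EGLConfig config,
                               EGLContext main_ctx)
{
    static const EGLint ctx_attribs[] = {
        EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE
    };
    /* Passing main_ctx as the share_context argument makes textures
     * created here visible to the main context, and vice versa. */
    EGLContext ctx = eglCreateContext(display, config, main_ctx,
                                      ctx_attribs);

    /* A tiny pbuffer is enough to make the context current on the
     * worker thread; no visible surface is needed just for uploads. */
    static const EGLint pb_attribs[] = {
        EGL_WIDTH, 1, EGL_HEIGHT, 1, EGL_NONE
    };
    EGLSurface surf = eglCreatePbufferSurface(display, config,
                                              pb_attribs);
    eglMakeCurrent(display, surf, surf, ctx);
    return ctx;
}
```

The `eglMakeCurrent` call must happen on the worker thread itself, since an EGL context can be current on only one thread at a time.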
With the background layer gone, it performs amazingly better. (I also made sure the console framebuffer is gone.)
This still happens, and I'm not sure why. I'm sure there's no HDMI mode change. Creating an EGL context and showing it on the screen doesn't break it either. I need to do more tests.
OK, it was because I called eglTerminate. I should have read the fine print: there's only one global EGL display, so of course terminating it trashes the other EGL contexts in the process as well. Sorry for the trouble!
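In other words, tearing down one overlay should destroy only that overlay's own objects, and `eglTerminate` should run once, at process exit. A minimal sketch of that teardown (function name is illustrative):

```c
// Sketch: destroying one overlay without taking the other one down.
// eglTerminate() invalidates the single process-wide EGLDisplay, which
// is why it must not be called per-overlay.
#include <EGL/egl.h>

void destroy_overlay(EGLDisplay display, EGLSurface surface,
                     EGLContext context)
{
    /* Unbind before destroying, in case this context is current. */
    eglMakeCurrent(display, EGL_NO_SURFACE, EGL_NO_SURFACE,
                   EGL_NO_CONTEXT);
    eglDestroySurface(display, surface);
    eglDestroyContext(display, context);
    /* Do NOT call eglTerminate(display) here while other contexts on
     * the same display are still alive; defer it to process exit. */
}
```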
@wm4 I know it's an old issue, but maybe you can help me set this up correctly. You wrote:
Can you give me an example of how you achieved this? The dispmanx and EGL surface creation would be the most interesting bits. The layering I got right after some tinkering; now I'm stuck creating a proper transparent surface and swapping it to the frame buffer. It still shows black as the background color.
There's some code, but most of the GL code is heavily abstracted: https://github.com/mpv-player/mpv/blob/master/video/out/vo_rpi.c#L391
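A black background on an otherwise working overlay usually means the surface has no alpha channel, or it's being cleared to opaque black. A hedged sketch of the two pieces that typically matter (attribute list and clear color are standard EGL/GLES usage, not copied from vo_rpi.c):

```c
// Sketch: an EGLConfig with an alpha channel, and a fully transparent
// clear, so the layers below the overlay show through.
#include <EGL/egl.h>
#include <GLES2/gl2.h>

static const EGLint cfg_attribs[] = {
    EGL_RED_SIZE,        8,
    EGL_GREEN_SIZE,      8,
    EGL_BLUE_SIZE,       8,
    EGL_ALPHA_SIZE,      8,   /* without this, the background is black */
    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
    EGL_NONE
};

void clear_to_transparent(EGLDisplay display, EGLSurface surface)
{
    /* Alpha 0.0f = fully transparent, not opaque black. */
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    eglSwapBuffers(display, surface);
}
```

The dispmanx element also has to be added with `DISPMANX_FLAGS_ALPHA_FROM_SOURCE` for the per-pixel alpha to take effect during compositing.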
hey thanks. i'm looking into that. just want to confirm: your osd-window's bg is transparent on rpi, right? |
Yes. |
Is it possible to create 2 separate EGL contexts on 2 different screens, each with their own dispmanx displays and windows? They're supposed to be transparent and overlay each other.
I tried this, and while EGL-overlay-over-MMAL-output works just fine, I'm getting funky results if another EGL window is overlaid. I get visual corruption (like flickering) and eventually everything goes to hell, with eglSwapBuffers frequently failing.