
Running 2 EGL contexts at once #465

Closed
ghost opened this issue Aug 20, 2015 · 17 comments

ghost commented Aug 20, 2015

Is it possible to create 2 separate EGL contexts on 2 different screens, each with their own dispmanx displays and windows? They're supposed to be transparent and overlay each other.

I tried this, and while EGL-overlay-over-MMAL-output works just fine, I'm getting funky results if another EGL window is overlaid. I get visual corruption (like flickering) and eventually everything goes to hell, with eglSwapBuffers apparently failing frequently.
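For reference, a per-overlay setup along these lines might look like the sketch below. This is illustrative, not the poster's actual code; `overlay_t` and `overlay_init` are made-up names. The key point is that each thread gets its own dispmanx element, EGL context, and window surface, but `eglGetDisplay(EGL_DEFAULT_DISPLAY)` hands every thread the same process-wide display:

```c
#include <EGL/egl.h>
#include <bcm_host.h>

typedef struct {
    EGLDisplay dpy;
    EGLContext ctx;
    EGLSurface surf;
    EGL_DISPMANX_WINDOW_T win;   // native window: dispmanx element + size
} overlay_t;

static int overlay_init(overlay_t *o, DISPMANX_DISPLAY_HANDLE_T disp,
                        int layer, int w, int h)
{
    VC_RECT_T dst = { 0, 0, w, h };
    VC_RECT_T src = { 0, 0, w << 16, h << 16 };      // src rect is 16.16 fixed point
    VC_DISPMANX_ALPHA_T alpha = {
        DISPMANX_FLAGS_ALPHA_FROM_SOURCE, 255, 0     // blend using the GL alpha channel
    };

    // Create the dispmanx element this EGL surface will render into.
    DISPMANX_UPDATE_HANDLE_T up = vc_dispmanx_update_start(0);
    o->win.element = vc_dispmanx_element_add(up, disp, layer, &dst, 0, &src,
                                             DISPMANX_PROTECTION_NONE,
                                             &alpha, NULL, DISPMANX_NO_ROTATE);
    vc_dispmanx_update_submit_sync(up);
    o->win.width = w;
    o->win.height = h;

    static const EGLint cfg_attr[] = {
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_ALPHA_SIZE, 8, EGL_NONE
    };
    static const EGLint ctx_attr[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLConfig cfg;
    EGLint n;

    o->dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);     // same display in every thread
    eglInitialize(o->dpy, NULL, NULL);
    eglChooseConfig(o->dpy, cfg_attr, &cfg, 1, &n);
    o->ctx = eglCreateContext(o->dpy, cfg, EGL_NO_CONTEXT, ctx_attr);
    o->surf = eglCreateWindowSurface(o->dpy, cfg, &o->win, NULL);
    return eglMakeCurrent(o->dpy, o->surf, o->surf, o->ctx) == EGL_TRUE;
}
```

Because the display is shared, teardown of one overlay must be limited to its own context and surface; this matters for the failure discussed later in the thread.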


ghost commented Aug 20, 2015

And now it looks like the funky results are a different issue (maybe?). But the eglSwapBuffers failure still seems to happen. It's like when the 2nd EGL overlay (used for subtitles) is destroyed, the 1st overlay also stops working. When the eglSwapBuffers call fails, eglGetError returns EGL_NOT_INITIALIZED.


ghost commented Aug 20, 2015

Could it be that using 2 EGL contexts at the same time uses a LOT of energy? The funky behavior I'm seeing might be due to the HDMI signal dropping in some way, and when this happens the power LED doesn't even bother blinking.

popcornmix (Contributor) commented

What do you mean by different screens?
Can you describe the list of dispmanx overlays in use and source/dest dimensions and the type of display they are output to?


ghost commented Aug 20, 2015

> What do you mean by different screens?

Sorry, different threads. On one screen.

> Can you describe the list of dispmanx overlays in use and source/dest dimensions and the type of display they are output to?

All dispmanx elements are fullscreen, both source and destination. From background to foreground:

  1. Background (DISPMANX_FLAGS_ALPHA_FIXED_ALL_PIXELS), screen sized source
  2. MMAL renderer (I suppose this doesn't use dispmanx directly)
  3. Subtitle renderer (DISPMANX_FLAGS_ALPHA_FROM_SOURCE), with an EGL context
  4. UI (DISPMANX_FLAGS_ALPHA_FROM_SOURCE), with an EGL context

Results seemed better than with dispmanx, but when the UI was added on top, everything got trashed, unfortunately.
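The alpha modes in the stack above map to `VC_DISPMANX_ALPHA_T` values passed to `vc_dispmanx_element_add()`. A minimal sketch of the difference (illustrative names, not the poster's code):

```c
#include <bcm_host.h>

// Background layer: ignore any per-pixel alpha, treat every pixel as opaque.
VC_DISPMANX_ALPHA_T bg_alpha = {
    DISPMANX_FLAGS_ALPHA_FIXED_ALL_PIXELS, 255, 0
};

// Subtitle/UI overlays: blend using the alpha channel the EGL context renders.
VC_DISPMANX_ALPHA_T osd_alpha = {
    DISPMANX_FLAGS_ALPHA_FROM_SOURCE, 255, 0
};

// Stacking order is controlled by the `layer` argument to
// vc_dispmanx_element_add(): background at the lowest layer number,
// then the MMAL video renderer, subtitles, and UI above it.
```

The third struct field is an optional alpha mask resource handle (0 here for none); the second is the fixed opacity used by `FIXED_ALL_PIXELS`.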


ghost commented Aug 20, 2015

> Results seemed better than with dispmanx,

Which refers to trying to render subtitles with various dispmanx elements.

popcornmix (Contributor) commented

The HVS (hardware video scaler) is a real-time resource that needs to fetch from memory, format convert, scale, and composite the layers. If you keep adding layers then eventually you reach a limit in context memory (used for vertical resizing), display list memory (linked list of all elements), SDRAM bandwidth, or pixel processing bandwidth.

If you add dispmanx_offline=1 to config.txt then composition will be done to an offscreen buffer, possibly in multiple passes. That will allow more complex display lists, but will reduce performance. Firstly, does that option make a difference?


ghost commented Aug 20, 2015

> The HVS (hardware video scaler) is a real-time resource that needs to fetch from memory, format convert, scale, and composite the layers. If you keep adding layers then eventually you reach a limit in context memory (used for vertical resizing), display list memory (linked list of all elements), SDRAM bandwidth, or pixel processing bandwidth.

What's the ideal way to render video, subtitles, and UI? It would be nice if there was a way to map an MMAL surface as a texture. Then everything could just be done on a single EGL context (it wouldn't have to be transparent either, so the system would have only 1 dispmanx window). But apparently no such support exists.

> If you add dispmanx_offline=1 to config.txt then composition will be done to an offscreen buffer, possibly in multiple passes. That will allow more complex display lists, but will reduce performance. Firstly, does that option make a difference?

It helps a lot. While I'm not sure about performance, it removes the funkiness and the HDMI instability. But now I get tearing.


ghost commented Aug 20, 2015

(Also, I guess the HDMI funkiness happens due to the scaler process missing its deadline...)

popcornmix (Contributor) commented

> What's the ideal way to render video, subtitles, and UI?

I'd do it like Kodi. One mmal video layer with one EGL layer on top (largely transparent).
For best performance kill the console framebuffer (you can use fbset -xres 1 -yres 1 -vxres 1 -vyres 1 to make it virtually free), so there are just two overlays present.

You can update the video and EGL layers asynchronously, so subs may only cause the overlay to change every couple of seconds, but the video will be running at 24 fps.


ghost commented Aug 20, 2015

> I'd do it like Kodi. One mmal video layer with one EGL layer on top (largely transparent).

With the background layer avoided, that's just a difference of 1 dispmanx layer. Why does one additional layer kill it? Both EGL layers are mostly static too. Does EGL have an additional intrinsic cost for the compositor?

Making the subtitle and UI renderers share the same context would be somewhat involved for me, because the UI and video renderer live in separate threads. Only the OpenGL renderer has a special API that allows it to be used from a different thread. Even if I used the OpenGL renderer and added a special hack that disables the actual video rendering and uses an MMAL overlay instead, eglSwapBuffers would still have to be called every frame due to the way the API works. The UI API for which this was mostly developed (Qt) has the same requirement: you can't just "skip" rendering. This would mean subtitles would be rendered again on every frame, which doesn't sound ideal.

Also, why does the other EGL context become apparently unusable when one EGL context is destroyed? This could very well be my own fault, so a confirmation that this does not normally happen would be nice.

popcornmix (Contributor) commented

Vertically resized layers require a number of lines of full-width context. You can have a maximum of 3 such layers at 1920 pixels wide.
Did you remove the framebuffer console?
I'm not aware of an EGL context being destroyed when you destroy another context. You will likely lose it if you close the display (e.g. an HDMI mode change).

Kodi does use a second EGL context for submitting JPEG textures off the main thread (https://github.com/xbmc/xbmc/blob/master/xbmc/cores/omxplayer/OMXImage.cpp#L389) and I've not had an issue doing this.
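A sketch of that pattern (not Kodi's exact code; `make_upload_context` is an illustrative name): the worker thread gets its own context that shares objects with the main context, so textures uploaded off the main thread are usable by the main renderer.

```c
#include <EGL/egl.h>

static const EGLint ctx_attr[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };

// Passing main_ctx as the share_context argument makes texture and buffer
// names created in either context valid in both.
EGLContext make_upload_context(EGLDisplay dpy, EGLConfig cfg,
                               EGLContext main_ctx)
{
    return eglCreateContext(dpy, cfg, main_ctx, ctx_attr);
}
```

In the worker thread, bind the returned context to a small pbuffer surface with `eglMakeCurrent`, upload with `glTexImage2D`, and `glFlush` before the main thread samples the texture.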


ghost commented Aug 20, 2015

> Vertically resized layers require a number of lines of full-width context. You can have a maximum of 3 such layers at 1920 pixels wide.

With the background layer gone, it performs amazingly better. (I also made sure the console framebuffer is gone.)

> I'm not aware of an EGL context being destroyed when you destroy another context. You will likely lose it if you close the display (e.g. an HDMI mode change).

This still happens, and I'm not sure why. I'm sure there's no HDMI mode change. Creating an EGL context and showing it on the screen doesn't break it either. I need to do more tests.


ghost commented Aug 31, 2015

OK, it was because I called eglTerminate. I should have read the fine print - there's only one global EGL display, so of course terminating it trashes other EGL contexts in the process as well.
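The pitfall can be sketched as follows (illustrative; EGL guarantees the default display is a single shared handle):

```c
#include <EGL/egl.h>

EGLDisplay d1 = eglGetDisplay(EGL_DEFAULT_DISPLAY);
EGLDisplay d2 = eglGetDisplay(EGL_DEFAULT_DISPLAY);
// d1 == d2: every "independent" renderer in the process shares one display.

eglTerminate(d1);   // marks d2's contexts and surfaces invalid as well;
                    // subsequent eglSwapBuffers on them fails with
                    // EGL_NOT_INITIALIZED

// Safer teardown for one renderer: release only its own objects with
// eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT),
// eglDestroySurface() and eglDestroyContext(), and call eglTerminate()
// only once, at process exit.
```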

Sorry for the trouble!

ghost closed this issue as completed Aug 31, 2015

krisrok commented Oct 20, 2017

@wm4 I know it's an old issue, but maybe you can help me set this up correctly. You wrote:

> Subtitle renderer (DISPMANX_FLAGS_ALPHA_FROM_SOURCE), with an EGL context

Can you give me an example of how you achieved this? The dispmanx and EGL surface creation would be the most interesting bits.

The layering I got right after some tinkering; now I'm stuck creating a proper transparent surface and swapping it to the framebuffer. It still shows black as the background color.
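For a black-background symptom like this, two things usually matter (illustrative fragments, not krisrok's or wm4's code): the EGLConfig must actually have an alpha channel, and the clear color's alpha must be 0 so the layers underneath show through.

```c
#include <EGL/egl.h>
#include <GLES2/gl2.h>

// Config request: without EGL_ALPHA_SIZE the chosen config often has no
// alpha bits, and the overlay composites as opaque.
static const EGLint cfg_attr[] = {
    EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
    EGL_ALPHA_SIZE, 8,
    EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
    EGL_NONE
};

// Per frame, with the context current on the overlay surface:
void draw_overlay_frame(EGLDisplay dpy, EGLSurface surf)
{
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);  // alpha 0 -> transparent background
    glClear(GL_COLOR_BUFFER_BIT);
    /* draw subtitles/UI here */
    eglSwapBuffers(dpy, surf);
}
```

Two caveats: the dispmanx element must be added with DISPMANX_FLAGS_ALPHA_FROM_SOURCE (as quoted above), and eglChooseConfig treats EGL_ALPHA_SIZE as a minimum, so it is worth verifying the returned config with eglGetConfigAttrib(dpy, cfg, EGL_ALPHA_SIZE, &bits).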


ghost commented Oct 20, 2017

There's some code, but most GL code is very abstracted: https://github.com/mpv-player/mpv/blob/master/video/out/vo_rpi.c#L391


krisrok commented Oct 20, 2017

Hey, thanks. I'm looking into that. Just want to confirm: your OSD window's background is transparent on the RPi, right?


ghost commented Oct 20, 2017

Yes.
