continuous action (e.g. for visualisation, screensaver) of boblight would be nice.... #2

matrix321 opened this Issue Feb 8, 2012 · 13 comments





Especially for projectM fans listening to music and watching boblight ;) it would be nice to implement this function.

Memphiz commented Feb 8, 2012

Yeah that would be nice :)


I'd love this too. Shouldn't this xbmc/xbmc#448 have made it possible?

Memphiz commented Jan 16, 2013

No, 448 only gives us boblight information during video rendering (that's what this addon uses). This request is not possible atm, nor did we find a suitable idea on how to realise it yet. (And we talked a lot about it already - so it's unlikely that anything you could propose will help us - if you are not really into the XBMC code ...)


Ok. I'd be interested in a technical explanation. A link to the discussion / user group would suffice.
As you can already grab frames from hardware-decoded video (which is the harder part imo) I see no reason why a simple frame grab wouldn't work. Technically (at least with OpenGL which I'm familiar with) there's no reason this is impossible (yes, it's slow).
Wouldn't it even be possible to let the GPU do the screen region->pixel conversion for you, using a custom-generated pixel shader and a streaming VBO to save GPU->system memory bandwidth? Grab the context, wait for all renders to finish, render a screen-sized quad with the pixel shader, read back the VBO, swap...


We render a videoframe to a 64x64 pixel image and read that back from OpenGL. This works quite well because rendering a single videoframe is very fast; however, if we want the entire GUI we either have to render the entire GUI to 64x64 pixels or copy the backbuffer to a texture, and both operations take a lot of resources.


Thanks for the explanation! Obviously re-rendering the GUI would be too slow. I can't imagine copying 4 kB from the backbuffer/PBO/FBO(?) is that slow. It shouldn't take more than a couple of milliseconds max. Is it because of the buffer/state changes? If bandwidth is the limiting factor the shader approach might help...


...well, 16 kB, but anyway ;)
At least the approach would be generic and work on all screens, not just video.


Don't want to be nagging, but I've whipped up some example code and ran some tests, which I've summed up here:
Using the right method, the impact on the frame rate when reading a 160x90x32bit (56 kB) downsampled frame buffer to system memory is pretty low (0.4-1.0 ms) on the systems I've tested, especially when the frame rate is not high to begin with. Though I've only tested on Windows and on desktop/notebook hardware, I'm pretty sure the impact is manageable on other systems too. Also, this would be a user-toggled feature...
I'll port the code to Ubuntu when I find the time and run some tests to see how it performs on Linux.

I'd love to hear what you think about this.


Oh, and regarding extra memory usage: you actually only need the downsampled framebuffer, or rather the texture for it, which is not much (I've updated the blog + code to reflect that). Space in system memory for the downsampled data should already have been reserved anyway...
Well, you need OpenGL 3.0 or GL_EXT_framebuffer_blit for glBlitFramebuffer to work, but one could test whether it exists and, if not, use the render-to-FBO-via-screen-sized-quads approach instead. That's basically it.


The example now builds and runs on Windows (WGL), Ubuntu (GLX) and Raspbian (EGL). The impact is minimal on most systems, as you can see from the benchmark results...


@HorstBaerbel was there any more progress on this? I am guessing you don't know how to integrate your idea into XBMC core?


Correct. There's a discussion about it here too.


Thanks for info!
