
Support OpenMAX decoding #528

Closed
eternaleye opened this issue Feb 6, 2014 · 17 comments

@eternaleye

AMD is currently working on upstreaming an OpenMAX Gallium State Tracker into Mesa for their video encode support, but it also supports decoding H.264 and MPEG-2.

Currently, the only supported API for accessing Mesa's hardware decoding is VDPAU, which does not work under Wayland (and Mesa dropped its VA-API support due to bitrot and lack of maintenance).

OpenMAX doesn't have any such limitation, and would (going forward) allow non-Intel accelerated playback on Wayland. Additionally, such support would be useful on various embedded systems that provide OpenMAX support for their hardware accelerators.

Christian König's git tree for the encode support, including the OpenMAX State Tracker: http://cgit.freedesktop.org/~deathsimple/mesa/log/?h=vce-release

@ghost added the enhancement label Feb 6, 2014
@ghost commented Feb 6, 2014

I don't see this happening, because

  1. ffmpeg doesn't support OpenMAX
  2. OpenMAX probably doesn't fit into our decoding model at all (though I'm not sure how OpenMAX works in the first place)
  3. The only machine I had access to that actually used Mesa basically broke down

So, in addition to the question of whether this is even feasible, it would need someone to work on it.

@ghost commented Feb 6, 2014

PS: I would very much prefer if Wayland actually added vdpau support.

@eternaleye (Author)

The problem regarding Wayland and VDPAU is less that Wayland doesn't support VDPAU, and more that VDPAU doesn't support Wayland - the VDPAU API would need an extension to its Window System Integration Layer to support anything other than X11 at all. Considering NVidia's rather lacking enthusiasm when it comes to Wayland, that... doesn't seem hugely likely.

http://http.download.nvidia.com/XFree86/vdpau/doxygen/html/group__api__winsys.html

@eternaleye (Author)

As far as OpenMAX goes, part of the problem is that there are three different things all called "OpenMAX":

  • OpenMAX AL, which is a high-level interface that would be basically useless for mpv
  • OpenMAX IL, which is lower-level and is (very) roughly comparable to gstreamer elements/pipelines
  • OpenMAX DL, which is lower-level still and implements primitives like the IDCT.

OpenMAX IL is what mpv would be dealing with.
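
To make the IL flavor concrete, here is a minimal sketch (not mpv code) of bringing up an IL decoder component through the standard Khronos core API from OMX_Core.h. The component name is a made-up placeholder; real names are vendor-specific and would be discovered via OMX_ComponentNameEnum()/OMX_GetRolesOfComponent().

```c
/* Minimal sketch: bring up an OpenMAX IL decoder component.
 * Hedged example; the component name below is hypothetical. */
#include <OMX_Core.h>
#include <OMX_Component.h>

static OMX_ERRORTYPE on_event(OMX_HANDLETYPE comp, OMX_PTR app,
                              OMX_EVENTTYPE ev, OMX_U32 data1, OMX_U32 data2,
                              OMX_PTR event_data)
{
    /* State changes, errors and port reconfigurations arrive here. */
    return OMX_ErrorNone;
}

static OMX_ERRORTYPE on_empty_done(OMX_HANDLETYPE comp, OMX_PTR app,
                                   OMX_BUFFERHEADERTYPE *buf)
{
    /* The component consumed an input (bitstream) buffer. */
    return OMX_ErrorNone;
}

static OMX_ERRORTYPE on_fill_done(OMX_HANDLETYPE comp, OMX_PTR app,
                                  OMX_BUFFERHEADERTYPE *buf)
{
    /* A decoded output buffer is ready. */
    return OMX_ErrorNone;
}

int main(void)
{
    OMX_CALLBACKTYPE cb = { on_event, on_empty_done, on_fill_done };
    OMX_HANDLETYPE dec;

    if (OMX_Init() != OMX_ErrorNone)
        return 1;

    /* "OMX.example.video_decoder.avc" is a placeholder; real names are
     * vendor-specific and enumerated via OMX_ComponentNameEnum(). */
    if (OMX_GetHandle(&dec, (OMX_STRING)"OMX.example.video_decoder.avc",
                      NULL, &cb) != OMX_ErrorNone) {
        OMX_Deinit();
        return 1;
    }

    /* A real decoder would now configure the ports, allocate buffers with
     * OMX_AllocateBuffer(), and walk the Loaded -> Idle -> Executing state
     * machine via OMX_SendCommand(OMX_CommandStateSet, ...). */
    OMX_FreeHandle(dec);
    OMX_Deinit();
    return 0;
}
```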

@ghost commented Feb 6, 2014

Well, the windowing-API-specific parts of the vdpau API are utterly trivial. Even if nvidia comes up with its own Wayland-specific APIs after Mesa adds some, the fallout from the API change would probably be pretty small.
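
For illustration, the entire X11-specific surface of VDPAU is a single device constructor in vdpau_x11.h; everything else is window-system-agnostic and fetched through get_proc_address. A minimal sketch (a hypothetical Wayland backend would only need one analogous call):

```c
#include <vdpau/vdpau.h>
#include <vdpau/vdpau_x11.h>

VdpDevice create_vdpau_device(Display *display, int screen)
{
    VdpDevice device;
    VdpGetProcAddress *get_proc_address;

    /* The one and only window-system-specific entry point. */
    if (vdp_device_create_x11(display, screen,
                              &device, &get_proc_address) != VDP_STATUS_OK)
        return VDP_INVALID_HANDLE;

    /* Decoders, video surfaces and the presentation queue are all obtained
     * via get_proc_address(device, VDP_FUNC_ID_*, ...) from here on. */
    return device;
}
```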

If anyone wants to try implementing OpenMAX support - sure.

@ghost commented Feb 6, 2014

By the way, it seems VLC supports OpenMAX: http://git.videolan.org/?p=vlc.git;a=tree;f=modules/codec/omxil

Apparently they need thousands of lines of code for this.

@pigoz (Member) commented Feb 6, 2014

Isn't OpenMAX available through an FFmpeg HWAccel? If not, the code should be contributed there, and mpv should only have "glue code".

@ghost commented Feb 6, 2014

Isn't OpenMAX available through an FFmpeg HWAccel?

No. I think OpenMAX IL might be higher level than hwaccel too.

@eternaleye (Author)

Well, VLC has less code for it than it might seem.

Every single one of those OMX_*.h headers is upstream from Khronos, the standards organization behind OpenGL/OpenMAX/etc. - VLC just bundles them.

The android_* stuff is to tie into the Dalvik VM.

qcom.[ch] is hardware-specific for Qualcomm's tiled buffer format.

A number of the functions in the remaining files are VLC-specific.

etc.

@ghost commented Feb 6, 2014

I see. Maybe it's more feasible than I initially thought. Still needs a developer though.

@giselher (Member) commented Feb 6, 2014

I will look at it once Mesa has a stable release with OpenMAX, because I use Mesa and radeon.
There are some other changes on my list that I want to finish first, but I still have some issues with those.

@ghost added the low priority label Apr 8, 2014
@Yomi0 (Contributor) commented Jul 8, 2014

@giselher I'd be willing to help test it once you get started, if you need a tester.

@giselher (Member) commented Jul 8, 2014

@Yomi0 thanks for the offer, but I have actually lost interest.
I wrote a small program that sets up the resources and the different parts of the API, but I stopped before I could test the first decoding sample.

@ghedo (Member) commented Jul 9, 2014

So, uh, I looked into implementing OpenMAX IL support in ffmpeg/libav a while back. In the end I didn't really write any code (I'm still kinda interested in it, but it's not very high priority), but here are some thoughts:

  1. OpenMAX allows chaining different components together (e.g. decoder -> filter -> filter -> output), but most implementations only provide the decoder component, so, in most cases, it would just work like the vaapi-copy hwdec: pass the data to the decoder, get the result back and pass it to the vo (you'd still need to get the buffer back for rendering subtitles and OSD anyway, since only the Raspberry Pi seems to have a subtitles/text rendering component).
  2. It also allows storing a component's output into an EGLImage. Now, I'm not much of a GL expert, so I don't know if this can be useful (e.g. with the wayland vo), but the EGLImage thingy kinda looks like a memory buffer to me, so probably not very helpful.
  3. There is no "standard" library for OpenMAX (like, say, libvdpau or libva). Every platform does as it pleases, so e.g. on Linux/mesa you'd use libomxil-bellagio (whose last release was in 2011...) plus the mesa drivers, on android there's the libstagefright library (which is already supported by ffmpeg btw), on the Raspberry Pi there's the broadcom library etc... They all provide the OpenMAX IL API, but you have to link to the right one at build-time, or do as VLC does and try to dlopen() all of them at runtime until one succeeds (see the loader sketch after this comment).
  4. Implementations also vary in quality (e.g. the Raspberry Pi library doesn't support component discovery by role, so you'd basically have to hardcode the decoder component's name instead of asking for, say, "decoder.avc"), and it also requires you to call the bcm_host_init() function provided by a different library before you do anything.
  5. This may be just me, but I wasn't even able to implement a simple OpenMAX decoder. The fact that I couldn't find any simple example (neither VLC nor gst-omx qualify) didn't help either (all I could find was code for the Raspberry Pi, using the platform-specific helper functions provided by another rpi library).

A starting point would probably be to implement a simple ffmpeg decoder (without the hwdec stuff) and see how it goes, but as I said, I wasn't even able to write a little program for decoding a video, so unless some new magical documentation suddenly appears, it's gonna take me a whole lot of time to do it.
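
For what it's worth, the runtime loading dance from point 3 is fairly mechanical. A hedged sketch; the library names below are illustrative examples, not an authoritative list:

```c
/* VLC-style runtime loader: probe a list of known OpenMAX IL core
 * libraries with dlopen() until one works. */
#include <dlfcn.h>
#include <stddef.h>
#include <OMX_Core.h>

typedef OMX_ERRORTYPE (*omx_init_fn)(void);

void *load_omx_il_core(void)
{
    static const char *const candidates[] = {
        "libomxil-bellagio.so.0", /* desktop Linux / Mesa */
        "libOmxCore.so",          /* Android-derived stacks */
        "libopenmaxil.so",        /* Raspberry Pi; also needs
                                     bcm_host_init() from libbcm_host */
    };

    for (size_t i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++) {
        void *lib = dlopen(candidates[i], RTLD_NOW | RTLD_LOCAL);
        if (!lib)
            continue;
        omx_init_fn omx_init = (omx_init_fn)dlsym(lib, "OMX_Init");
        if (omx_init && omx_init() == OMX_ErrorNone)
            return lib; /* keep the handle, dlsym() the rest of the API */
        dlclose(lib);
    }
    return NULL;
}
```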

@ghost commented Nov 4, 2014

It also allows storing a component's output into an EGLImage. Now, I'm not much of a GL expert, so I don't know if this can be useful (e.g. with the wayland vo), but the EGLImage thingy kinda looks like a memory buffer to me, so probably not very helpful.

I believe this would be fine. You can probably use it as a texture. (I'm not sure myself - it's all so confusing. Calling this stuff "portable" is just bullshit, since these specs seem to only provide "concepts", while the details vary with whatever each vendor does.)
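
A hedged sketch of what that interop could look like, assuming a component that actually implements OMX_UseEGLImage (many don't) and a GL stack exposing GL_OES_EGL_image; illustrative, not tested code:

```c
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <OMX_Core.h>

/* Give the decoder an EGLImage as output storage, then alias the same
 * image as the backing store of a GL texture. */
OMX_BUFFERHEADERTYPE *bind_output_to_texture(OMX_HANDLETYPE decoder,
                                             OMX_U32 out_port,
                                             EGLImageKHR image,
                                             GLuint texture)
{
    OMX_BUFFERHEADERTYPE *hdr = NULL;
    if (OMX_UseEGLImage(decoder, &hdr, out_port, NULL, image) != OMX_ErrorNone)
        return NULL;

    /* GL_OES_EGL_image entry point; resolved at runtime since it is an
     * extension. */
    PFNGLEGLIMAGETARGETTEXTURE2DOESPROC target_texture =
        (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
            eglGetProcAddress("glEGLImageTargetTexture2DOES");
    if (!target_texture)
        return NULL;

    glBindTexture(GL_TEXTURE_2D, texture);
    target_texture(GL_TEXTURE_2D, (GLeglImageOES)image);
    /* Each FillBufferDone on hdr now updates what this texture samples. */
    return hdr;
}
```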

@ghost commented Feb 3, 2015

Seems like on wayland, vaapi is generally preferred - though that is probably because many Intel devs work on wayland, not because there's consistent support for it. OMX might be used on phones like the Jolla, which are basically built on Android hardware, where OMX is used at the lowest level of the drivers.

Anyway, I asked on #wayland about the status of this, and here's the channel log:

<wm4> what's the status on wayland and hardware video decoding?
<pq> wm4, works with vaapi and mesa, I think
<wm4> aha
<wm4> last I heard about it, it worked only partially
<wm4> decoding was fine, but output didn't include scaling, and such things
<pq> oh, such details.
<daniels> wm4: wl_scaler will do cropping/
<daniels> *cropping/scaling
<pq> yeah, weston has that
<wm4> is there working opengl interop?
<wm4> is it documented somewhere?
<daniels> wm4: vaapi has a wayland output module which works fine (at least, vaapisink on gstreamer - worked for me about 2 days ago), and there's an RFC dmabuf import protocol which works just fine so far with v4l2 m2m devices at least
<daniels> wm4: what do you mean by opengl interop?
<daniels> http://cgit.freedesktop.org/wayland/weston/tree/protocol/scaler.xml <- docs
<wm4> daniels: using a vaapi surface as GL texture, or copying it to a GL texture (without going over CPU)
<daniels> wm4: well, in that case it has nothing to do with wayland, it's just vaapi
<daniels> wm4: and you can ignore the scaler as well, since if you're forcing a blit and colour conversion through gl on the client side (which seems pointless tbh, just adds to memory load), then just do the scale through gl
<wm4> I'd prefer to get the raw yuv planes
<daniels> wm4: in that case, you can look at how vaapisink implements it for wayland
<wm4> thanks (so the documentation is "read gstreamer code" lol)
<daniels> https://gitorious.org/vaapi/gstreamer-vaapi/source/bb1b147180a07647f355e1e43453920a6c1db9db:gst-libs/gst/vaapi/gstvaapiwindow_wayland.c
<wm4> (still better than nothing)
<daniels> well yeah, vaapi interop with anything is rubbish tbqh
<daniels> last i heard, vaapi upstream were planning on adding generic dmabuf export, which you could then use in generic protocol when we finalise the dmabuf interface
<daniels> (which hasn't had any comments from any other media players, so i guess it'll just get merged when the current set of changes is finished)
<__gb__> wm4, you should use VPP for hwaccel transfers to dma_buf imported buffers -- that would be a better approach as you can get HQ scaling, deinterlacing et al. along the way
<daniels> __gb__: doesn't vpp result in an extra pass through the render ring?
<daniels> __gb__: (also, did the dmabuf export api get merged?)
<wm4> __gb__: I thought vpp is fine with vaapi surfaces as input and output
<__gb__> daniels, yes, yes
<daniels> __gb__: ah, cool :)
<wm4> I heard dma_buf works only with newer hw, so I can't even try that
<daniels> __gb__: wasn't sure whether it always did a resolve-to-rgb or not
<__gb__> in some conditions, it could be better to have yuv -> rgb|yuy2 (downscaled) -> display
<__gb__> wm4, that's a kernel feature, not hw
<wm4> __gb__: so it just depends on the kernel and driver support?
<__gb__> though, tbh, I only tested (personally) with kernel >= 3.17 -- and I am pretty sure this can work with earlier kernels
<__gb__> wm4, yes
<wm4> nice
<wm4> this dma_buf thing is basically all I ever wanted (passing surfaces without keeping the crap hwdec api context)
<__gb__> there is still a caveat, vaapi won't support imports from multiple planes/dma_buf forming a single surface
<__gb__> daniel (vetter) suggested the idea of a mega gem buf that wraps multiple sub buffers for that
<__gb__> don't know how far this went
<__gb__> wm4, what do you want in reality? how do you plan to render your buffers?
<wm4> __gb__: doing everything in opengl
<wm4> plus maybe some more "native" way via wayland native APIs
<__gb__> this is hardly useless unless you plan additional 3d effects?
<wm4> (so dedicated hw can be used on weak hw)
<__gb__> s/useless/useful/
<wm4> __gb__: our opengl renderer does its own yuv conversion and scaling, and we want to be able to control all the details
<__gb__> I don't remember about weston state nowadays, there are ways to directly use the display engine too
<__gb__> what kind of details?
<__gb__> the HW has fixed functions to allow flexible scaling/color conversion
<__gb__> i.e. thus not using any EU (shaders) at all
<daniels> wm4: it's not a matter of strong or weak - run large enough content through and you'll strain anyway - plus also a matter of power usage, since if you're slamming the gpu unnecessarily then you _will_ have much lower battery life than the alternatives
<wm4> the exact way yuv is converted to rgb, things like chroma sub pixel position...
<daniels> wm4: you can specify chroma siting and things like narrow/wide yuv clamping, y'know
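
For reference, the dmabuf export path __gb__ and daniels describe presumably runs along these lines with libva's vaDeriveImage() + vaAcquireBufferHandle() API, which landed around this time. A hedged sketch; driver support and fd-ownership details vary:

```c
#include <va/va.h>

int export_surface_as_dmabuf(VADisplay dpy, VASurfaceID surface)
{
    VAImage image;
    VABufferInfo info = { .mem_type = VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME };

    /* Derive a VAImage that aliases the surface's backing storage. */
    if (vaDeriveImage(dpy, surface, &image) != VA_STATUS_SUCCESS)
        return -1;

    if (vaAcquireBufferHandle(dpy, image.buf, &info) != VA_STATUS_SUCCESS) {
        vaDestroyImage(dpy, image.image_id);
        return -1;
    }

    /* info.handle is a dmabuf fd owned by the caller (close() it when
     * done); it can be imported into EGL or handed to the compositor. */
    int fd = (int)info.handle;

    vaReleaseBufferHandle(dpy, image.buf);
    vaDestroyImage(dpy, image.image_id);
    return fd;
}
```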

@ghost added the priority:wontfix label Jul 18, 2015
@ghost commented Jul 18, 2015

Closing this, as OpenMAX is not a valid way to get hw decoding, except on very exotic platforms, or hardware made for Android but running some desktop Linux.

@ghost closed this as completed Jul 18, 2015