Video filter for vaapi #246

Closed
wants to merge 8 commits

Conversation

@bylee20 commented Sep 17, 2013

A new video filter, vf_vaapi, is added.
This patch solves issue #231 and improves deinterlacing with vaapi.
Although this version supports bob deinterlacing only, it will be possible to add other filters that vaapi supports later.

@ghost commented Sep 17, 2013

Added some comments. (Click on the code links to see them embedded in context.)

Surface management is pretty confusing; that's because the decoder wants all surfaces it will ever use passed to it in advance, at least to my knowledge. This makes things complicated if we want mp_image ref counting to just work. See preallocate_surfaces() in vaapi.c. It probably clashes with what vf_vaapi does in interesting ways.
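
For reference, the preallocation in question boils down to roughly this. A minimal sketch using the newer eight-argument vaCreateSurfaces(); the pool size and function name here are made up, not what vaapi.c actually uses:

    #include <va/va.h>

    // Sketch: allocate every surface the decoder may ever touch up front,
    // since the decoder wants the full set before it starts rendering.
    #define MAX_DECODER_SURFACES 21  // arbitrary placeholder

    static VASurfaceID decoder_surfaces[MAX_DECODER_SURFACES];

    static int preallocate_surfaces_sketch(VADisplay display, int w, int h)
    {
        VAStatus status = vaCreateSurfaces(display, VA_RT_FORMAT_YUV420, w, h,
                                           decoder_surfaces,
                                           MAX_DECODER_SURFACES, NULL, 0);
        return status == VA_STATUS_SUCCESS ? 0 : -1;
    }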

One thing that is going to be a problem: just pressing 'D' or using the --deinterlace option won't enable deinterlacing unless the user has inserted vf_vaapi manually. I don't really have an idea how to fix this. It makes me think the deinterlacing code should probably be part of the VO or decoder instead. (Putting it in the decoder would be pretty crappy design-wise, but it would probably work quite well.)

@bylee20 commented Sep 17, 2013

Passing a context id to identify the caller would be good. Anyway, I'll reconsider surface management.

About activating the deinterlacer: if you want to include deinterlacing as part of the VO or decoder, vf_vaapi is not a good idea because it has its own pipeline. Filtering, including deinterlacing, should be a batch job, and that makes it impossible to separate the filter from the VO or decoder if the deinterlacer lives there.

Though there would be some special-casing for hwdec, how about making a full list of filters that support deinterlacing?

@ghost commented Sep 17, 2013

The question is just: what should happen when the user presses 'D' (or when --deinterlace is specified; this triggers exactly the same code)?

Currently, enabling deinterlacing using the property mechanism in command.c first tries to use the deinterlace VFCTRLs and VOCTRLs, and if these don't work, inserts vf_yadif (preferably the lavfi version). Maybe it would be possible to test whether vaapi hw decoding is enabled and insert vf_vaapi instead if that is the case. But I'd really like to avoid additional special casing. Maybe some other solution can be found?
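
Condensed, the flow described above is something like this (a sketch only; the struct and helper names are hypothetical stand-ins for the real command.c machinery):

    #include <stdbool.h>

    // Hypothetical prototypes standing in for mpv internals; only the
    // control flow matters here.
    struct player;
    bool try_vf_deinterlace(struct player *p, bool enable);   // VFCTRL path
    bool try_vo_deinterlace(struct player *p, bool enable);   // VOCTRL path
    void insert_video_filter(struct player *p, const char *name, const char *args);
    void remove_video_filter(struct player *p, const char *name);

    // Sketch of the property handler: controls first, then filter insertion
    // as a fallback (vf_vavpp would become another possible fallback).
    static void set_deinterlace_sketch(struct player *p, bool enable)
    {
        if (try_vf_deinterlace(p, enable) || try_vo_deinterlace(p, enable))
            return;
        if (enable)
            insert_video_filter(p, "lavfi", "graph=yadif");  // today's fallback
        else
            remove_video_filter(p, "lavfi");
    }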

* rename vf_vaapi to vf_va_vpp
* surface pool for decoding is separated from others
* pts handling in vf_va_vpp
* some redundant code is removed

@bylee20 commented Sep 18, 2013

if these don't work, inserts vf_yadif (preferably the lavfi version).

Why don't you just make a list of deinterlacer filters and try to insert each one, instead of just vf_yadif, when VFCTRL/VOCTRL fails? I think it is also possible to generate the deinterlacer list from the vf_info list by adding a deinterlace capability member or query function, plus default options for the deinterlacer.
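
Concretely, the table could be as simple as this (names and options hypothetical; the point is that the fallback code walks a list instead of hardcoding yadif):

    // Hypothetical candidate table for the "try each deinterlacer" idea.
    // The fallback code would try each entry in order until one opens.
    struct deint_candidate {
        const char *filter;  // video filter name
        const char *args;    // default options, if any
    };

    static const struct deint_candidate deint_candidates[] = {
        {"vavpp", NULL},           // VA-API VPP; only takes vaapi surfaces
        {"lavfi", "graph=yadif"},  // preferred software path
        {"yadif", NULL},           // built-in fallback
        {0}
    };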

This potentially makes it possible to turn on deinterlacing by inserting va_vpp.
The actual on/off of deinterlacing is decided by the do_deint member.

@ghost commented Sep 18, 2013

Why don't you just make a list of deinterlacer filters and try to insert each one, instead of just vf_yadif, when VFCTRL/VOCTRL fails?

Possible... but still pretty messy. And how would the user control the default deinterlacer algorithm? vf_vaapi can't take software video frames, so the user can't just put "vf=vaapi:..." in his config. Maybe vf_vaapi should be able to do that?

I think it is also possible to generate the deinterlacer list from the vf_info list by adding a deinterlace capability member or query function, plus default options for the deinterlacer.

That's a nice idea, but the result would still be inelegant: vf_lavfi is just a filter wrapper, and you can't handle yadif over vf_lavfi like this. Actually I'm thinking whether I should add additional wrapper filters for some lavfi filters (like yadif and equalizer functions), so this can be handled more elegantly.

@bylee20 commented Sep 19, 2013

vf_vaapi can't take software video frames

It should be possible in the same way vo_vaapi renders swdec frames. I'll try to implement it.
It is also possible to make the output frame a non-vaapi surface image by downloading it and storing it in memory, but uploading/downloading from GPU memory could be a large overhead.
I don't know the reason, but when I tested before, vaGetImage() performed much better than vaDeriveImage() for downloading. Maybe a driver-specific issue.
Do you think re-downloading is useful, or should we just use vo_vaapi after vf_va_vpp?
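
For context, the two readback paths being compared look roughly like this (a sketch with error handling omitted, not the code in this branch):

    #include <va/va.h>

    // Readback via vaGetImage(): copy the surface into a caller-created
    // VAImage, then map it. 'format' must come from vaQueryImageFormats().
    static void *readback_get_image(VADisplay dpy, VASurfaceID surface,
                                    VAImageFormat *format, int w, int h,
                                    VAImage *image)
    {
        void *data = NULL;
        vaSyncSurface(dpy, surface);
        vaCreateImage(dpy, format, w, h, image);
        vaGetImage(dpy, surface, 0, 0, w, h, image->image_id);
        vaMapBuffer(dpy, image->buf, &data);
        return data;  // caller: vaUnmapBuffer() + vaDestroyImage() when done
    }

    // Readback via vaDeriveImage(): map the surface memory directly, no
    // copy; how fast that is depends on how the driver lays the surface out.
    static void *readback_derive_image(VADisplay dpy, VASurfaceID surface,
                                       VAImage *image)
    {
        void *data = NULL;
        vaSyncSurface(dpy, surface);
        vaDeriveImage(dpy, surface, image);
        vaMapBuffer(dpy, image->buf, &data);
        return data;
    }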

vf_lavfi is just a filter wrapper, and you can't handle yadif over vf_lavfi like this.

I think it is possible to make vf_lavfi rebuild the filter graph with yadif when vf_lavfi is initialized without yadif and deinterlacing is requested.

Actually I'm thinking whether I should add additional wrapper filters

A wrapper looks like a good idea. Possible candidates to wrap are vf_lavfi (or vf_yadif/vf_pp) and vf_va_vpp (if this pull request is merged). For instance, in query_format, if IMGFMT_VAAPI is queried, the wrapper could select vf_va_vpp as the backend.
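
Sketching that query_format idea (assuming mpv's vf.h and img_format.h; the wrapper and its backend-selection helper are hypothetical, but the function shape follows mpv's existing video filters):

    // Hypothetical query_format for a deinterlace wrapper that picks a
    // backend per input format. select_backend() is a made-up helper that
    // would open the named filter and forward the format query to it.
    static int query_format(struct vf_instance *vf, unsigned int fmt)
    {
        if (fmt == IMGFMT_VAAPI)
            return select_backend(vf, "vavpp", fmt);
        if (fmt == IMGFMT_420P)
            return select_backend(vf, "yadif", fmt);
        return 0;  // unsupported format
    }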

In fact, I've seen somewhere that mpv will drop all filters which can be handled by lavfi (sorry if I'm wrong). In that case, a separate wrapper filter providing essential, basic functionality with simple usage is necessary for users, because vf_lavfi forces users to read the very long libavfilter reference, although deciding what is essential/basic and what is not is a subtle problem. And in my opinion, the wrapper filter does not have to be perfect. If it can handle the input images, it may just act as a passthrough filter rather than returning an error; printing a warning message is enough.

@aktau commented Sep 19, 2013

I don't know the reason, but when I tested before, vaGetImage() performed much better than vaDeriveImage() for downloading. Maybe a driver-specific issue.

It seems the libva developers themselves propose using vaGetImage() as well (not sure what happened here; possibly they optimized vaGetImage() at some point, because VLC was using it and it made VLC dog-slow. I tested it and it wasn't better than CPU decoding):

http://comments.gmane.org/gmane.comp.freedesktop.libva/761

They do not mention vaDeriveImage() in that post, though. I found a patch to VLC from 1 Jan 2013 in which they switch to vaDeriveImage() if/when it is available, which is confusing:

https://patches.videolan.org/patch/878/

EDIT: In case there is a software component in a hardware pipeline, might it not be a good idea to at least print this to stdout, so the user knows that downloading to RAM is happening, which would negate a large part of the HW-acceleration benefits?

EDIT: Alright, this is a pretty recent comment (Jul 2013) where the developers state that they think vaDeriveImage() should be faster, which probably explains why VLC tries it first: http://lists.freedesktop.org/archives/libva/2013-July/001818.html

CONCLUSION: It appears that in 2012, in the staging branch, the libva developers optimized vaGetImage() to perform better so that VLC would be "as fast as mplayer-vaapi". However, at the beginning of 2013, the VLC developers switched to vaDeriveImage() whenever possible. Maybe because the staging branch hadn't been merged yet (which is very possible; the default libva on my Debian wheezy is still very slow with VLC), or for some other reason.

@ghost commented Sep 19, 2013

It should be possible in the same way vo_vaapi renders swdec frames. I'll try to implement it.

Sounds good. Since vo_vaapi gets the vaapi surfaces directly, there should be no additional overhead involved, and the HW deinterlacer could be used with normal decoding.

It is also possible to make the output frame a non-vaapi surface image by downloading it and storing it in memory, but uploading/downloading from GPU memory could be a large overhead.

I guess this would negate all advantages of HW decoding, and not worth the trouble. (And I don't know what the hell VLC is doing. I suspect copying back works well for them in some situations, and not so well in others.)

I think it is possible to make vf_lavfi rebuild the filter graph with yadif when vf_lavfi is initialized without yadif and deinterlacing is requested.

It's not that easy. vf_lavfi takes a plain string, and that plain string can contain a complicated filter graph (not just a flat list of filters). It would probably be possible to create a yadif sub-graph, and connect it to the other filters (like the "format" filter in vf_lavfi.c/recreate_graph()), but it gets complicated fast.

I think letting the code which enables deinterlacing try a list of filters would be ok for now, and it wouldn't require that much special casing (vf_va_vpp would just be another entry on the list of deinterlacers). But, assuming vf_va_vpp can input and output software frames (if that would be implemented), there'd be the danger again of forcing inefficient conversions if the wrong filter is inserted.

In fact, I've seen somewhere that mpv will drop all filters which can be handled by lavfi (sorry if I'm wrong). In that case, a separate wrapper filter providing essential, basic functionality with simple usage is necessary for users, because vf_lavfi forces users to read the very long libavfilter reference, although deciding what is essential/basic and what is not is a subtle problem.

Yep. I'm also thinking about something like that. (Ideally, I'd actually like to throw out all mplayer filter stuff, and just use plain libavfilter. But since libavfilter doesn't allow adding custom filters, which we need for some special things, I'll probably at least keep the mplayer infrastructure.)

All code related to VASurfaceID is separated from vo_vaapi.c.
Now each of vd/vo/vf can have its own surface pool, or share it.
Two surface pools exist in the current implementation:
one is for hwdec and the other is for swdec.

@bylee20 commented Sep 19, 2013

Now vf_va_vpp can handle non-hwdec frames, too. By the way, I'm beginning to think the underscore in va_vpp looks ugly... Maybe just vf_vavpp looks better.

I guess this would negate all advantages of HW decoding, and not worth the trouble.

I agree with you. Only swdec->vaapi surface conversion is implemented now.

* remove all global objects
* share va_image_formats object through mp_vaapi_ctx
* change the position of braces
* remove all typedefs
* fix the return value in vf_open
and va-vpp is renamed to vavpp (hyphen and underscore removed)

@bylee20 commented Sep 20, 2013

I have merged master and tested auto-probing the deinterlacer. Running mpv with --hwdec=auto --vo=vaapi and activating the deinterlacer automatically inserted vavpp in my code.

@ghost commented Sep 20, 2013

So, va_surface_pool_release() destroys the pool. Also va_surface_pool_clear() is defined like this:

void va_surface_pool_clear(struct va_surface_pool *pool)
{
    for (int i=0; i<pool->num_surfaces; ++i) {
        struct va_surface *s = pool->surfaces[i];
        if (s->p->is_used)
            s->p->is_dead = true;
        else
            va_surface_destroy(s);
    }
    talloc_free(pool->surfaces);
    pool->num_surfaces = 0;
}

When the pool is cleared, and surfaces are still in use, p->is_dead is set to true. Later, pool->num_surfaces is set to 0, and the surface is actually freed as soon as va_surface_destroy() is called. So far so good.

But is there nothing anymore that triggers an assert() if a surface is referenced past the lifetime of the VADisplay? If this situation happens, it'd probably be good to detect this earlier to avoid nasty debugging if someone makes this usage error...

I don't really insist on it, though. If it's too hard or too much work, we can do without this debugging help, and making everything more complicated for it wouldn't be worth the trouble.
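
If it turns out to be cheap, one possible shape for the check (hypothetical names, building on the structs in this patch):

    #include <assert.h>
    #include <stdbool.h>

    // Hypothetical debugging aid: the mp_vaapi_ctx owns a small "display
    // alive" flag, and every surface keeps a pointer to it. The ctx clears
    // the flag when the VADisplay is terminated; releasing a surface after
    // that trips the assert instead of silently touching a dead display.
    struct va_display_guard {
        bool alive;
    };

    static void va_surface_release_checked(struct va_surface *s,
                                           struct va_display_guard *guard)
    {
        assert(guard->alive &&
               "va_surface released after the VADisplay was terminated");
        va_surface_destroy(s);  // from this patch
    }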

@ghost commented Sep 20, 2013

One last issue (but unfortunately a big one): the vaapi postprocessing API is very new, and it won't compile with older libva. So I think we need to disable vf_vavpp if the libva version is too old. That will need a configure check.

Also, there's the question whether the bob interlacer needs to be added back to the VO, under a ifdef for older libva versions. That's pretty annoying, but maybe needed.

Also, I got this with older libva:

video/vaapi.c:23:6: error: 'VA_FOURCC_RGBX' undeclared here (not in a function)
     {VA_FOURCC_RGBX, IMGFMT_RGBA},
      ^
video/vaapi.c:25:6: error: 'VA_FOURCC_BGRX' undeclared here (not in a function)
     {VA_FOURCC_BGRX, IMGFMT_BGRA},
      ^

Apparently they're new. Also, the mpv equivalent of these is IMGFMT_BGR0, if I understand this right. (No alpha component.) Does vaapi do anything with alpha at all? I know it's used with the OSD overlays, but for video?

If these issues are resolved, we can merge this.
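
On the build side, a guard along these lines would probably do. A sketch only: the version check and the fourcc fallbacks use standard libva macros, but where mpv's configure actually puts this is left open:

    #include <va/va.h>
    #include <va/va_version.h>

    // The VPP API (va/va_vpp.h) only exists from VA-API 0.34 (libva 1.2)
    // onwards, so vf_vavpp should be compiled out below that.
    #if VA_CHECK_VERSION(0, 34, 0)
    #define HAVE_VAAPI_VPP 1
    #else
    #define HAVE_VAAPI_VPP 0
    #endif

    // The RGBX/BGRX fourccs are also new; older headers can spell them out
    // via VA_FOURCC(). Whatever they map to on the mpv side should be an
    // alpha-less format (IMGFMT_BGR0 and friends), per the remark above.
    #ifndef VA_FOURCC_RGBX
    #define VA_FOURCC_RGBX VA_FOURCC('R', 'G', 'B', 'X')
    #endif
    #ifndef VA_FOURCC_BGRX
    #define VA_FOURCC_BGRX VA_FOURCC('B', 'G', 'R', 'X')
    #endif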

@ghost commented Sep 22, 2013

I reused your branch for another experiment, see vaapi-readback branch. I've made some of the changes I requested, so you can cherry-pick them. I also found a bug.

About the vaapi-readback branch: it appears users actually do want this feature, because it allows using any VO. Apparently using vdpau over libvdpau-va-gl over the vaapi xvba backend (two wrappers!) is fast enough for hw decoding, but too slow for sw decoding. Since mpv has a hard time selecting VO based on codec, this helps us a bit here. It's still an experiment, though.

@aktau commented Sep 22, 2013

About the vaapi-readback branch: it appears users actually do want this feature, because it allows using any VO. Apparently using vdpau over libvdpau-va-gl over the vaapi xvba backend (two wrappers!) is fast enough for hw decoding, but too slow for sw decoding. Since mpv has a hard time selecting VO based on codec, this helps us a bit here. It's still an experiment, though.

It's pretty cool to support it, but I assume that mpv will never accidentally select a path that does readback, no? I'm just saying that because it could lead to VLC-like issues where it definitely says it's using hardware acceleration but it's really slow anyway, and that's caused by readback (I did not know that at the time, and was searching in my own setup to see what was wrong, changing out drivers and whatnot). Another alternative is to plaster a big nice warning when the chosen pipeline implies readback, though it might be difficult to do generically (don't know about that).

@ghost commented Sep 22, 2013

I assume that mpv will never accidentally select a path that does readback, no?

In my branch, it can be explicitly requested with --hwdec=vaapi-copy, and picking another --hwdec entry will never select it. Except --hwdec=auto, which currently can select it.

Another alternative is to plaster a big nice warning when the chosen pipeline implies readback

That's a good idea.

@ghost commented Sep 22, 2013

Also, the user reported that vaapi-copy is good and useful, so maybe we want to merge that into master too.

The current state of xylosper's branch (with my changes for libav compatibility applied) is that it can be merged as-is, but it would also mean that users with a libva version below 0.34.0 can't even do bob deinterlacing.

So the question is: should we add an ifdef'ed block of code to vo_vaapi, that adds back the bob deinterlacing for old libva versions?

@aktau commented Sep 22, 2013

So the question is: should we add an ifdef'ed block of code to vo_vaapi, that adds back the bob deinterlacing for old libva versions?

Probably asking stupid questions here but: did old vaapi even support deinterlacing? Or did it do it automatically? If so, why is it lost with the current changes? If it wasn't vaapi that supported it, then how did it happen?

The thing is, with vaapi-mplayer and mpv master I noticed that interlaced files actually play decently (not nearly perfect, but OK-ish, except for some where it's still horribly noticeable), so I thought that maybe it was deinterlacing behind the scenes.

At any rate, there are still a lot of installs with libva 1.0.15 I reckon. Not sure how important they are, as they won't be running a newer mpv without also upgrading their libva/intel-driver I presume.

I do think a suitable #ifdef, if it's not too large, would be nice. It could perhaps be removed in a year or so.

@ghost commented Sep 22, 2013

Probably asking stupid questions here but: did old vaapi even support deinterlacing? Or did it do it automatically? If so, why is it lost with the current changes? If it wasn't vaapi that supported it, then how did it happen?

It supported the most simple deinterlacing algorithms. Probably was enabled by default. The thing is, the API it used was insufficient for more extended stuff, so libva got a big API extension, which vf_vavpp uses. Using the old API would be in the way and perhaps conflict with vf_vavpp badly (at runtime), so xylosper removed it.
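
For reference, the extension in question (VAEntrypointVideoProc and va/va_vpp.h) reduces to roughly this call sequence for a single bob pass. Heavily condensed sketch: error handling, buffer cleanup and surface forwarding are all omitted, and the real vf_vavpp does more than this:

    #include <va/va.h>
    #include <va/va_vpp.h>

    // One bob-deinterlace pass through the VPP API: a VAProfileNone /
    // VAEntrypointVideoProc context, a deinterlacing filter buffer, and a
    // pipeline buffer rendered into the output surface.
    static void vpp_bob_pass_sketch(VADisplay dpy, int w, int h,
                                    VASurfaceID in, VASurfaceID out)
    {
        VAConfigID config;
        VAContextID context;
        vaCreateConfig(dpy, VAProfileNone, VAEntrypointVideoProc,
                       NULL, 0, &config);
        vaCreateContext(dpy, config, w, h, 0, NULL, 0, &context);

        VAProcFilterParameterBufferDeinterlacing deint = {
            .type      = VAProcFilterDeinterlacing,
            .algorithm = VAProcDeinterlacingBob,
        };
        VABufferID filter_buf;
        vaCreateBuffer(dpy, context, VAProcFilterParameterBufferType,
                       sizeof(deint), 1, &deint, &filter_buf);

        VAProcPipelineParameterBuffer pipeline = {
            .surface     = in,
            .filters     = &filter_buf,
            .num_filters = 1,
        };
        VABufferID pipeline_buf;
        vaCreateBuffer(dpy, context, VAProcPipelineParameterBufferType,
                       sizeof(pipeline), 1, &pipeline, &pipeline_buf);

        vaBeginPicture(dpy, context, out);
        vaRenderPicture(dpy, context, &pipeline_buf, 1);
        vaEndPicture(dpy, context);
    }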

@aktau commented Sep 22, 2013

It supported the most simple deinterlacing algorithms. Probably was enabled by default. The thing is, the API it used was insufficient for more extended stuff, so libva got a big API extension, which vf_vavpp uses. Using the old API would be in the way and perhaps conflict with vf_vavpp badly (at runtime), so xylosper removed it.

Right, I understand now. I guess the funny thing is that, as of right now, libva can only do bob deinterlacing for all cards that are not Haswell (and the temporal/spatial deinterlacing for Haswell is pretty damn flaky from what I hear). And as I just heard, old vaapi also used a bob deinterlacer; how times change ;). At least it will be easy to switch to temporal/spatial when it becomes available, as Haihao promised.

I guess the final decision is up to you; such #ifdefs can be nasty to look at, but I haven't actually seen how much code this would require keeping in a zombie state. If it's little, I'd vote to keep it. As I have full control over the deployment of my systems, I use libva 1.2 and intel-driver 1.2.1 anyway; most of the older versions do a good job of totally locking up the system. Can't have that. So for me personally it's a moot point; my systems are not going to need that #ifdef.

@ghost commented Sep 22, 2013

Pushed new stuff to the branch. Includes bug fixes, and restoration of the old deinterlacer support.

I still have to test this on an Intel device (because of the mysterious planes[0] issue). Then, if xylosper agrees (and his tests are successful as well), we can merge this.

There are still some minor issues here and there, but no deal breakers.

@aktau commented Sep 23, 2013

Pushed new stuff to the branch. Includes bug fixes, and restoration of the old deinterlacer support.

Looked at the patch (basically all commits with the vaapi: prefix) and it all looks quite nice. Not knowing much about mpv code (or video playing in general), I was surprised how little #ifdef'ing there was; it all looks quite manageable. Very good job. Now we play the waiting game and hope Intel does a temporal/spatial deinterlacer soon.

@gbeauchesne mentioned in an XBMC pull request that there could be support in VPP for a pipeline approach, including a denoising filter et cetera, later, which would be nice. At least part of it seems to be in place already: http://www.mail-archive.com/intel-gfx@lists.freedesktop.org/msg17433.html

Some googling actually revealed that motion-adaptive deinterlacing is supposed to be enabled on Ivy Bridge as well, crazy. This is a very recent post (13 Sept 2013) from the Intel devs: http://www.mail-archive.com/libva@lists.freedesktop.org/msg01621.html (EDIT: that seems to be the libva staging branch; all things considered, I think this is end-of-the-year stuff)

@bylee20 commented Sep 23, 2013

Sorry to be late. It was quite a busy weekend for me.
@wm4 It seems the version issues are solved in your branch, right? I'll test with planes[1] and planes[2] and let you know the result.

Apparently they're new. Also, the mpv equivalent of these is IMGFMT_BGR0, if I understand this right. (No alpha component.) Does vaapi do anything with alpha at all? I know it's used with the OSD overlays, but for video?

I think it doesn't, at least for intel-driver. I've looked into Intel's implementation; they only support RGBX (without an alpha channel). The IMGFMT should be changed as you said.

Pushed new stuff to the branch. Includes bug fixes, and restoration of the old deinterlacer support.

Is the loop at https://github.com/mpv-player/mpv/blob/vaapi-readback/video/out/vo_vaapi.c#L229 really required?
As I commented before, the code just puts out two deinterlaced pictures without a proper interval. I think the loop is useless without proper pts handling like vo_vdpau's buffered frames, and 'first-field' is the only usable option.

@aktau The current implementation of vavpp is just a start.
In fact, when I wrote the code, I kept later extensions in mind, though I'm not sure I did the right thing. Maybe some changes will also be required to actually do it. Anyway, once vavpp is merged, the remaining work can be merged more easily.

@bylee20 commented Sep 24, 2013

@wm4 Well, only two things concern me now.

One is the planes[0] issue commented on at bylee20@e1ca250#video-vaapi-c-P399.
The other one is the previous comment about the loop in old-style deinterlacing.
Maybe this should be solved in a separate commit.

I have no time to test and update until today or tomorrow, maybe, because of my offline life. So if you have no problem with planes[1] or planes[2], I think you'd better just fix it yourself and merge the branch. I'm fine with that.

@ghost commented Sep 24, 2013

One is the planes[0] issue commented on at bylee20/mpv@e1ca250#video-vaapi-c-P399.

Yes. As I've said, when I added vaapi support, I had this strange problem where I had to set planes[0] to the surface ID (just like planes[3]); otherwise I'd get image corruption. I've since tested the new branch on the same hardware (with the same libva version, but possibly updated drivers), and it worked fine. So I'm not sure what exactly happened here.

The other one is the previous comment about the loop in old-style deinterlacing.

I don't really care about this. With your work, it's done right if compiled against newer libva versions.

@bylee20 commented Sep 24, 2013

OK, then please just merge your branch. In fact, I don't want to pay much attention to legacy code either. Thank you for your kind responses and great work.

@aktau commented Sep 24, 2013

I don't really care about this. With your work, it's done right if compiled against newer libva versions.

Alright, but then it's cleaner to take out the #ifdefs and require a minimum libva version (1.2.0) instead of having non-working zombie code.

@ghost commented Sep 24, 2013

OK, I'll look into merging the branch. I'll squash most of the commits (including my cosmetic fixes); is that OK, xylosper?

instead of having non-working zombie code.

Well, it's Intel's code; at least I can blame Intel if it's not correct. But seriously, I have no idea what they were thinking here. If you mean my nonsense changes, of course I'll restore them to what the code does in git master.

@bylee20 commented Sep 24, 2013

Yes. Please do that.

@aktau commented Sep 24, 2013

Well, it's Intel's code; at least I can blame Intel if it's not correct. But seriously, I have no idea what they were thinking here. If you mean my nonsense changes, of course I'll restore them to what the code does in git master.

Ok, great.

@ghost commented Sep 24, 2013

Made some experiments with the planes[0] issue.

Putting this in va_surface_wrap() works:

    img.planes[0] = (uint8_t*)(uintptr_t)&surface->rt_format; // completely arbitrary, but different for each surface
    img.planes[1] = (uint8_t*)(uintptr_t)surface;
    img.planes[3] = (uint8_t*)(uintptr_t)surface->id;

(va_surface_in_mp_image() has to access planes[1] instead of planes[0] of course.)

While this doesn't (gives heavy artifacts):

    img.planes[0] = "hi";
    img.planes[1] = (uint8_t*)(uintptr_t)surface;
    img.planes[3] = (uint8_t*)(uintptr_t)surface->id;

Really mysterious. Seems like something (libavcodec? libva?) might use planes[0] as part of picture reordering or something similar? I don't get it.

@ghost commented Sep 24, 2013

Updated my local system to a newer libva, and vf_vavpp doesn't work:

[vavpp] vaCreateConfig(): the requested VAProfile is not supported

Seems like just because the VA backend can do bob deinterlacing, it doesn't necessarily support that through VPP.

I'm going to add a crappy hack so that users can enable VO based deinterlacing with a vo_vaapi suboption.

Intel is horrible.
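
A more defensive probe than going by the libva version alone is to ask the backend whether it exposes the video-processing entrypoint at all (a sketch; this is exactly the case the vdpau backend fails here):

    #include <stdbool.h>
    #include <stdlib.h>
    #include <va/va.h>

    // Returns true only if the driver reports VAEntrypointVideoProc, i.e.
    // VPP is actually usable, regardless of the libva header version.
    static bool have_video_proc(VADisplay dpy)
    {
        int num = vaMaxNumEntrypoints(dpy);
        VAEntrypoint *entrypoints = malloc(num * sizeof(*entrypoints));
        if (!entrypoints)
            return false;
        bool found = false;
        if (vaQueryConfigEntrypoints(dpy, VAProfileNone, entrypoints, &num)
                == VA_STATUS_SUCCESS) {
            for (int i = 0; i < num; i++) {
                if (entrypoints[i] == VAEntrypointVideoProc)
                    found = true;
            }
        }
        free(entrypoints);
        return found;
    }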

@bylee20 commented Sep 24, 2013

Really mysterious. Seems like something (libavcodec? libva?) might use planes[0] as part of picture reordering or something similar? I don't get it.

Mysterious. I don't think the problem originates from the libva side, because from a quick look, ffmpeg never passes planes[0] to libva.

I'm going to add a crappy hack so that users can enable VO based deinterlacing with a vo_vaapi suboption.

So we have to keep the old-style deinterlacing even if vf_vavpp is successfully compiled? What the...

@aktau commented Sep 24, 2013

Seems like just because the VA backend can do bob deinterlacing, it doesn't necessarily support that through VPP.

This really seems exceedingly strange. I'd always read that bob deinterlacing was enabled for SNB/IVB/HSW. What kind of system do you have?

@ghost commented Sep 24, 2013

What kind of system do you have?

nvidia, with the vdpau backend. There's hope this doesn't happen on native Intel systems.

@aktau commented Sep 24, 2013

nvidia, with the vdpau backend. There's hope this doesn't happen on native Intel systems.

Oh, right. I'm building mpv-build as we speak, with a checkout of the vaapi-readback branch. What command lines did you use exactly?

@ghost commented Sep 24, 2013

mpv file.mkv --vo=vaapi --hwdec=auto --deinterlace should be enough to confirm the fact.

@aktau commented Sep 24, 2013

I wish I could do faster turnarounds, though; for local changes I don't need the Debian package, but I still want the statically linked local ffmpeg. What setup do you use for that?

@aktau commented Sep 24, 2013

Tested with a video file found here: http://www.auby.no/files/video_tests/

The video played back fine, with no errors that I noticed.

output:

➜  Downloads  mpv h264_720p_hp_5.1_3mbps_vorbis_styled_and_unstyled_subs_suzumiya.mkv --vo=vaapi --hwdec=auto --deinterlace
Playing: h264_720p_hp_5.1_3mbps_vorbis_styled_and_unstyled_subs_suzumiya.mkv
Detected file format: Matroska
Clip info:
 TITLE: The Melancholy of Haruhi Suzumiya: Special Ending
[stream] Video (+) --vid=1 'The Melancholy of Haruhi Suzumiya: Special Ending' (h264)
[stream] Audio (+) --aid=1 --alang=jpn (*) '2ch Vorbis' (vorbis)
[stream] Subs  (+) --sid=1 --slang=eng (*) 'Styled ASS' (ass)
[stream] Subs      --sid=2 --slang=eng 'Styled ASS (Simple)' (ass)
[stream] Subs      --sid=3 --slang=eng 'Plain SRT' (subrip)
libva info: VA-API version 0.34.0
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_0_34
libva info: va_openDriver() returns 0
Trying to use hardware decoding.
Selected video codec: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 [lavc:h264]
Selected audio codec: Vorbis [lavc:vorbis]
AO: [alsa] 48000Hz stereo 2ch floatle
VO: [vaapi] 704x480 => 853x480 vaapi 
AV: 00:00:00 / 00:01:11 (0%) A-V:  0.001
Opening video filter: [lavfi graph=yadif]
Could not find matching colorspace - retrying with -vf scale...
Opening video filter: [scale]
The selected video_out device is incompatible with this codec.
Try appending the scale filter to your filter list,
e.g. -vf filter,scale instead of -vf filter.
Attempted filter chain:
 [scale] ??? -> ???
 [lavfi] ??? -> ???
 [vo] ???
VO: [vaapi] 704x480 => 853x480 vaapi 
Opening video filter: [yadif]
Could not find matching colorspace - retrying with -vf scale...
Opening video filter: [scale]
The selected video_out device is incompatible with this codec.
Try appending the scale filter to your filter list,
e.g. -vf filter,scale instead of -vf filter.
Attempted filter chain:
 [scale] ??? -> ???
 [yadif] ??? -> ???
 [vo] ???
VO: [vaapi] 704x480 => 853x480 vaapi 
Opening video filter: [vavpp]
VO: [vaapi] 704x480 => 853x480 vaapi 
AV: 00:01:08 / 00:01:11 (94%) A-V:  0.000


Exiting... (End of file)

BTW: the styled subs did not work in mplayer, but rendered beautifully in mpv; great job with that! (It's probably also the reason the average CPU usage of mplayer for that clip is 4%, while mpv's is 7%.)

ghost pushed a commit that referenced this pull request Sep 24, 2013
Merged from pull request #246 by xylosper. Minor cosmetic changes, some
adjustments (compatibility with older libva versions), and manpage
additions by wm4.

Signed-off-by: wm4 <wm4@nowhere>

@ghost commented Sep 24, 2013

@xylosper: I pushed the branch vaapi-vavpp. This is exactly the same as vaapi-readback, except most commits are squashed. Is this ok to merge?

@ghost commented Sep 24, 2013

@aktau: the video filter insertion is a bit "bumpy", because it tries yadif first. But apart from that, it appears to be working for you?

@aktau commented Sep 24, 2013

@wm4 The video displays fine. However, what I really need is a properly interlaced video with fast-moving images, so that I can see whether the filter is actually doing its job. I have one somewhere on one of my machines, but I can't find it...

@aktau commented Sep 24, 2013

@wm4 @xylosper

I just found a good example of an interlaced video; here are the results:

  • without --deinterlace: the video plays back fine, but I can clearly see the combing effect, even when the actor moves a little
  • with --deinterlace: the video plays back fine and the combing effect is gone

So I can safely say: good job, everything seems cool.

EDIT: I can also say that the old code (both mpv and mplayer) with deint=2 (supposedly bob deinterlacing) is actually very bad here. The video gets choppy, and even though the combing effect seems to be gone, everything just bobs up and down all the time (very bad bob deinterlacing, this). So this patch is a marked improvement in image quality.

@aktau commented Sep 24, 2013

Actually, I would go so far as to say that the old way of deinterlacing is practically useless; I think it might provoke epilepsy.

@ghost commented Sep 24, 2013

That's probably caused by that loop we were talking about. (lol)

@ghost commented Sep 24, 2013

OK, I just decided to remove this dumb loop and pushed a commit to vaapi-vavpp.

@bylee20 commented Sep 25, 2013

@wm4 Yes, it's OK with me. By the way, may I remove the branch I made just for this pull request?

ghost pushed a commit that referenced this pull request Sep 25, 2013
Merged from pull request #246 by xylosper. Minor cosmetic changes, some
adjustments (compatibility with older libva versions), and manpage
additions by wm4.

Signed-off-by: wm4 <wm4@nowhere>

@ghost commented Sep 25, 2013

OK, merged. Thanks for your work!

By the way, may I remove the branch I made just for this pull request?

Yes, it's not needed anymore.

@ghost closed this Sep 25, 2013
@bylee20 deleted the upstream branch September 26, 2013 01:17
This pull request was closed.