Facilitate HDMI frame packing for 3D output #1945
Comments
|
Uh, waiting for that command line. |
|
command line added :-) |
|
This lavfi graph should work via the mpv lavfi filter too. |
|
Yes, it does work like this:
This is, of course, not cheap on the CPU (about 0.7 i7 cores) - "perf top" shows these non-surprising top contributors to CPU usage:
I'm not sure whether CPU usage could be improved much by a more dedicated method than this filter chain. But it's at least good to know that it generally works this way. |
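The exact graph used above is not preserved in the thread; a plausible sketch (an assumption, not the commenter's actual command) built from the standard libavfilter crop/pad/overlay filters, run through mpv's lavfi wrapper, might look like this (mpv's bracket quoting for the graph may vary by version):

```shell
# Hypothetical sketch: split a 1920x2160 above-below frame into its two
# eye images, pad the top half to 1920x2205 (pad fills with black), and
# overlay the bottom half at line 1125 (1080 + 45 blank lines).
mpv --vf=lavfi='[split [a][b];
  [a] crop=1920:1080:0:0 [top];
  [b] crop=1920:1080:0:1080 [bot];
  [top] pad=1920:2205:0:0 [padded];
  [padded][bot] overlay=0:1125]' input_1080p_abl.mp4
```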
|
Possibly it could be done with a vo_opengl shader (maybe even user shaders, as soon as we get them). |
|
We now have user shaders. |
|
Sounds interesting! I have to admit that as of yet, I am not proficient in GLSL - the last time I did some X11 graphics driver programming, simple latches provided "state of the art" 2D acceleration, and the Amiga's Blitter was top-notch high tech :-) Can you refer to some GLSL sample that would be a good starting point to derive an "insert 45 blank lines" shader script? |
|
Other than the hints in the manpage, these are the only examples I know of: https://github.com/haasn/gentoo-conf/tree/master/home/nand/.mpv/shaders |
|
Hmm. I looked at these examples, all of which define a "sample()"-function doing something, then tried to figure out from http://www.opengl.org/registry/doc/GLSLangSpec.4.50.pdf when / in what part of the OpenGL pipeline this "sample()" function will be called and with what parameters. But the reference document doesn't mention any "sample()" function. So I'm somewhat lost where to read on... |
Doubtful. User shaders have zero control over the output size - at best you could do something like using vf_expand to add 45 extra black lines at the bottom, and then use a user shader to move the bottom half of the image downwards. But it would not exactly be an ideal solution. The best place to do something like this would probably be inside vo_opengl itself. Or perhaps somebody can come up with a good way to add “change the output size” support to the user shader API.
It's not part of OpenGL. It's something we invented. The parameters and what they mean are all documented in the man page. |
Huh? I thought letting the user have control over the output was a central point of the user shaders. |
No - it was considered, but I never came across a decent way to specify an API for this kind of thing. (It would probably require embedding some sort of mathematical expression language as well) The main point of the current API is letting you modify the output - eg. deband, denoise, sharpen, grayscale - that kind of stuff. Basically anything that preserves the frame size but just adds some sort of post processing can be done currently. The problem here is that we want to actually rescale the video from (w,h) to (w,h+45) while running our shader. Note: There are conceptual issues with this particular use case as well, it's not just the matter of figuring out the API. In particular, what should happen when a shader that runs after upscaling (like this) wants to modify the output size? There are at least four answers I can think of:
Therefore my suggestion, if one really wanted this feature in mpv, would be to basically take the approach 4 but code the logic for it into vo_opengl itself, eg. just subtract 45 from the “output size” if this mode is enabled, which works because we already know precisely how to reverse the computation. Incidentally, that would be another way to implement 4: make sure all size transformations are reversible isomorphisms, that way you can “compute forwards” from the video size for everything before the upscaler, “compute backwards” from the output size for everything after the upscaler, and tune the upscaling factor (which is the only parameter we can control arbitrarily) to make the two ends meet. |
|
Another attempt might be to send this feature request to ffmpeg. They already have a stereo3d filter, but no frame packing output. See http://svn.0x00ff00ff.com/mirror/http/ffmpeg.zeranoe.com/builds/win32/shared/ffmpeg-20140211-git-6c12b1d-win32-shared.7z/doc/ffmpeg-filters.html#stereo3d So only the needed output mode would need to be added. But I don't know if this would improve performance. Also note that for 720p, only 30 lines are inserted.
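The 45- and 30-line figures mentioned above follow from the HDMI 1.4a timings: the blank "active space" between the two eye images equals the vertical blanking of the underlying 2D mode. A quick sanity check using the standard CEA-861 timing values:

```shell
# Per HDMI 1.4a frame packing, the packed frame height is
# 2*V_active + V_blank, where V_blank = V_total - V_active
# of the underlying 2D timing.
packed_height() {
    v_active=$1
    v_total=$2
    echo $(( 2 * v_active + (v_total - v_active) ))
}

packed_height 1080 1125   # 1080p: 1125-1080 = 45 blank lines -> 2205
packed_height 720  750    # 720p:   750-720  = 30 blank lines -> 1470
```

This is why frame-packed 1080p is signaled as 1920x2205 and frame-packed 720p as 1280x1470.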
Probably would make things slower and preclude hw decoding.
This frame packing stuff seems rather arbitrary. |
|
With ffmpeg 3.1 (not checked with 3.0) this works with: For mpv the |
|
You can use the libavfilter directly via -vf lavfi, but it could be added to the mpv wrapper too. |
|
On 07/29/2016 02:09 PM, Ferdinand Thiessen wrote:
Thanks for this hint! - The commit that added this option -
|
|
I updated documentation. |
|
Because the ffmpeg developers only support Full-HD 3D content for HDMI stereo3d output, I had to write this script: Maybe it helps somebody. |
|
Simple resolution switching is not enough. I've just manually switched to 1920x2205@24 (also tried @30) on Windows 10 with an NVidia GPU and an LG LCD TV with passive 3D support. Nothing happened. I also tried to output correct frame-packed video (with 45 blank lines in the middle) with Bomi, still no success. As far as I can see in the HDMI specs, one needs to send special packets with 3D image information, and by the way, they can indicate any 3D format; e.g. regular over/under (not half) should work too. More observations on this topic in case someone is interested:
So maybe OP's TV ignores the HDMI specs and switches to 3D only by checking the resolution, or the Linux kernel/drivers check the resolution and send the HDMI InfoFrames (can someone prove this?). Anyway, simply saying that frame packing will make the TV auto-detect 3D is not right, and to make this cross-platform, someone has to investigate how to do this on Windows. |
|
On 01/09/2017 04:05 AM, Rostislav Kirillov wrote:
Simple resolution switching is not enough.
I meanwhile also found that this seems to depend on the driver or GPU.
So maybe OP's TV ignores HDMI specs and goes to 3D only by checking resolution
Rather than the TV, it seems to be driver-dependent: When I connect an older computer using
an NVidia GPU to my TV, setting the resolution alone is enough to switch it into 3D mode.
When I connect the HDMI output of a newer Intel GPU to the very same TV, the correct
resolution is still recognized by the TV, but it does not switch into 3D mode.
So it seems this feature is after all more complicated to implement than I assumed.
And, given the decreasing relevance of 3D output in general, it might just not be
worth following up on this.
|
|
Yes, it seems to be driver-related; radeonSI works, and some Intel APUs also work. 3D devices tested:
Graphics cards tested:
Intel:
I am not sure if it is a bug in the Nvidia drivers or a feature of the AMD / Intel drivers. What do you mean by
I cannot see a drop in 3D media publishing; e.g. there are still a lot of new 3D Blu-ray discs being published. |
|
On 01/09/2017 05:22 PM, Ferdinand Thiessen wrote:
What do you mean by
decreasing relevance of 3D
I cannot see a drop in 3D media publishing; e.g. there are still a lot of new 3D Blu-ray discs
being published.
Well, all 3D Blu-ray discs published in 2016 (with, AFAIK, only one exception) did
not contain material actually recorded using two sensors/lenses, but 2D movies that
were only artificially rendered into some sort of stereoscopy by computers - and that
kind of artificial computation is similar to the 2D->3D conversion TVs can do on
their own, based on a 2D input (thus not requiring mpv support).
Also, the latest high-end TVs from different vendors no longer provide a "3D" feature,
simply because it provides greater marketing value at lower cost for them not to put
a polarization filter in front of the display, where it would filter away valuable
luminance for the 2D use case - which is currently the focus of the "HDR" feature
advertisements for new TVs.
These two are strong indications of the decreasing relevance of 3D.
|
Sorry, but that is not true. Real 3D movies released are:
Next ones coming this year are Trolls and Moana. And e.g. the Marvel movies are quite well converted (not the bullshit TVs do on their own when converting 2D->3D; they invest some time into a good-looking 3D conversion). But I think this is off-topic. |
|
As far as I'm aware, HDMI does have signaling for whether a stream is 3D or not. Drivers might be setting this flag randomly. Also, at least NVidia has a vendor-specific API to signal this. I'm not aware of any standard API Linux has for this, but Windows might with D3D11. |
|
The drm kernel module actually supports setting the relevant 3D modes, but the userspace software (xrandr) currently has no corresponding switches. The hack I used last year was to alter the drm code slightly so that it outputs the flags if the requested screen size has a form normally used for frame packing. See this "Ask Ubuntu" question. |
|
@lvml @frafl Hello! Not sure that you are still into that stuff, but I didn't find any better place to ask questions regarding stereoscopic video in Linux, so... I am on Kubuntu Cosmic with nvidia-driver 410.57 installed for a GTX 1080. After some trial-and-error attempts, I was able to switch my Samsung TV to 3D mode with the following command:
|
Since this feature request might sound more difficult than it actually is, I'll start with a...
Management summary:
This is a request to implement an mpv option that causes 1920x2160 sized videos to be played as a 1920x2205 sized output, with 45 blank lines being inserted between the "upper" and "lower" half of the input video.
Rationale:
There are different methods of sending 3D Full-HD videos to TVs via HDMI, and one is readily usable with mpv: Send frames twice the size - 1920x2160 or 3840x1080 - to the TV, where the pictures for the left and right eye are either at the "top / bottom" or "left / right" of the double sized image.
The downside of this method is that the user has to manually configure the TV to switch into the correct "3D display mode" and back to 2D when playback has finished.
There is a more convenient method, specified in the HDMI 1.4a standard and implemented by almost all contemporary TVs: sending a so-called "frame packed" signal via HDMI, which is basically a top-bottom arranged 1920x2205 pixel image with an unused space of 45 blank lines in between. (See page 8 of the HDMI 1.4a 3D specification.)
This will cause the TV to automatically switch into the appropriate 3D mode. And once the display mode is changed back to 1920x1080 again, the TV will automatically switch back to 2D.
To try this feature:
Obtain some sample 1080p 3D video, e.g.:
http://distribution.bbb3d.renderfarming.net/video/mp4/bbb_sunflower_1080p_30fps_stereo_abl.mp4
Configure your X11 server to know an appropriate "Modeline" for the 1920x2205 mode, e.g. by putting this into your /etc/X11/xorg.conf:
Section "Monitor"
    ...
    Modeline "1920x2205@24" 148.32 1920 2558 2602 2750 2205 2209 2214 2250 +hsync +vsync
    ...
EndSection
...
Section "Screen"
    ...
    Option "ModeValidation" "AllowNon60HzDFPModes, NoEdidModes, NoEdidDFPMaxSizeCheck, NoVertRefreshCheck, NoHorizSyncCheck, NoMaxSizeCheck, NoDFPNativeResolutionCheck"
    ...
EndSection

Use xrandr to switch the display into this mode (I would certainly automate this via mpv-plugin-xrandr), like this:
(Your TV should switch into 3D mode at this point.)
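The original xrandr invocation is not preserved here; assuming the modeline from the xorg.conf snippet above and an output named HDMI-0 (an assumption - check the output of a bare `xrandr` for your actual output name), it would look roughly like:

```shell
# Output name HDMI-0 is an assumption; the modeline values are taken
# from the xorg.conf snippet above.
xrandr --newmode "1920x2205@24" 148.32 1920 2558 2602 2750 2205 2209 2214 2250 +hsync +vsync
xrandr --addmode HDMI-0 "1920x2205@24"
xrandr --output HDMI-0 --mode "1920x2205@24"
```

Switching back afterwards is the same last command with your normal 2D mode, e.g. `xrandr --output HDMI-0 --mode 1920x1080`.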
Replay the video with ffplay like this:
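The ffplay command itself is not preserved; a plausible reconstruction (a hypothetical filter graph built from the standard crop/pad/overlay lavfi filters, using the sample file linked above) that inserts the 45 blank lines would be:

```shell
# Hypothetical reconstruction: top half of the above-below frame stays
# at the top, the bottom half is moved down to line 1125 (1080 + 45),
# and pad fills the 45-line gap with black.
ffplay -vf 'split [a][b];
  [a] crop=1920:1080:0:0 [top];
  [b] crop=1920:1080:0:1080 [bot];
  [top] pad=1920:2205:0:0 [padded];
  [padded][bot] overlay=0:1125' bbb_sunflower_1080p_30fps_stereo_abl.mp4
```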
Afterwards, switch back to your normal display mode:
The downside of using ffplay like this is that (a) it's not mpv and misses tons of mpv's features, and (b) it's a horrible burden on the CPU, as inserting the 45 blank lines this way is awfully inefficient.
(There is one other player supporting this - bino3d - which is also not quite as good as mpv.)
And that is why I would suggest implementing a more convenient, efficient way of inserting these 45 lines in mpv.