
Dolby Vision with wrong colors #7326

Closed

Doofussy2 opened this issue Jan 5, 2020 · 59 comments

Comments
@Doofussy2

mpv version and platform

Windows 10

mpv 0.31.0-11-g49cbc5017c Copyright © 2000-2019 mpv/MPlayer/mplayer2 projects
  built on Sun Dec 29 09:31:44 +08 2019
  ffmpeg library versions:
  libavutil       56.38.100
  libavcodec      58.65.100
  libavformat     58.35.101
  libswscale      5.6.100
  libavfilter     7.69.101
  libswresample   3.6.100
  ffmpeg version: git-2019-12-28-6399eed4

Reproduction steps

Play some Dolby Vision videos. Not all videos play this way, and I tested with and without an mpv.conf. Same result.

Expected behavior

Play with correct color

Actual behavior

Picture plays in purple and green
[screenshot: mpv-shot0001]

Log file

Portable mpv log.txt

Sample files

Can be found here

@Akemi
Member

Akemi commented Jan 5, 2020

reminds me a bit of #4340, though it isn't the same issue.

@haasn
Member

haasn commented Jan 5, 2020

There's little we can do here because Dolby Vision is a closed spec.

There's a branch that sort of generates the approximate result, but not for all files, and not quite correctly. The specifics of the encoding depend on proprietary metadata which we haven't managed to decode / reverse engineer yet.

@Doofussy2
Author

Ah, OK, so it's going to be hit and miss whether a video will play correctly. Am I to understand that you guys are actually working on it?

@haasn
Member

haasn commented Jan 5, 2020

It would be hit or miss whether a video will play "vaguely watchable but obviously wrong" or "completely and utterly wrong".

I'm not actively working on it, no, unless any progress is made on reverse engineering the proprietary Dolby Vision metadata. If you want to help, you could gather sample clips that fall into different "categories" (not all profile 5 media has the same kind of color shift) and see what parts of the Dolby black-box metadata change in response. That might help pinpoint where in the metadata the color space info is.

@Doofussy2
Author

Doofussy2 commented Jan 5, 2020

Well, the first comparative examples I can give are:

Correct color

https://www.demolandia.net/downloads.html?id=1039472921
[screenshots: Annotation 2020-01-04 175831, mpv-shot0002]

Incorrect color

https://www.demolandia.net/downloads.html?id=543675873
[screenshot: Annotation 2020-01-04 175831]

I don't know if that is useful, but there does appear to be a disparity in the HDR format. Note the BL+RPU vs. BL+EL+RPU. I have no idea what difference the 'EL' makes.

Of the samples I have, the files that have incorrect color don't have that 'EL'.

@Doofussy2
Author

Doofussy2 commented Jan 5, 2020

OK, I grabbed more DV samples. The pattern holds. All of the videos that don't have that 'EL' don't play correctly.

@ghost

ghost commented Jan 5, 2020

I just wanted to say: FUCK Dolby and their racketeering-like patent trolling. Fuck them for re-introducing proprietary bullshit into modern video technology too. I wish they'd fucking die already.

@Doofussy2
Author

Well yeah, there's that lol. I kind of agree. Proprietary 'secret sauce' stuff is a bit nonsensical.

@Doofussy2
Author

Stream-dumping videos from here: the ones that play correctly have HDR10 metadata included, so obviously they play correctly. The ones that don't play correctly also don't have 'EL' in the HDR format.

@haasn
Member

haasn commented Feb 10, 2020

@ValZapod ITU-R ICtCp != Dolby Vision IPT

@jeeb
Member

jeeb commented Feb 26, 2020

@ValZapod Dolby Vision Profile 5 is not standard ICtCp. Dolby made sure you cannot just implement a standard thing they published, and get their newest marketing buzzword implemented.

With regards to the link, fun. Especially the part about non-thread-safety. Might be worth checking, since it does contain references to reshaping.

Some research was done earlier into this stuff https://code.videolan.org/videolan/libplacebo/commit/f850fa2839f9b679092e721068a57b0404608bdc , which led to semi-OK results, but also weird switches at certain frames. A la https://0x0.st/zIBI.png / https://0x0.st/zIBl.png
(this is based on Dolby's profile 5 test sample from developer.dolby.com/tools-media/sample-media/video-streams/dolby-vision-streams/)

@haasn
Member

haasn commented Feb 26, 2020

Also note, we are missing three parts here: first, pivots and second-level polynomials; second, NLQ (nonlinear quantization) approximation; third, the crosstalk matrix! See

We're not missing the crosstalk matrix; we just bake it into the IPT coefficient matrix. (E.g. standard ITU-R ICtCp has a crosstalk of c=0.02 embedded in the standardized coefficients.)
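To illustrate the "baking in" described here: a crosstalk matrix with constant c simply left-multiplies the RGB-to-LMS matrix, so both steps collapse into a single matrix. A minimal Python sketch (the 3x3 RGB-to-LMS values below are placeholders, not the real ICtCp or Dolby coefficients; only c = 0.02 comes from the comment above):

```python
# Sketch: folding a crosstalk matrix into an RGB->LMS conversion matrix.
# The rgb_to_lms values are illustrative placeholders; only the crosstalk
# constant c = 0.02 comes from the ICtCp discussion above.

def matmul3(a, b):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def crosstalk(c):
    """Crosstalk matrix: mixes a fraction c of the other two channels in."""
    return [[1 - 2*c, c, c],
            [c, 1 - 2*c, c],
            [c, c, 1 - 2*c]]

# Any normalized RGB->LMS matrix (rows sum to 1); values are placeholders.
rgb_to_lms = [[0.412, 0.524, 0.064],
              [0.167, 0.720, 0.113],
              [0.024, 0.075, 0.901]]

# "Baking in" the crosstalk: one combined matrix instead of two steps.
combined = matmul3(crosstalk(0.02), rgb_to_lms)

# Row sums are preserved, so white (1, 1, 1) still maps to (1, 1, 1).
for row in combined:
    assert abs(sum(row) - 1.0) < 1e-9
```

Since the crosstalk matrix's rows each sum to 1, folding it in preserves the white point, which is why a standard can publish only the combined coefficients.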

But yes, we're missing the polynomial reshaping bits. That's the stuff that there's no public documentation for, especially for the exact bitstream. (Though there is that one patent which provides an example bitstream that doesn't appear to match reality? Unless I'm mistaken)

If you can figure out how to get these coefficients into the decoding pipeline somehow (e.g. as side data) we can implement the polynomial bits with relative ease.

@igorzep

igorzep commented Feb 27, 2020

@ValZapod Dolby Vision Profile 5 is not standard ICtCp.

Sure, it is not. But it was the basis for ICtCp. Dolby did good work researching the color spaces; ICtCp just adds the last bit: optimising the space so it better fits into the T/P range, so all code points are fully utilised for a BT.2020 color space source.

Dolby made sure you cannot just implement a standard thing they published, and get their newest marketing buzzword implemented.

Looking at how the specs are evolving, it rather seems they simply released a raw, unfinished product, which was not acceptable to put into standards.

With regards to the link, fun. Especially the part about non-thread-safety. Might be worth checking, since it does contain references to reshaping.

Thanks :) The part about thread safety is probably outdated already, just left there undeleted from my first experiments on composing a DVp5-compatible file. :)

Some research was done earlier into this stuff https://code.videolan.org/videolan/libplacebo/commit/f850fa2839f9b679092e721068a57b0404608bdc , which led to semi-OK results, but also weird switches at certain frames. A la https://0x0.st/zIBI.png / https://0x0.st/zIBl.png

I did mine back in 2018 and found quite a lot of information about the DVp5 internals (which in large part are identical to DVp7, which is pretty well documented in patents). The hardest part was the IPTPQc2 color space (it took ~two months just to find out its (semi-official) name, so further docs could be found that specified the crosstalk and reshaping matrices, which were not embedded into the RGB-to-LMS matrix the same way as is done with ICtCp).

I am doing DVp5 not for purposes of decoding but for encoding (creating test patterns). But the problems I face are exactly the same, as the process is the exact reverse. The progress so far... it is not ideal; I still miss a point on why the encoded video looks undersaturated on the TV. One guess is that the data is assumed to cover the BT.2020 color space while the TV is more like P3... And contrary to other DV formats, DVp5 carries no data about the mastering display color space, so the TV can't decide where to compress and where to just trim. Still, the colours are right and I don't see any shifts (other than the lowered saturation) on several ColorChecker patterns. So, in this sense my code is more correct, and is worth checking and comparing approaches against.

@ghost

ghost commented Feb 27, 2020

Dolby did good work

Please refrain from comments that could be interpreted as extremely offensive.

@igorzep

igorzep commented Feb 27, 2020

Hey! It is not Dolby that created IPT; it comes from a dissertation you can download on Sci-Hub!
Ebner & Fairchild (1998)

Ebner & Fairchild's IPT is not the Dolby IPTPQc2 either. Dolby adopted the way IPT is composed, but over a PQ transfer function, which in the end is a completely different color space designed for different objectives (or just more objectives than the original had).

Here is the patent from Dolby:
http://www.freepatentsonline.com/y2018/0131938.html

@igorzep

igorzep commented Feb 27, 2020

But yes, we're missing the polynomial reshaping bits. That's the stuff that there's no public documentation for, especially for the exact bitstream. (Though there is that one patent which provides an example bitstream that doesn't appear to match reality? Unless I'm mistaken)

Actually it matches reality pretty well. This patent specifies profile 7 (used on BD discs). But with two tiny additions it also matches profile 5 (used in streaming). Those additions are:
a) rpu_format = 18
b) no NLQ section, as there is no EL layer in profile 5
see https://github.com/testing-av/testing-video/blob/9256b779f7721bd3dbdceedab7c65b6f64bbab79/patterns/vendor/dv/u4k/src/test/java/band/full/test/video/patterns/u4k/dv/DolbyVisionProfile5.java#L95

@haasn
Member

haasn commented Feb 27, 2020

Fantastic work on digging all of that up. I'm sure we can benefit greatly from your work.

I'm a bit preoccupied by life events these days so I don't have the energy left over to look into it, but I've cloned all the code just in case.

@haasn
Member

haasn commented Feb 27, 2020

We inherently support dynamic metadata because our shaders are re-generated every frame based on the metadata attached to each individual frame. (Although we might want to double check that we don't trigger unnecessary shader recompilation as a result of changes to static HDR metadata.) So it depends entirely on how fast libavcodec updates the side data, which, as of when I last looked at this code, gets cached and re-sent per frame.

We don't support DTM at all. Patches welcome. (Especially with regards to making sure this information is exposed as avframe side data)

@igorzep

igorzep commented Feb 28, 2020

@igorzep Can you show me where that matrix is in US20180131938A1?
I mean, on page 11, subsection 12 there is that matrix with 1, 1, 1, but instead of 0.13318 it is actually 0.1132. Where did you get that one (it is the only wrong number in the matrix)???

public static final short[] IPTPQ_YCCtoRGB_coef = {
        8192, 799, 1681, // 1.0,
        8192, -933, 1091, // 1.0,
        8192, 267, -5545, // 1.0,
    };

This is the actual data in the RPU of Dolby Vision profile 5 files. The number in the patent seems to be an error (deliberate or not). I've found a lot of such errors in many published pre-calculated matrices, even in some ITU recommendations. Some of them are numerical errors from calculating them in low-precision floating point; some are typos made when transferring the data. This is why I always try to find what data a matrix is derived from and calculate the values myself.
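For illustration, a small Python sketch of rescaling these fixed-point RPU coefficients to floats. The scale is an assumption inferred from the "// 1.0" comments in the snippet above (8192 = 2^13 encodes 1.0); with that scale, the rescaled values reproduce the 0.13318 figure discussed earlier in the thread:

```python
# Sketch: rescaling the RPU's fixed-point YCCtoRGB coefficients to floats.
# Assumption (from the "// 1.0" comments above): 8192 == 2**13 encodes 1.0.

IPTPQ_YCCtoRGB_coef = [
    8192,  799,  1681,
    8192, -933,  1091,
    8192,  267, -5545,
]

SCALE = 1 << 13  # 8192 -> 1.0

# Reshape the flat coefficient list into a 3x3 float matrix.
matrix = [[c / SCALE for c in IPTPQ_YCCtoRGB_coef[r*3:(r+1)*3]]
          for r in range(3)]

# 1091/8192 ~= 0.13318, the value the patent misprints as 0.1132.
assert abs(matrix[1][2] - 0.13318) < 1e-4
```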

@haasn
Member

haasn commented Feb 29, 2020

@ValZapod We are software, not hardware. HDMI standards are irrelevant here, unless you care about metadata passthrough (I don't).

Also, BTW, do we support superwhite in YCbCr, where there are extra white values (more than 235 or 240)? Is there a standard for this?

No. We intentionally clip incoming signals to the nominal peak, because not all gamma curves are well-defined outside the interval [0.0, 1.0]. We could make exceptions for those curves, but there's never been a pressing need to display sub-blacks or super-whites in any form of media accessible to us, and quite honestly, I don't see what the point is supposed to be.

ITU-R standards seem to "allow" it, but they don't describe under which circumstances it would be useful for content production. My understanding is that the display of these value ranges is reserved for signal-level calibration purposes, and I don't think that's a use case that really applies to us.

@igorzep There's some room for interpretation of whether or not we should use the rounded or "exact" values of constants from standards, given that what matters is not the "theoretical" number but the number that will be used during production.

@TimeAlter

TimeAlter commented Mar 10, 2020

Hi guys, I've been following this thread for a couple of days. I've been doing some testing on the decode process for a Dolby Vision profile 5 video, which does not involve NLQ, as you mentioned previously, since those videos do not contain an EL (enhancement layer), as far as I understand. My problem is that I'm kind of confused about the reshaping process of the stream. My understanding is that once you retrieve the data from the YUV 4:2:0, you should reshape the values using the polynomial coefficients to retrieve the IPT values, as a pre-processing step, before applying Dolby's proprietary color space conversion (IPT -> LMS -> ... -> RGB) described in the patent to retrieve the "original image" (without the metadata applied). I don't know if this assumption is correct or if I'm missing something; I just wanted to check whether you guys could clear up those doubts, since I've been trying to decode some DV5 demo videos unsuccessfully. Thanks in advance.

@igorzep

igorzep commented Mar 17, 2020

The reshaping part with polynomials is simple. Also look here how I encode it:
https://github.com/testing-av/testing-video/blob/9256b779f7721bd3dbdceedab7c65b6f64bbab79/patterns/vendor/dv/u4k/src/test/java/band/full/test/video/patterns/u4k/dv/DolbyVisionProfile5.java#L140

param.poly_order_minus1 = 0; // first order
param.linear_interp_flag = false;
// nominal (0.0 + 1.0 * x) I' reshaping
param.f_poly_coef = new int[] {0, 1 << 23};

when decoding take the number of coefficients in array according to the order, they will be
a[0]*x^0 + a[1]*x^1 + a[2]*x^2 + a[3]*x^3 + a[4]*x^4 + ...
where x^0 == 1.0 and a[n] is either IEEE-754-2008 32 bit float as is (coefficient_data_type != 0) or otherwise fixed point number in [-64, 64) range (so you need to rescale it to floating point before calculations), see here:
https://github.com/testing-av/testing-video/blob/9256b779f7721bd3dbdceedab7c65b6f64bbab79/core/src/main/java/band/full/video/dolby/VdrRpuDataPayload.java

Which polynomial is to apply is defined by pivot points.
For T and P values you also need to subtract 0.5 (actually defined by YCCtoRGB_offset).

When you decode fragments by pivots as a verification you can visualise it - should get a smooth curve as the result.
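A rough Python sketch of the decode-side reshaping described above. The function and variable names are mine, and the fixed-point scale (1 << 23 encoding 1.0, giving the [-64, 64) range) is inferred from the {0, 1 << 23} identity example; treat this as an approximation, not a reference implementation:

```python
# Sketch of piecewise polynomial reshaping as described above.
# Assumption: 1 << 23 encodes 1.0 in the fixed-point case, per the
# {0, 1 << 23} "0.0 + 1.0 * x" identity example.
import bisect

FIXED_ONE = 1 << 23

def to_float(coef, is_float):
    """coefficient_data_type != 0: IEEE float as-is; else fixed point."""
    return coef if is_float else coef / FIXED_ONE

def reshape(x, pivots, polys, is_float=False):
    """Apply the piecewise polynomial selected by the pivot points.

    pivots: ascending breakpoints in [0, 1]; polys[i] covers
    [pivots[i], pivots[i+1]) and holds raw coefficients a[0], a[1], ...
    evaluated as a[0] + a[1]*x + a[2]*x**2 + ...
    """
    i = max(0, bisect.bisect_right(pivots, x) - 1)
    i = min(i, len(polys) - 1)
    coefs = [to_float(c, is_float) for c in polys[i]]
    result = 0.0
    for a in reversed(coefs):          # Horner evaluation
        result = result * x + a
    return result

# The nominal identity reshaping from the encoder snippet: (0.0 + 1.0 * x)
identity = [[0, 1 << 23]]
assert reshape(0.75, [0.0, 1.0], identity) == 0.75
```

Plotting reshape() over [0, 1] for a real RPU's pivots and coefficients is the "smooth curve" verification suggested above.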

@Doofussy2
Author

I don't know if this is helpful, but this guy appears to have done it. I installed the add-on and it works.

https://youtu.be/-t7c7t5v5VI

@Doofussy2
Author

Yeah, GPU decoding

DV Decode

@justdan96

@ValZapod at the very least we should be able to test output with the sample files - thanks!

@TimeAlter

TimeAlter commented Apr 13, 2020

The reshaping part with polynomials is simple. Also look here how I encode it:
https://github.com/testing-av/testing-video/blob/9256b779f7721bd3dbdceedab7c65b6f64bbab79/patterns/vendor/dv/u4k/src/test/java/band/full/test/video/patterns/u4k/dv/DolbyVisionProfile5.java#L140

param.poly_order_minus1 = 0; // first order
param.linear_interp_flag = false;
// nominal (0.0 + 1.0 * x) I' reshaping
param.f_poly_coef = new int[] {0, 1 << 23};

when decoding take the number of coefficients in array according to the order, they will be
a[0]*x^0 + a[1]*x^1 + a[2]*x^2 + a[3]*x^3 + a[4]*x^4 + ...
where x^0 == 1.0 and a[n] is either IEEE-754-2008 32 bit float as is (coefficient_data_type != 0) or otherwise fixed point number in [-64, 64) range (so you need to rescale it to floating point before calculations), see here:
https://github.com/testing-av/testing-video/blob/9256b779f7721bd3dbdceedab7c65b6f64bbab79/core/src/main/java/band/full/video/dolby/VdrRpuDataPayload.java

Which polynomial is to apply is defined by pivot points.
For T and P values you also need to subtract 0.5 (actually defined by YCCtoRGB_offset).

When you decode fragments by pivots as a verification you can visualise it - should get a smooth curve as the result.

@igorzep Thanks for the information; I've made good progress based on your comments. Still, I was checking the information related to IPTPQ_YCCtoLMS_coef, but I cannot understand how those values are calculated. Based on my decode process for DV5 videos, those coefficients are dynamically generated within the metadata, but do you have an idea whether they are meant to perform linear LMS to XYZ, or to RGB? Any information would be appreciated. Thanks in advance.

At this point I can retrieve the pivots and the IPT code based on the pivot segment's polynomial. Now, to my understanding, I subtract the YCCtoRGB_offset values, then apply the IPT2LMS matrix, then apply the PQ transfer function, but after that is where I don't get how the LMS should be converted back to RGB. One of the Dolby Vision patents provides a matrix for LMS2XYZ, but as I said earlier, when decoding frame metadata, RGBtoLMS_coef seems to be different per frame/scene.

My process at the moment would be
Decode DV5 video
Retrieve YUV [IPT that needs reshaping]
Reshape the components based on pivots boundaries and polynomials
Y -> apply polynomial -> I
U -> apply polynomial -> P
V -> apply polynomial -> T

Subtract YCCtoRGB_offset from IPT respectively

Apply YCC2RGB(IPT2LMS) matrix

apply PQ transfer function to retrieve L' M' S'

Apply new matrix based on RGBtoLMS_coef?
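That step list can be sketched in Python as follows. The matrices here are identity placeholders, since the real YCCtoRGB/RGBtoLMS coefficients arrive per scene in the RPU; only the PQ constants are the standard SMPTE ST 2084 values, and the input is assumed to be the already-reshaped IPT' triple:

```python
# Sketch of the decode steps listed above. The 3x3 matrices are placeholders
# (the real per-scene coefficients come from the RPU); only the PQ
# (SMPTE ST 2084) constants below are standard values.

M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_eotf(e):
    """PQ signal in [0, 1] -> normalized linear light in [0, 1]."""
    p = max(e, 0.0) ** (1 / M2)
    return (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

def apply_matrix(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def decode_pixel(ipt, ycc_offsets, ipt_to_lms, lms_to_rgb):
    # ipt: already-reshaped I', P', T' values.
    # 1. subtract the YCCtoRGB offsets (0.5 on the chroma-like P/T axes)
    v = [c - o for c, o in zip(ipt, ycc_offsets)]
    # 2. IPT' -> L'M'S' (still PQ-encoded)
    lms_pq = apply_matrix(ipt_to_lms, v)
    # 3. PQ EOTF per channel -> linear LMS
    lms = [pq_eotf(c) for c in lms_pq]
    # 4. linear LMS -> RGB (per-frame matrix from the RPU)
    return apply_matrix(lms_to_rgb, lms)

# Sanity checks on the PQ curve itself:
assert pq_eotf(0.0) == 0.0 and abs(pq_eotf(1.0) - 1.0) < 1e-12
```

Note that step 3 applies the EOTF (PQ signal to linear light); the encode direction would use its inverse.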

@Doofussy2
Author

@jeeb and @rossy any chance of implementing DV passthrough on Windows? This would be great if you could make that work.

@jeeb
Member

jeeb commented Sep 5, 2020

No idea how such an interface would look to begin with, and I am much more into getting proper handling of this color space as opposed to enabling playback for those whose displays happen to have this feature enabled.

Given that the thing left was to decode those not-in-spec type 62 NAL units with the dynamic parameters and stick them in, I'd say this would be very much possible.

@Doofussy2
Author

Has there been any development on deciphering the metadata?

@jult

jult commented Jan 22, 2022

I just wanted to say: FUCK Dolby and their racketeering-like patent trolling. Fuck them for re-introducing proprietary bullshit into modern video technology too. I wish they'd fucking die already.

I agree, I was trying to watch a movie a friend of mine is working on. It was, for some godawful reason, encoded in Atmos DV HEVC format and it is just horribly purple-green BS. Takes me hours to get this damn format to play with the correct colors. Why even do this? WHO NEEDS IT? I don't remember ever asking for more than we already have, in video quality-land.

@ericgl

ericgl commented Jan 23, 2022

ValZapod,
What do you mean "That is fixed now with this mpv player"?
Are you saying that the developers of MPV player managed to reverse-engineer the DolbyVision algorithm, and display the movie with correct colors on monitors/TVs which do not have DolbyVision support?
Because if this is true, this is great news!!

@haasn
Member

haasn commented Jan 23, 2022

Yeah, indeed, this is supported now with --vo=gpu-next and sufficiently recent ffmpeg/libplacebo (ffmpeg 5.0, libplacebo 4.192.0).

@haasn haasn closed this as completed Jan 23, 2022
@j9ac9k

j9ac9k commented Jan 23, 2022

For context, I can confirm this works on the 20220116-git-1ba0547 build I see on Windows with the following mpv.conf

vo=gpu-next
gpu-context=winvk  # had to add this one otherwise gpu-next didn't work

Thanks for the work here on this. If I'm seeing this correctly, mpv is the first video player to support this functionality.

@jult

jult commented Jan 24, 2022

encoded in Atmos DV HEVC format

Atmos is sound, not video.

So? I was listing the full spec.

horribly purple-green BS

That is fixed now with this mpv player.

Except it isn't, I came here because it does not show correct colors.

Why even do this?

This is a new colorspace which allows to create more quality out of nothing.

Again: Why all the effort? Aren't we at the limit for video quality, as far as demand goes? I don't see any demand for higher res or larger (or better compressed) color space with anyone I know.
Sure, innovation is nice, but not when it is done because somehow they had a budget to spoil on dev teams that were bored out of their mind. This is essentially what ruins humanity. This is precisely why humans will go extinct; Their constant fantasy that growth or extension or expansion is needed or required. It isn't. It's digging your own grave.

@Doofussy2
Author

It's already fixed. You need to update mpv and learn how to configure for tone mapping DV. Or wait until the full implementation is in place.

@mmozeiko

I can confirm that this is fixed now - works for me great with latest mpv build, no more green & purple colors on DV content I have.

@haasn
Member

haasn commented Jan 24, 2022

Sure, innovation is nice, but not when it is done because somehow they had a budget to spoil on dev teams that were bored out of their mind. This is essentially what ruins humanity. This is precisely why humans will go extinct; Their constant fantasy that growth or extension or expansion is needed or required. It isn't. It's digging your own grave.

Sir, this is a wendy's bug tracker

@Doofussy2
Author

Doofussy2 commented Jan 24, 2022

I'm gonna go ahead and thank the developer for his great work.

Thank you @haasn

@szabolcs-cs

szabolcs-cs commented Jan 25, 2022

I'd like to thank everyone involved for their work on this!

Is it possible to combine this with tone mapping?
When I use

vo=gpu-next
gpu-context=winvk

with

target-prim=dci-p3
hdr-compute-peak=yes
tone-mapping=bt.2390

(or any other algorithm) tone mapping doesn't happen. Do I have to set tone mapping parameters manually somehow until this is fully supported?

@quietvoid
Contributor

quietvoid commented Jan 25, 2022

Is it possible to combine this with tone mapping?

It should support tone mapping if you have an up to date mpv. That was added around ~2 weeks ago in libplacebo.

@szabolcs-cs

It should support tone mapping if you have an up to date mpv. That was added around ~2 weeks ago in libplacebo.

I'm using the 20220116-git-1ba0547 version which also has support for Dolby Vision. After further investigation it seems that using vo=gpu-next is incompatible with tone mapping in my case.

@ericgl

ericgl commented Feb 3, 2022

ValZapod,
Thank you for the command line. First time I managed to play a DolbyVision encoded video with normal colors.
Bravo!!

@cinnamonmatexfce

As an expert (compared with an LG C9) I can say the profile now works 100% correctly natively in PQ space (that means HDR is enabled in Windows) (RGB_FULL_G2084_NONE_P2020) with

mpv.com --target-trc=pq --target-prim=bt.2020 --vo=gpu-next --gpu-context=d3d11 file.mp4/mkv/ts

Download here: https://github.com/ValZapod/mpv-winbuild/releases/download/2022-02-03/mpv-x86_64-20220202-git-67a2b28.7z

@ValZapod
Do I need to change something in your suggested setting?
I have a nVIDIA GTX1660 SUPER and TV is a Samsung Q70R 2019 model year.

I already enabled HDR mode in Windows 10.

Thank you

@Aurareus

Is this fixed on Linux yet or just Windows?
Using vo=gpu-next does not fix the issue for me.

@haasn
Member

haasn commented Mar 28, 2022

Is this fixed on Linux yet or just Windows? Using vo=gpu-next does not fix the issue for me.

It is fixed on all platforms (that run vo=gpu-next), but you need a new enough ffmpeg (ffmpeg 5.0) for it to work. Many distros etc. still ship ffmpeg 4.x, which does not include Dolby Vision decoding.

@Popyacap

Hello, I am new here and I am not very knowledgeable about this stuff but I enjoy learning. Currently I am trying to play DV video and I am having the green red hue problem. What I am seeking to know is how to set up mpv player. I already have ffmpeg installed but am unclear about libplacebo. I am using Windows 10. I am having trouble understanding "vo=gpu-next" as to what it is and how to 'enter' it. Sorry if this is something that has been answered and I am missing it or is being asked in the wrong place. I am not the swiftest cat with forums. I can follow instructions but I am a noob with a lot of this. Thanks anyone who can help.

@cinnamonmatexfce

Hello, I am new here and I am not very knowledgeable about this stuff but I enjoy learning. Currently I am trying to play DV video and I am having the green red hue problem. What I am seeking to know is how to set up mpv player. I already have ffmpeg installed but am unclear about libplacebo. I am using Windows 10. I am having trouble understanding "vo=gpu-next" as to what it is and how to 'enter' it. Sorry if this is something that has been answered and I am missing it or is being asked in the wrong place. I am not the swiftest cat with forums. I can follow instructions but I am a noob with a lot of this. Thanks anyone who can help.

Using a prompt, go into the folder where mpv.exe is, then run:
mpv.exe --vo=gpu-next C:\path\to\video\filename.ext

@ghost

ghost commented Jan 21, 2023

Yeah, indeed, this is supported now with --vo=gpu-next and sufficiently recent ffmpeg/libplacebo (ffmpeg 5.0, libplacebo 4.192.0).

Can ffmpeg convert DV to HDR/SDR now?

@haasn
Member

haasn commented Jan 23, 2023

Yeah, indeed, this is supported now with --vo=gpu-next and sufficiently recent ffmpeg/libplacebo (ffmpeg 5.0, libplacebo 4.192.0).

Can ffmpeg convert DV to HDR/SDR now?

Yes, see https://ffmpeg.org/ffmpeg-filters.html#Examples-90

It is done automatically by the vf_libplacebo filter when DV metadata is present.
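As a sketch of what such a conversion command can look like (the option names should be checked against the libplacebo filter section of your ffmpeg build's documentation; the filenames and encoder choice are placeholders):

```shell
# Sketch: tone-map a Dolby Vision file down to SDR BT.709 via vf_libplacebo.
# Verify option names against your ffmpeg build's libplacebo filter docs.
ffmpeg -i input_dv.mkv \
  -vf "libplacebo=colorspace=bt709:color_primaries=bt709:color_trc=bt709:tonemapping=auto:format=yuv420p" \
  -c:v libx264 -c:a copy output_sdr.mkv
```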

@stefan1983

stefan1983 commented Apr 12, 2023

How would one be able to use it within macOS then?

Warning: ffmpeg 5.1.2_6 is already installed and up-to-date.
To reinstall 5.1.2_6, run:
  brew reinstall ffmpeg
sg-nb3:Downloads sg$ mpv --target-trc=pq --target-prim=bt.2020 vo=gpu-next dolby-vision-nasa-\(dolby-vision\)-\(www.demolandia.net\).mp4 
Playing: vo=gpu-next
[file] Cannot open file 'vo=gpu-next': No such file or directory
Failed to open vo=gpu-next.

Playing: dolby-vision-nasa-(dolby-vision)-(www.demolandia.net).mp4
 (+) Video --vid=1 (*) (hevc 3840x2160 59.940fps)
 (+) Audio --aid=1 (*) (eac3 6ch 48000Hz)
AO: [coreaudio] 48000Hz stereo 2ch floatp
VO: [libmpv] 3840x2160 yuv420p10
AV: 00:00:09 / 00:02:23 (7%) A-V:  0.000 Dropped: 152

Exiting... (Quit)
sg-nb3:Downloads sg$ 

@hooke007
Contributor

hooke007 commented Apr 12, 2023

No gpu-next in Mac #11308 #10978 #11301

@stefan1983

stefan1983 commented Apr 12, 2023

OK, so no Dolby Vision playback is possible on macOS at the moment?

I tried to play the testfile as mentioned in the original post, but with the same result.

@Akczht

Akczht commented May 4, 2023

How can I play HDR files on macOS Apple Silicon? I've compiled mpv myself; it does recognise the file as HDR and shows proper colors using 2020-ncl*, but it just doesn't look as bright/colorful (maybe) as the same video played in Safari. It's an LG OLED video: https://youtu.be/njX2bu-_Vw4

@h-2

h-2 commented Jul 22, 2023

I have the following versions installed:

ffmpeg-6.0_1,1                 Realtime audio/video encoder/converter and streaming server
ffmpeg4-4.4.4_2                Realtime audio/video encoder/converter and streaming server (legacy 4.* series)
libplacebo-6.292.0             Reusable library for GPU-accelerated video/image rendering
mpv-0.35.1_5,1                 Free and open-source general-purpose video player
nvidia-driver-535.54.03        NVidia graphics card binary drivers for hardware OpenGL rendering

And I have set vo=gpu-next, but I still get funky colours. Is there anything else I need to do?

edit: it's weird how it affects certain parts of the image and not others (depends on scene).

[screenshot]

but gpu-next certainly looks better than, for example, XV:

[screenshot]
