
lens distortion post-processing shader #4209

Merged
merged 2 commits into dolphin-emu:master on Sep 13, 2016

Conversation

jasonphillips

@jasonphillips jasonphillips commented Sep 12, 2016

This is a post-processing shader for lens distortion, in order to make the side-by-side 3D mode usable in lens-based viewers (Cardboard, etc). It allows UI-configurable levels of distortion, size adjustment, and eye separation.

Based on my prior gist (screenshots here): https://gist.github.com/jasonphillips/4cc91e85f128eae4f086118a59241ca1

The lens distortion algorithm is adapted from the Google Cardboard SDKs, but the configurable sliders should allow it to be used comfortably with most lenses.
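
For readers unfamiliar with the technique, the core idea fits in a few lines of plain GLSL. The sketch below is a standalone illustration of radius-based lens distortion, not the shader in this PR; it uses an ordinary sampler2D rather than Dolphin's post-processing helpers, and the coefficient names k1/k2 are placeholders:

  #version 120
  uniform sampler2D tex;  // rendered eye image
  uniform float k1;       // coefficient on r^2 (placeholder)
  uniform float k2;       // coefficient on r^4 (placeholder)
  varying vec2 uv;        // texture coordinates in [0,1]

  void main()
  {
    // Work relative to the image centre so the distortion is purely radial.
    vec2 centered = uv - vec2(0.5, 0.5);
    float r = length(centered);

    // Scale the sampling radius by a polynomial in r.
    float scale = 1.0 + k1 * r * r + k2 * r * r * r * r;
    vec2 srcUV = vec2(0.5, 0.5) + centered * scale;

    // Black out samples that fall outside the source image.
    if (clamp(srcUV, 0.0, 1.0) != srcUV)
      gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
    else
      gl_FragColor = texture2D(tex, srcUV);
  }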

The main rationale for this is to make base Dolphin's existing side-by-side 3D mode viable for VR viewers. I'm aware that Dolphin VR exists as a separate project, but it has the limitations of (1) being Windows-only and (2) being a more complex, comprehensive approach (with head tracking and sizable VR SDKs required). This shader instead provides a simple way to enjoy Dolphin's excellent existing stereo 3D capabilities on any platform, using the cheap VR viewers that are now ubiquitous.

Thoughts welcome. If this is not a reasonable addition to the core of the project, I'll just update my gist and leave it at that for the users who are interested.



@JMC47
Contributor

JMC47 commented Sep 12, 2016

We're planning on having a VR mode for Dolphin itself. This is something for @Armada651 to review.

@degasus
Member

degasus commented Sep 13, 2016

@JMC47 Supporting head tracking is orthogonal to supporting different kinds of distortion shaders, so this is unrelated to Dolphin VR.

@degasus
Member

degasus commented Sep 13, 2016

LGTM apart from those implicit int->float conversions. By the way, both 0.0f and 0.0 are floats, so there is no need to write the "f". I don't care about it either way ;)


Review status: 0 of 1 files reviewed at latest revision, 3 unresolved discussions.


Data/Sys/Shaders/lens_distortion.glsl, line 62 at r1 (raw file):

  else
  {
    offsetAdd = 0 - stereoOffset;

Please don't use implicit conversions, or OpenGL ES might fail. Use "0.0" or "0.0f" here.


Data/Sys/Shaders/lens_distortion.glsl, line 72 at r1 (raw file):

  // find the radius multiplier
  float srcR = destR * sizeAdjust + ( ka * pow(destR,2) + kb * pow(destR,4));

same here: 2.0 and 4.0


Data/Sys/Shaders/lens_distortion.glsl, line 87 at r1 (raw file):

  {
    // black if beyond bounds
    SetOutput(float4(0,0,0,0));

same here: 0.0, 0.0, 0.0, 0.0
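
Putting the three requested fixes together, the flagged lines with explicit float literals would look roughly like this (a sketch based on the snippets quoted above, not necessarily the exact final diff):

  else
  {
    offsetAdd = 0.0 - stereoOffset;
  }

  // find the radius multiplier
  float srcR = destR * sizeAdjust + (ka * pow(destR, 2.0) + kb * pow(destR, 4.0));

  // black if beyond bounds
  SetOutput(float4(0.0, 0.0, 0.0, 0.0));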



@degasus
Member

degasus commented Sep 13, 2016

Reviewed 1 of 1 files at r1.
Review status: all files reviewed at latest revision, 4 unresolved discussions.


Data/Sys/Shaders/lens_distortion.glsl, line 84 at r1 (raw file):

  // Sample the texture at the source location
  if (uv[0] > 1.0 || uv[1] > 1.0 || uv[0] < 0.0 || uv[1] < 0.0)

if(clamp(uv, 0.0, 1.0) != uv)
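
The suggestion works because comparing vectors with != yields a single bool in GLSL (true if any component differs), so the component-wise bounds check collapses into one expression; sketched against the quoted line:

  // Sample the texture at the source location
  if (clamp(uv, 0.0, 1.0) != uv)
  {
    // black if beyond bounds
    SetOutput(float4(0.0, 0.0, 0.0, 0.0));
  }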



@CrossVR
Contributor

CrossVR commented Sep 13, 2016

@degasus Actually, distortion will be handled by whichever VR API we end up using; we will not be handling distortion ourselves in a post-processing shader.

I have no problem with adding more post-processing shaders. However, we do need to realize that this will be separate from the main VR effort and is not complementary to it.

I also want to make sure that this is not perceived as our answer to the demand for VR support.

@sepalani
Contributor

@degasus
0.0 and 0.0f are different: one is a double, the other is a float, so yes, technically they are both floating-point. However, sometimes you need the number to be an actual float and not a double, especially with some weird interfaces. I don't know whether that is the case here, though.

@CrossVR
Contributor

CrossVR commented Sep 13, 2016

@sepalani He's talking about floats in the context of shaders; in shaders, floating-point literals default to single precision, so the f specifier is redundant.
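
A quick illustration of the point about literals in GLSL (the exact set of accepted suffixes varies by GLSL version):

  float a = 0.0;   // already a single-precision float in GLSL; no suffix needed
  float b = 0.0f;  // same value; the 'f' suffix is only accepted by newer GLSL versions
  // Unlike C/C++, an unsuffixed literal is not a double here; doubles need explicit
  // double-precision support (e.g. the 'lf' suffix in GLSL 4.00+), which this shader doesn't use.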

@degasus
Member

degasus commented Sep 13, 2016

:lgtm:


Reviewed 1 of 1 files at r2.
Review status: all files reviewed at latest revision, all discussions resolved, some commit checks failed.



@jasonphillips
Author

Updated in response to the review, thanks.

I understand that the VR SDKs will eventually handle lens distortion, and I also appreciate the desire to avoid confusing any users with the impression that this shader is the primary VR support. So I'm certainly not offended if this isn't deemed to be in scope for a shader right now.

I do think that many users would appreciate it as-is, though, particularly since it's a simple way to apply lens distortion today without having to wait for full VR support, and it lets you apply the effect directly when you don't have a full VR setup available. For instance, right now I've been playing on Google Cardboard with the image sent to an iPhone using Duet Display, which has the benefit of streaming over a Lightning cable with very low latency; but that application doesn't support head tracking, so lens distortion on its own was all I needed in this case.

@degasus degasus merged commit 54a643a into dolphin-emu:master Sep 13, 2016
@delroth
Member

delroth commented Sep 24, 2016

Hey @jasonphillips

We'd like to make sure that everything was done right in terms of licensing here. Could you clarify what exactly was taken from the Google VR SDK? Pointing to the original shader code that you ported would be great.

Thanks!

@CrossVR
Contributor

CrossVR commented Sep 24, 2016

@delroth We should double-check, but AFAIK the Cardboard SDK is Apache-licensed, though most of it is closed source.

@delroth
Member

delroth commented Sep 24, 2016

It would still need an appropriate copyright header, which this doesn't have. I wish people would review more carefully before merging changes...

@jasonphillips
Author

Sure, I looked back at my notes and these were all my initial sources, for review:

  • I began by looking at a small excerpt (in this gist: https://gist.github.com/Endox/4750473) derived from the Oculus distortion shader, which is almost a textbook example of applying barrel distortion in a fragment shader. Compare the two and you'll see that the basic outlined steps (get radius, source vector, transform, sample) are what I adopted; nothing non-generic remains from it other than perhaps a couple of variable names (destR, correctedRadial) and a couple of their structuring comments ("Calculate the source vector (radial)" etc.), which I left in to outline my steps as well.
  • then the official documentation for the Google Cardboard SDK, which gives their basic pincushion distortion formula:

[image: the Cardboard SDK pincushion distortion formula, which expresses the distorted radius in terms of coefficients on r^2 and r^4]

That basic formula (which, again, is a fairly standard one) is what I adopted as the actual transformation in my code (this line), by similarly using r^2 and r^4 and letting coefficients (simply multiplied by the slider I added in the UI) set those two values. It isn't even adopted in exactly the same form.

Then I added a number of other adjustments (the NDC conversions had to be reworked, and the shader has to account for which stereo layer Dolphin renders for which eye) and UI bits using Dolphin's API for configurable shaders (a slider for adjusting the aspect ratio, since that often ends up distorted when mirroring Dolphin to a device, a zoom adjustment, etcetera).
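
For reference, configurable options in Dolphin's stock post-processing shaders are declared in a header block and read with GetOption(); below is a rough sketch of how one distortion slider might be wired up. The option names, ranges, and scaling here are purely illustrative and written from memory of the configuration format, so they should not be read as the actual contents of lens_distortion.glsl:

  /*
  [configuration]

  [OptionRangeFloat]
  GUIName = Lens distortion strength
  OptionName = DISTORTION_STRENGTH
  MinValue = 0.0
  MaxValue = 1.0
  StepAmount = 0.01
  DefaultValue = 0.5

  [/configuration]
  */

  // ...later, in the shader body:
  float strength = GetOption(DISTORTION_STRENGTH);
  float ka = strength * 0.2;  // illustrative scaling of the r^2 coefficient
  float kb = strength * 0.1;  // illustrative scaling of the r^4 coefficient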

@delroth
Member

delroth commented Sep 24, 2016

OK, so if my understanding is correct, there is very little of the original code left, if any, and really the only information came from documented formulas. In that case I don't think there are any issues here. Thanks!
