
VR Support #2

Open · ddutchie opened this issue Apr 30, 2019 · 12 comments
Labels: enhancement (New feature or request), help wanted (Extra attention is needed)

ddutchie commented Apr 30, 2019
This is great, and the results are pretty good.
I would like to get this working with Single Pass VR.

I'm not too familiar with SRP, so I'm not sure where to add any of this.
But I have found some documentation which should help:

VR Support in a Custom Render Pipeline

VR in HDRP
A commit from GitHub setting up some shader variables

Any ideas?

Looooong (Owner) commented May 1, 2019

The ScriptableRenderContext does support stereo rendering, via StartMultiEye and StopMultiEye. However, I'm not familiar with stereo rendering either. I can guide you through adding this feature to the code; be advised, though, that the code is very messy right now (I'm planning a refactor to organize it and make it easier to read).
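
For orientation, here is a minimal sketch of how those hooks might wrap a camera render in this generation of the SRP API; only the stereo calls are taken from the real API, the surrounding structure is illustrative:

using UnityEngine;
using UnityEngine.Experimental.Rendering;

static class StereoRenderSketch {
  // Sketch: wrapping one camera's render with the ScriptableRenderContext
  // stereo hooks (Unity 2018/2019 Experimental SRP).
  public static void RenderCamera(ScriptableRenderContext context, Camera camera) {
    // The second argument requests per-eye view/projection setup.
    context.SetupCameraProperties(camera, camera.stereoEnabled);

    if (camera.stereoEnabled) context.StartMultiEye(camera);

    // ... cull, render G-buffers, shade, post-process ...

    if (camera.stereoEnabled) {
      context.StopMultiEye(camera);    // end stereo rendering
      context.StereoEndRender(camera); // hand the eye textures to the device
    }

    context.Submit();
  }
}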

About the render pipeline: I have devoted everything to the deferred rendering path, because forward rendering is not optimal for this lighting technique. The deferred rendering path is implemented here. You probably know that there are 2 main steps:

  1. Render to the G-buffers.
  2. Render from the G-buffers to the screen. This step includes 2 passes:
    a) Render indirect diffuse lighting to a separate render texture, which can have the same or a lower resolution than the screen for performance reasons. This render texture can then be scaled up to match the screen resolution.
    b) The indirect diffuse lighting result is merged with the other light shading calculations to form the final composite image.

About the shader: you will have to edit Assets/Shaders/VXGI.shader and modify the vertex-position transformation to make it work with multi-eye rendering (a sketch follows the list below).

  • Step 1 is implemented here.
  • Step 2a is implemented here.
  • Step 2b is implemented here.
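
For the vertex side, the standard single-pass pattern from UnityCG.cginc looks roughly like this (a sketch with illustrative names, not the actual VXGI.shader code):

// Sketch of a stereo-aware vertex function using the UnityCG.cginc macros.
struct v2f {
  float4 vertex : SV_POSITION;
  float2 uv : TEXCOORD0;
  UNITY_VERTEX_OUTPUT_STEREO // carries the eye index for instanced stereo
};

v2f vert(appdata_base v) {
  v2f o;
  UNITY_SETUP_INSTANCE_ID(v);                // read the per-instance (per-eye) id
  UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);  // forward it to the fragment stage
  o.vertex = UnityObjectToClipPos(v.vertex); // uses the active eye's matrices
  o.uv = v.texcoord;
  return o;
}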

Looooong self-assigned this May 1, 2019
Looooong added the enhancement (New feature or request) and help wanted (Extra attention is needed) labels May 1, 2019
Looooong added this to To do in VXGI May 1, 2019
ddutchie (Author) commented Aug 7, 2019

I'm attempting this again.
I have managed to implement rendering to the headset.
Now I am trying to track down your shaders to add the stereo shader variables, but your recent changes have rearranged things quite a bit.

Adding VR support is generally simple if you know the shaders.

See here:
https://docs.unity3d.com/Manual/SinglePassStereoRendering.html

Basically, we need to tell the UVs that we are rendering in VR.

In some cases it is just a matter of modifying one line.
See the first line of the frag function here:

https://github.com/ddutchie/UnityPostFX_VR/blob/d336312553de207bed19873f178a6b458bf71785/VR%20PostFX/Graphic%20Novel/GraphicNovel.shader#L67

If these changes are made, nothing changes for a non-VR user: these macros only affect the shader when stereo rendering is active.
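
The pattern is roughly this (a sketch; _MainTex stands in for whichever screen-space texture is being sampled):

// Single-pass stereo UV fix-up at the top of a fragment function.
// When stereo is disabled, UnityStereoScreenSpaceUVAdjust compiles to a no-op.
fixed4 frag(v2f i) : SV_Target {
  float2 uv = UnityStereoScreenSpaceUVAdjust(i.uv, _MainTex_ST);
  return tex2D(_MainTex, uv);
}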

Looooong (Owner) commented Aug 7, 2019

I will spend some time researching this and will try to implement it next weekend.

Looooong added this to the v0.0.1 milestone Aug 7, 2019
ddutchie (Author) commented Aug 7, 2019

I have actually gotten really close.

If I make some changes to VXGIRenderer like so:


using UnityEngine;
using UnityEngine.Experimental.Rendering;
using UnityEngine.Rendering;
using UnityEngine.Rendering.PostProcessing;

public class VXGIRenderer : System.IDisposable {
  public enum MipmapSampler {
    Linear,
    Point
  }

  public DrawRendererFlags drawRendererFlags {
    get { return _renderPipeline.drawRendererFlags; }
  }
  public RendererConfiguration rendererConfiguration {
    get { return _renderPipeline.rendererConfiguration; }
  }

  int _cameraDepthTextureID;
  int _cameraDepthNormalsTextureID;
  int _cameraGBufferTexture0ID;
  int _cameraGBufferTexture1ID;
  int _cameraGBufferTexture2ID;
  int _cameraGBufferTexture3ID;
  int _dummyID;
  int _dummyID2;
  int _frameBufferID;
  float[] _renderScale;
  CommandBuffer _command;
  CullResults _cullResults;
  FilterRenderersSettings _filterSettings;
  LightingShader[] _lightingPasses;
  PostProcessRenderContext _postProcessRenderContext;
  RenderTargetBinding _gBufferBinding;
  VXGIRenderPipeline _renderPipeline;

  public VXGIRenderer(VXGIRenderPipeline renderPipeline) {
    _command = new CommandBuffer { name = "VXGIRenderer" };
    _filterSettings = new FilterRenderersSettings(true) { renderQueueRange = RenderQueueRange.all };
    _renderPipeline = renderPipeline;
    _cameraDepthTextureID = Shader.PropertyToID("_CameraDepthTexture");
    _cameraDepthNormalsTextureID = Shader.PropertyToID("_CameraDepthNormalsTexture");
    _cameraGBufferTexture0ID = Shader.PropertyToID("_CameraGBufferTexture0");
    _cameraGBufferTexture1ID = Shader.PropertyToID("_CameraGBufferTexture1");
    _cameraGBufferTexture2ID = Shader.PropertyToID("_CameraGBufferTexture2");
    _cameraGBufferTexture3ID = Shader.PropertyToID("_CameraGBufferTexture3");
    _dummyID = Shader.PropertyToID("Dummy");
    _frameBufferID = Shader.PropertyToID("FrameBuffer");

    _gBufferBinding = new RenderTargetBinding(
      new RenderTargetIdentifier[] { _cameraGBufferTexture0ID, _cameraGBufferTexture1ID, _cameraGBufferTexture2ID, _cameraGBufferTexture3ID },
      new[] { RenderBufferLoadAction.DontCare, RenderBufferLoadAction.DontCare, RenderBufferLoadAction.DontCare, RenderBufferLoadAction.DontCare },
      new[] { RenderBufferStoreAction.DontCare, RenderBufferStoreAction.DontCare, RenderBufferStoreAction.DontCare, RenderBufferStoreAction.DontCare },
      _cameraDepthTextureID, RenderBufferLoadAction.DontCare, RenderBufferStoreAction.DontCare
    );

    _renderScale = new float[] { 1f, 1f, 1f, 1f };

    _lightingPasses = new LightingShader[] {
      new LightingShader(LightingShader.Pass.Emission),
      new LightingShader(LightingShader.Pass.DirectDiffuseSpecular),
      new LightingShader(LightingShader.Pass.IndirectDiffuse),
      new LightingShader(LightingShader.Pass.IndirectSpecular)
    };

    _postProcessRenderContext = new PostProcessRenderContext();
  }

  public void Dispose() {
    _command.Dispose();

    foreach (var pass in _lightingPasses) pass.Dispose();
  }

  public void RenderDeferred(ScriptableRenderContext renderContext, Camera camera, VXGI vxgi) {
    ScriptableCullingParameters cullingParams;
    if (!CullResults.GetCullingParameters(camera, out cullingParams)) return;
    CullResults.Cull(ref cullingParams, renderContext, ref _cullResults);

    // Initialize stereo rendering
    renderContext.SetupCameraProperties(camera, camera.stereoEnabled);

    if (camera.stereoEnabled) {
      renderContext.StartMultiEye(camera);
    }
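    // SetupCameraProperties(camera, true) pushes the per-eye view/projection
    // constants when stereo is enabled, and StartMultiEye binds the stereo
    // eye target(s). The matching StopMultiEye/StereoEndRender pair is
    // issued at the end of this method.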
    int width = camera.pixelWidth;
    int height = camera.pixelHeight;

    _command.GetTemporaryRT(_cameraDepthTextureID, width, height, 24, FilterMode.Point, RenderTextureFormat.Depth, RenderTextureReadWrite.Linear);
    _command.GetTemporaryRT(_cameraGBufferTexture0ID, width, height, 0, FilterMode.Point, RenderTextureFormat.ARGB32, RenderTextureReadWrite.Linear);
    _command.GetTemporaryRT(_cameraGBufferTexture1ID, width, height, 0, FilterMode.Point, RenderTextureFormat.ARGB32, RenderTextureReadWrite.Linear);
    _command.GetTemporaryRT(_cameraGBufferTexture2ID, width, height, 0, FilterMode.Point, RenderTextureFormat.ARGB2101010, RenderTextureReadWrite.Linear);
    _command.GetTemporaryRT(_cameraGBufferTexture3ID, width, height, 0, FilterMode.Point, RenderTextureFormat.ARGBHalf, RenderTextureReadWrite.Linear);
    _command.GetTemporaryRT(_frameBufferID, width, height, 0, FilterMode.Point, RenderTextureFormat.ARGBHalf, RenderTextureReadWrite.Linear);
    _command.SetRenderTarget(_gBufferBinding);
    _command.ClearRenderTarget(true, true, Color.clear);
    renderContext.ExecuteCommandBuffer(_command);
    _command.Clear();

    var drawSettings = new DrawRendererSettings(camera, new ShaderPassName("Deferred"));
    drawSettings.flags = _renderPipeline.drawRendererFlags;
    drawSettings.rendererConfiguration = _renderPipeline.rendererConfiguration;
    drawSettings.sorting.flags = SortFlags.CommonOpaque;

    renderContext.DrawRenderers(_cullResults.visibleRenderers, ref drawSettings, _filterSettings);
     
    if (camera.cameraType != CameraType.SceneView) {
      _command.EnableShaderKeyword("PROJECTION_PARAMS_X");
    } else {
      _command.DisableShaderKeyword("PROJECTION_PARAMS_X");
    }

    _command.GetTemporaryRT(_dummyID, width, height, 0, FilterMode.Point, RenderTextureFormat.ARGB32, RenderTextureReadWrite.Linear);
    _command.Blit(_cameraDepthTextureID, BuiltinRenderTextureType.CameraTarget, UtilityShader.material, (int)UtilityShader.Pass.DepthCopy);

    // There seems to be an issue everywhere we sample this:
    // _command.Blit(BuiltinRenderTextureType.CameraTarget, _dummyID);
    _command.Blit(_dummyID, _frameBufferID, UtilityShader.material, (int)UtilityShader.Pass.GrabCopy);
    _command.ReleaseTemporaryRT(_dummyID);
    renderContext.ExecuteCommandBuffer(_command);
    _command.Clear();

    Matrix4x4 clipToWorld = camera.cameraToWorldMatrix * GL.GetGPUProjectionMatrix(camera.projectionMatrix, false).inverse;

    _command.SetGlobalMatrix("ClipToWorld", clipToWorld);
    _command.SetGlobalMatrix("ClipToVoxel", vxgi.worldToVoxel * clipToWorld);
    _command.SetGlobalMatrix("WorldToVoxel", vxgi.worldToVoxel);
    _command.SetGlobalMatrix("VoxelToWorld", vxgi.voxelToWorld);

    bool depthNormalsNeeded = (camera.depthTextureMode & DepthTextureMode.DepthNormals) != DepthTextureMode.None;

    if (depthNormalsNeeded) {
      _command.GetTemporaryRT(_cameraDepthNormalsTextureID, width, height, 0, FilterMode.Point, RenderTextureFormat.ARGB32, RenderTextureReadWrite.Linear);
      _command.Blit(_cameraDepthTextureID, _cameraDepthNormalsTextureID, UtilityShader.material, (int)UtilityShader.Pass.EncodeDepthNormal);
    }

    renderContext.ExecuteCommandBuffer(_command);
    _command.Clear();

    _renderScale[2] = vxgi.diffuseResolutionScale;

    for (int i = 0; i < _lightingPasses.Length; i++) {
      _lightingPasses[i].Execute(renderContext, camera, _frameBufferID, _renderScale[i]);
    }

    RenderPostProcessing(renderContext, camera);

    _command.Blit(_frameBufferID, BuiltinRenderTextureType.CameraTarget);

    RenderPostProcessingDebug(renderContext, camera);

    if (depthNormalsNeeded) {
      _command.ReleaseTemporaryRT(_cameraDepthNormalsTextureID);
    }

    _command.ReleaseTemporaryRT(_cameraDepthTextureID);
    _command.ReleaseTemporaryRT(_cameraGBufferTexture0ID);
    _command.ReleaseTemporaryRT(_cameraGBufferTexture1ID);
    _command.ReleaseTemporaryRT(_cameraGBufferTexture2ID);
    _command.ReleaseTemporaryRT(_cameraGBufferTexture3ID);
    _command.ReleaseTemporaryRT(_frameBufferID);
    renderContext.ExecuteCommandBuffer(_command);

    // Deactivate stereo rendering
    if (camera.stereoEnabled) {
      renderContext.StopMultiEye(camera);
      renderContext.StereoEndRender(camera);
    }

    _command.Clear();
  }

  public void RenderMipmap(ScriptableRenderContext renderContext, Camera camera, VXGI vxgi) {
    var transform = Matrix4x4.TRS(vxgi.origin, Quaternion.identity, Vector3.one * vxgi.bound);

    _command.BeginSample(_command.name);

    if (vxgi.mipmapSampler == MipmapSampler.Point) {
      _command.EnableShaderKeyword("RADIANCE_POINT_SAMPLER");
    } else {
      _command.DisableShaderKeyword("RADIANCE_POINT_SAMPLER");
    }

    _command.SetGlobalFloat("MipmapLevel", Mathf.Min(vxgi.level, vxgi.radiances.Length));
    _command.SetGlobalFloat("TracingStep", vxgi.step);
    _command.DrawProcedural(transform, VisualizationShader.material, (int)VisualizationShader.Pass.Mipmap, MeshTopology.Quads, 24, 1);

    _command.EndSample(_command.name);

    renderContext.ExecuteCommandBuffer(_command);

    _command.Clear();
  }

  public void RenderPostProcessing(ScriptableRenderContext renderContext, Camera camera) {
    var layer = camera.GetComponent<PostProcessLayer>();

    if (layer == null || !layer.isActiveAndEnabled) return;

    _command.GetTemporaryRT(_dummyID, camera.pixelWidth, camera.pixelHeight, 0, FilterMode.Point, RenderTextureFormat.ARGBHalf, RenderTextureReadWrite.Linear);

    _postProcessRenderContext.Reset();
    _postProcessRenderContext.camera = camera;
    _postProcessRenderContext.command = _command;
    _postProcessRenderContext.destination = _frameBufferID;
    _postProcessRenderContext.source = _dummyID;
    _postProcessRenderContext.sourceFormat = RenderTextureFormat.ARGBHalf;

    if (layer.HasOpaqueOnlyEffects(_postProcessRenderContext)) {
      _command.Blit(_frameBufferID, _dummyID);
      layer.RenderOpaqueOnly(_postProcessRenderContext);
    }

    _command.Blit(_frameBufferID, _dummyID);
    layer.Render(_postProcessRenderContext);

    _command.ReleaseTemporaryRT(_dummyID);
    renderContext.ExecuteCommandBuffer(_command);
    _command.Clear();
  }

  public void RenderPostProcessingDebug(ScriptableRenderContext renderContext, Camera camera) {
    var postProcessDebug = camera.GetComponent<PostProcessDebug>();

    if (postProcessDebug == null) return;

    postProcessDebug.SendMessage("OnPostRender");

    foreach (var command in camera.GetCommandBuffers(CameraEvent.AfterImageEffects)) {
      renderContext.ExecuteCommandBuffer(command);
    }
  }
}


Note: _command.Blit(BuiltinRenderTextureType.CameraTarget, _dummyID); breaks things, so there is an issue somewhere here.

For debugging purposes I rerouted the blit around it.
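
One guess: in single-pass stereo the camera target is a double-wide texture, while Dummy is allocated at per-eye size, so the blit may be squashing both eyes into one texture. A speculative check would be to size the intermediate RT from the XR eye-texture descriptor:

// Speculative sketch: match the intermediate RT to the XR eye texture
// (double-wide in single-pass) instead of camera.pixelWidth/pixelHeight.
if (camera.stereoEnabled) {
  var desc = UnityEngine.XR.XRSettings.eyeTextureDesc;
  desc.depthBufferBits = 0;
  _command.GetTemporaryRT(_dummyID, desc, FilterMode.Point);
} else {
  _command.GetTemporaryRT(_dummyID, width, height, 0, FilterMode.Point, RenderTextureFormat.ARGB32, RenderTextureReadWrite.Linear);
}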

Then, in BlitSupport.hlsl:

#ifndef VXGI_BLIT_SUPPORT_HLSL
#define VXGI_BLIT_SUPPORT_HLSL

#if defined(UNITY_REVERSED_Z)
  #define DEPTH_TO_CLIP_Z(depth) depth
#else
  #define DEPTH_TO_CLIP_Z(depth) mad(2.0, depth, -1.0)
#endif

struct BlitInput
{
  float4 vertex : SV_POSITION;
  float2 uv : TEXCOORD;
  UNITY_VERTEX_OUTPUT_STEREO
};

BlitInput BlitVertex(appdata_base v)
{
  BlitInput o;
  UNITY_SETUP_INSTANCE_ID(v);                // required for the stereo macro below
  UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);  // initializes UNITY_VERTEX_OUTPUT_STEREO
  o.vertex = UnityObjectToClipPos(v.vertex);
  o.uv = UnityStereoTransformScreenSpaceTex(v.texcoord);
  return o;
}

#endif

Geometry-wise, everything is correct in the headset, but the GI calculation UVs seem off...

Excuse the rotation; it was the only way to show the issue.
[image]

If you look at the image you will notice that the GI is offset. It looks like the UVs of all the lighting passes are stretched across both eyes as a result of the code changes in the BlitSupport shader.
It looks like these calculations should not be split per eye, even in VR.

Also, the camera matrix (or something like it) is off, as there is clipping below the horizon.

[image]

To give you an idea of what that UnityStereoTransformScreenSpaceTex shader change does, see below.

Without the change:
[image]

With the change:
[image]

As you may notice, it ensures the correct UV adjustment across the eyes.

ddutchie (Author) commented Aug 7, 2019

Looking at my post, I see that the GI is already wrong before the change.

The matrix must be the issue here. Fixing that means we have it working; that, and the issue stated above regarding command-buffer blitting of BuiltinRenderTextureType.CameraTarget.

This is hard to debug, as it works fine in the editor without VR.

Looooong (Owner) commented Aug 7, 2019

I suppose it is because I didn't use Unity's built-in helper ComputeScreenPos to calculate the screen-space position. This helper takes stereo rendering into account if it is enabled. You can edit the screen-space position calculation in ShaderLibrary/BlitSupport.hlsl. The screen-space position is stored in BlitInput#uv, so you will probably have to change all the shader code that uses this variable.
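
For reference, the usual ComputeScreenPos pattern looks like this (a sketch; field names are illustrative):

// Vertex stage: ComputeScreenPos is stereo-aware in single-pass mode.
o.vertex = UnityObjectToClipPos(v.vertex);
o.screenPos = ComputeScreenPos(o.vertex); // float4; the divide is deferred

// Fragment stage: perform the perspective divide before sampling.
float2 uv = i.screenPos.xy / i.screenPos.w;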

ddutchie (Author) commented Aug 7, 2019

Made some more progress.

The mipmap visualization is correctly sampling positions in the world.
[image]

It seems the sampling of the voxels is not centered in VR.
[image]

ddutchie (Author) commented Aug 7, 2019

And the shadows are off as well; I'm not sure where in the rendering this happens. Slowly working through the project.

ddutchie (Author) commented Aug 7, 2019

My experiments are here:
https://github.com/ddutchie/Unity-SRP-VXGI

Looooong (Owner) commented Aug 8, 2019

I calculate the world-space position from the screen UV here:

float4 worldPosition = mul(ClipToWorld, float4(mad(2.0, i.uv, -1.0), DEPTH_TO_CLIP_Z(depth), 1.0));
data.worldPosition = worldPosition.xyz / worldPosition.w;

If the UV format changes, the world-space position is affected as well. I suggest you follow the Post-processing effects section, which uses UnityStereoScreenSpaceUVAdjust in the fragment shader instead of the vertex shader. For the rest of the code, the UV value should not be changed, to ensure that the correct pixel is sampled from the screen-space textures.
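
In other words, something along these lines (a sketch; _MainTex_ST stands in for the sampled texture's scale-offset):

// Vertex stage: pass the raw UV through with no stereo transform.
o.uv = v.texcoord;

// Fragment stage: adjust only where a screen-space texture is sampled.
float2 screenUV = UnityStereoScreenSpaceUVAdjust(i.uv, _MainTex_ST);
return tex2D(_MainTex, screenUV);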

I haven't tested this yet; these are just my theories. I'm going to be busy this weekend, so I probably won't have much time to support you. If you need help, try the Frame Debugger, or write shader code to visualize the VR results.

ddutchie (Author) commented Aug 8, 2019

Those theories proved correct.

Making some changes there has fixed the left eye completely.

Thanks for the direction. Now to fix the right eye, and to find out why blitting the default texture causes issues in VR.

So strange, as I have screen-space effects using this everywhere in another project.

ddutchie (Author) commented Aug 8, 2019

Scratch that; fixed. Now I'm just trying to track down the matrix issues with the right eye. It's only happening in the lighting shader, so it's the UV being passed there that is incorrect.

The left eye looks delicious!

I will push the changes to my branch. Thanks for the help. Hopefully I'll have some time this weekend to track it down.
