GPU Ray Tracing in One Weekend by Unity 2019.3

Version 1.0, 2019-Oct-23
Copyright 2019. ZHing. All rights reserved.

1. Overview

This article is based on "Ray Tracing in One Weekend" from https://raytracing.github.io and shows how to implement ray tracing with Unity 2019.3, the Scriptable Render Pipeline (SRP), and DXR (DirectX Raytracing). You should therefore read "Ray Tracing in One Weekend" first; algorithms the book already explains are not re-explained here. Throughout this article, "the original" refers to "Ray Tracing in One Weekend".

The focus of this article is reproducing the original's ray-traced renders in Unity 2019.3.

[image]

Because everything runs on GPU-accelerated DXR, every example here renders far faster than the original, but DXR-capable hardware and operating system are required to run them.

In addition, the DXR integration in Unity 2019.3 does not support Intersection Shaders, so procedural geometry cannot be used as in the original; meshes are used instead.

1.1. Setting up the working environment

The implementation described here is based on the DXR integration in Unity 2019.3, so the Unity project needs some basic configuration.

[image]

Set the Graphics API to Direct3D12 and move it to the first position in the list.

1.2. Invoking DXR through the SRP

Because the SRP and the SIMD mathematics library are used, at least the following Unity packages must be imported.

[image] [image]

1.3. The SRP framework used in this article

All examples are built on a minimal SRP framework; see the project source code for details. Only a brief overview is given here.

RayTracingRenderPipelineAsset inherits from RenderPipelineAsset, one of the SRP core classes, and creates the RayTracingRenderPipeline.

RayTracingRenderPipeline inherits from RenderPipeline, the other SRP core class. Its Render function is the entry point of the whole rendering flow.

RayTracingTutorialAsset is the base class for each example's asset. It inherits from ScriptableObject and holds the shader references the example needs.

RayTracingTutorial is the base class for the examples themselves. It is driven by RayTracingRenderPipeline to perform each example's rendering.

2. Outputting an Image

Tutorial class: OutputColorTutorial

Scene file: 1_OutputColorTutorialScene

Outputting an image with ray tracing: the "Hello World" of ray tracing.

2.1. Creating a RayTraceShader in Unity

[image]

#pragma max_recursion_depth 1

RWTexture2D<float4> _OutputTarget;

[shader("raygeneration")]
void OutputColorRayGenShader()
{
  uint2 dispatchIdx = DispatchRaysIndex().xy;
  uint2 dispatchDim = DispatchRaysDimensions().xy;

  _OutputTarget[dispatchIdx] = float4((float)dispatchIdx.x / dispatchDim.x, (float)dispatchIdx.y / dispatchDim.y, 0.2f, 1.0f);
}

DispatchRaysIndex().xy returns the position of the current pixel.

DispatchRaysDimensions().xy returns the dimensions of the current render target.

The code above ramps each pixel's R channel from 0 to 1 left to right and its G channel from 0 to 1 bottom to top, while the B channel stays at 0.2 everywhere.

2.2. C# code in the SRP pipeline

var outputTarget = RequireOutputTarget(camera);

var cmd = CommandBufferPool.Get(typeof(OutputColorTutorial).Name);
try
{
  using (new ProfilingSample(cmd, "RayTracing"))
  {
    cmd.SetRayTracingTextureParam(_shader, _outputTargetShaderId, outputTarget);
    cmd.DispatchRays(_shader, "OutputColorRayGenShader", (uint) outputTarget.rt.width, (uint) outputTarget.rt.height, 1, camera);
  }
  context.ExecuteCommandBuffer(cmd);

  using (new ProfilingSample(cmd, "FinalBlit"))
  {
    cmd.Blit(outputTarget, BuiltinRenderTextureType.CameraTarget, Vector2.one, Vector2.zero);
  }
  context.ExecuteCommandBuffer(cmd);
}
finally
{
  CommandBufferPool.Release(cmd);
}

RequireOutputTarget returns the render target appropriate for the camera about to be rendered; see the accompanying project source for the full implementation.

cmd.SetRayTracingTextureParam binds the render target to the RayTrace Shader above; _shader is the RayTracingShader object.

The parameter _outputTargetShaderId is obtained via Shader.PropertyToID("_OutputTarget"); every shader ID used later in this article is obtained the same way, so this will not be repeated.

cmd.DispatchRays invokes OutputColorRayGenShader, the raygeneration function in the RayTrace Shader, to perform the ray tracing.

Because a RayTrace Shader can only render into a RenderTarget, the result is not yet visible on screen; a final Blit draws it to the screen. Every later example ends with the same step, so it will not be repeated either.

2.3. Final output

[image]

3. Outputting a Background

Tutorial class: BackgroundTutorial

Scene file: 2_BackgroundTutorialScene

Outputting a gradient background with ray tracing.

3.1. Creating a RayTraceShader in Unity

inline void GenerateCameraRay(out float3 origin, out float3 direction)
{
  // center in the middle of the pixel.
  float2 xy = DispatchRaysIndex().xy + 0.5f;
  float2 screenPos = xy / DispatchRaysDimensions().xy * 2.0f - 1.0f;

  // Unproject the pixel coordinate into a ray.
  float4 world = mul(_InvCameraViewProj, float4(screenPos, 0, 1));

  world.xyz /= world.w;
  origin = _WorldSpaceCameraPos.xyz;
  direction = normalize(world.xyz - origin);
}

inline float3 Color(float3 origin, float3 direction)
{
  float t = 0.5f * (direction.y + 1.0f);
  return (1.0f - t) * float3(1.0f, 1.0f, 1.0f) + t * float3(0.5f, 0.7f, 1.0f);
}

[shader("raygeneration")]
void BackgroundRayGenShader()
{
  const uint2 dispatchIdx = DispatchRaysIndex().xy;

  float3 origin;
  float3 direction;
  GenerateCameraRay(origin, direction);

  _OutputTarget[dispatchIdx] = float4(Color(origin, direction), 1.0f);
}

GenerateCameraRay builds a ray's origin and direction for the current pixel; this differs slightly from the original.

First, screenPos, the pixel's position in projection space, is computed. It is then transformed into world space with the _InvCameraViewProj matrix prepared in C# to obtain the ray direction, direction. The origin is taken directly from _WorldSpaceCameraPos, also passed in from C#.

The Color function is identical to the original and produces the top-to-bottom gradient.

3.2. C# code

Setting the camera parameters in C#:

Shader.SetGlobalVector(CameraShaderParams._WorldSpaceCameraPos, camera.transform.position);
var projMatrix = GL.GetGPUProjectionMatrix(camera.projectionMatrix, false);
var viewMatrix = camera.worldToCameraMatrix;
var viewProjMatrix = projMatrix * viewMatrix;
var invViewProjMatrix = Matrix4x4.Inverse(viewProjMatrix);
Shader.SetGlobalMatrix(CameraShaderParams._InvCameraViewProj, invViewProjMatrix);

This sets _WorldSpaceCameraPos and _InvCameraViewProj.

The C# code in the SRP pipeline is as follows.

var outputTarget = RequireOutputTarget(camera);

var cmd = CommandBufferPool.Get(typeof(BackgroundTutorial).Name);
try
{
  using (new ProfilingSample(cmd, "RayTracing"))
  {
    cmd.SetRayTracingTextureParam(_shader, _outputTargetShaderId, outputTarget);
    cmd.DispatchRays(_shader, "BackgroundRayGenShader", (uint) outputTarget.rt.width, (uint) outputTarget.rt.height, 1, camera);
  }
  context.ExecuteCommandBuffer(cmd);

  using (new ProfilingSample(cmd, "FinalBlit"))
  {
    cmd.Blit(outputTarget, BuiltinRenderTextureType.CameraTarget, Vector2.one, Vector2.zero);
  }
  context.ExecuteCommandBuffer(cmd);
}
finally
{
  CommandBufferPool.Release(cmd);
}

The only difference from the previous example is the RayTrace Shader entry point being dispatched.

3.3. Final output

[image]

4. Rendering a Sphere

Tutorial class: AddASphereTutorial

Scene file: 3_AddASphereTutorialScene

Drawing a sphere with ray tracing. Note that because the DXR integration in Unity does not support Intersection Shaders, procedural geometry cannot be used as in the original. A sphere mesh is used instead; the corresponding FBX file is included in the project source.

4.1. Creating a RayTraceShader in Unity

struct RayIntersection
{
  float4 color;
};

inline float3 BackgroundColor(float3 origin, float3 direction)
{
  float t = 0.5f * (direction.y + 1.0f);
  return (1.0f - t) * float3(1.0f, 1.0f, 1.0f) + t * float3(0.5f, 0.7f, 1.0f);
}

[shader("raygeneration")]
void AddASphereRayGenShader()
{
  const uint2 dispatchIdx = DispatchRaysIndex().xy;

  float3 origin;
  float3 direction;
  GenerateCameraRay(origin, direction);

  RayDesc rayDescriptor;
  rayDescriptor.Origin = origin;
  rayDescriptor.Direction = direction;
  rayDescriptor.TMin = 1e-5f;
  rayDescriptor.TMax = _CameraFarDistance;

  RayIntersection rayIntersection;
  rayIntersection.color = float4(0.0f, 0.0f, 0.0f, 0.0f);

  TraceRay(_AccelerationStructure, RAY_FLAG_CULL_BACK_FACING_TRIANGLES, 0xFF, 0, 1, 0, rayDescriptor, rayIntersection);

  _OutputTarget[dispatchIdx] = rayIntersection.color;
}

[shader("miss")]
void MissShader(inout RayIntersection rayIntersection : SV_RayPayload)
{
  float3 origin = WorldRayOrigin();
  float3 direction = WorldRayDirection();
  rayIntersection.color = float4(BackgroundColor(origin, direction), 1.0f);
}

The raygeneration shader calls TraceRay to cast a ray; refer to the Microsoft DXR documentation for the full signature. Only the RayDesc and RayIntersection structures are explained here. RayDesc is the DXR structure that describes a ray: Origin is the ray's starting point, Direction its direction, TMin the minimum value of t, and TMax the maximum value of t (here the camera's far clip distance). RayIntersection is a user-defined ray payload structure that carries data through the ray tracing process; the color field defined here returns the traced color.
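For reference, the arguments in the call above map onto the DXR signature as follows (an annotated restatement of the same call, not new code; see the DXR specification for the precise semantics):

TraceRay(
  _AccelerationStructure,               // the scene acceleration structure to trace against
  RAY_FLAG_CULL_BACK_FACING_TRIANGLES,  // ray flags: skip back-facing triangles
  0xFF,                                 // instance inclusion mask: test every instance
  0,                                    // ray contribution to hit group index
  1,                                    // geometry multiplier for hit group index
  0,                                    // miss shader index: selects MissShader below
  rayDescriptor,                        // the RayDesc built above
  rayIntersection);                     // the RayIntersection payload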

A miss shader is added here. The miss shader runs when no hit is detected for the ray; it calls BackgroundColor to return the gradient background from section 3.

4.2. Creating the sphere shader

Create an ordinary shader file in Unity and append the following code at its end.

SubShader
{
  Pass
  {
    Name "RayTracing"
    Tags { "LightMode" = "RayTracing" }

    HLSLPROGRAM

    #pragma raytracing test

    struct RayIntersection
    {
      float4 color;
    };

    CBUFFER_START(UnityPerMaterial)
    float4 _Color;
    CBUFFER_END

    [shader("closesthit")]
    void ClosestHitShader(inout RayIntersection rayIntersection : SV_RayPayload, AttributeData attributeData : SV_IntersectionAttributes)
    {
      rayIntersection.color = _Color;
    }

    ENDHLSL
  }
}

This defines another SubShader containing a Pass named RayTracing. The Pass name is arbitrary; the C# code shown later selects the Pass by this name.

#pragma raytracing test marks this as a ray tracing pass.

The closesthit shader ClosestHitShader takes two parameters: rayIntersection is the ray payload passed along, and attributeData holds data about the hit. This example does not use the hit data and simply returns a color. Note: a RayTracing Pass must contain exactly one closesthit shader; Unity 2019.3 does not yet support multiple hit groups. For hit groups, see the Microsoft DXR documentation.

With the shader created, build a corresponding Material, create a sphere in the scene, and assign the Material to it.

4.3. C# code

The camera parameters are set in C# exactly as in section 3.

The C# code in the SRP pipeline is as follows.

var outputTarget = RequireOutputTarget(camera);

var accelerationStructure = _pipeline.RequestAccelerationStructure();

var cmd = CommandBufferPool.Get(typeof(AddASphereTutorial).Name);
try
{
  using (new ProfilingSample(cmd, "RayTracing"))
  {
    cmd.SetRayTracingShaderPass(_shader, "RayTracing");
    cmd.SetRayTracingAccelerationStructure(_shader, _pipeline.accelerationStructureShaderId, accelerationStructure);
    cmd.SetRayTracingTextureParam(_shader, _outputTargetShaderId, outputTarget);
    cmd.DispatchRays(_shader, "AddASphereRayGenShader", (uint) outputTarget.rt.width, (uint) outputTarget.rt.height, 1, camera);
  }

  context.ExecuteCommandBuffer(cmd);

  using (new ProfilingSample(cmd, "FinalBlit"))
  {
    cmd.Blit(outputTarget, BuiltinRenderTextureType.CameraTarget, Vector2.one, Vector2.zero);
  }

  context.ExecuteCommandBuffer(cmd);
}
finally
{
  CommandBufferPool.Release(cmd);
}

This example ray-traces against the sphere, so an acceleration structure is required. RequestAccelerationStructure returns the acceleration structure object, and SetRayTracingAccelerationStructure passes it as a parameter to the RayTrace Shader. For acceleration structures in Unity, see: Unity Scripting API - RayTracingAccelerationStructure

SetRayTracingShaderPass selects the pass that ray tracing will execute, i.e. the RayTracing Pass defined in section 4.2.

4.4. Final output

[image]

5. Outputting Normals

Tutorial class: AddASphereTutorial

Scene file: 4_SurfaceNormalTutorialScene

This example is a small modification of the section 4 example: only the object's own shader changes.

5.1. Creating the sphere shader

#include "UnityRaytracingMeshUtils.cginc"

struct RayIntersection
{
  float4 color;
};

struct IntersectionVertex
{
  // Object space normal of the vertex
  float3 normalOS;
};

void FetchIntersectionVertex(uint vertexIndex, out IntersectionVertex outVertex)
{
  outVertex.normalOS = UnityRayTracingFetchVertexAttribute3(vertexIndex, kVertexAttributeNormal);
}

[shader("closesthit")]
void ClosestHitShader(inout RayIntersection rayIntersection : SV_RayPayload, AttributeData attributeData : SV_IntersectionAttributes)
{
  // Fetch the indices of the current triangle
  uint3 triangleIndices = UnityRayTracingFetchTriangleIndices(PrimitiveIndex());

  // Fetch the 3 vertices
  IntersectionVertex v0, v1, v2;
  FetchIntersectionVertex(triangleIndices.x, v0);
  FetchIntersectionVertex(triangleIndices.y, v1);
  FetchIntersectionVertex(triangleIndices.z, v2);

  // Compute the full barycentric coordinates
  float3 barycentricCoordinates = float3(1.0 - attributeData.barycentrics.x - attributeData.barycentrics.y, attributeData.barycentrics.x, attributeData.barycentrics.y);

  float3 normalOS = INTERPOLATE_RAYTRACING_ATTRIBUTE(v0.normalOS, v1.normalOS, v2.normalOS, barycentricCoordinates);
  float3x3 objectToWorld = (float3x3)ObjectToWorld3x4();
  float3 normalWS = normalize(mul(objectToWorld, normalOS));

  rayIntersection.color = float4(0.5f * (normalWS + 1.0f), 0);
}

UnityRayTracingFetchTriangleIndices is a Unity utility function that, given the primitive index returned by PrimitiveIndex(), fetches the vertex indices of the triangle the ray hit.

The IntersectionVertex structure defines the per-vertex data we care about for the hit triangle.

FetchIntersectionVertex fills in the IntersectionVertex data. Internally it calls UnityRayTracingFetchVertexAttribute3, a Unity utility function that fetches the corresponding vertex attribute; this example fetches the object-space vertex normal.

INTERPOLATE_RAYTRACING_ATTRIBUTE interpolates the attributes of the triangle's three vertices to obtain the value at the hit point.
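Conceptually the macro is just a barycentric weighted sum; a minimal sketch of an equivalent definition (not necessarily Unity's exact source) is:

// Weighted sum of the three vertex attributes by the barycentric coordinates.
#define INTERPOLATE_RAYTRACING_ATTRIBUTE(A0, A1, A2, BARY) \
  ((A0) * (BARY).x + (A1) * (BARY).y + (A2) * (BARY).z)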

ObjectToWorld3x4 is a DXR built-in function returning the transform used here to take the normal from object space to world space.

Finally, the resulting world-space normalWS is written out as the color.

5.2. Final output

[image]

6. Antialiasing

Tutorial class: AntialiasingTutorial

Scene file: 5_AntialiasingTutorialScene

Zooming into the final output of section 5 reveals severe aliasing.

[image]

The original fixes this by taking multiple samples within each pixel and averaging them. Doing the same in a single DXR frame would greatly increase the time to draw the frame and cause stuttering, so this example accumulates an average of samples across frames instead: each frame traces one randomly jittered sample per pixel, and the running average over frames converges to the same multi-sample result.

6.1. Creating a RayTraceShader in Unity

struct RayIntersection
{
  uint4 PRNGStates;
  float4 color;
};

inline void GenerateCameraRayWithOffset(out float3 origin, out float3 direction, float2 offset)
{
  float2 xy = DispatchRaysIndex().xy + offset;
  float2 screenPos = xy / DispatchRaysDimensions().xy * 2.0f - 1.0f;

  // Unproject the pixel coordinate into a ray.
  float4 world = mul(_InvCameraViewProj, float4(screenPos, 0, 1));

  world.xyz /= world.w;
  origin = _WorldSpaceCameraPos.xyz;
  direction = normalize(world.xyz - origin);
}

[shader("raygeneration")]
void AntialiasingRayGenShader()
{
  const uint2 dispatchIdx = DispatchRaysIndex().xy;
  const uint PRNGIndex = dispatchIdx.y * (int)_OutputTargetSize.x + dispatchIdx.x;
  uint4 PRNGStates = _PRNGStates[PRNGIndex];

  float4 finalColor = float4(0, 0, 0, 0);
  {
    float3 origin;
    float3 direction;
    float2 offset = float2(GetRandomValue(PRNGStates), GetRandomValue(PRNGStates));
    GenerateCameraRayWithOffset(origin, direction, offset);

    RayDesc rayDescriptor;
    rayDescriptor.Origin = origin;
    rayDescriptor.Direction = direction;
    rayDescriptor.TMin = 1e-5f;
    rayDescriptor.TMax = _CameraFarDistance;

    RayIntersection rayIntersection;
    rayIntersection.PRNGStates = PRNGStates;
    rayIntersection.color = float4(0.0f, 0.0f, 0.0f, 0.0f);

    TraceRay(_AccelerationStructure, RAY_FLAG_CULL_BACK_FACING_TRIANGLES, 0xFF, 0, 1, 0, rayDescriptor, rayIntersection);
    PRNGStates = rayIntersection.PRNGStates;
    finalColor += rayIntersection.color;
  }

  _PRNGStates[PRNGIndex] = PRNGStates;
  if (_FrameIndex > 1)
  {
    float a = 1.0f / (float)_FrameIndex;
    finalColor = _OutputTarget[dispatchIdx] * (1.0f - a) + finalColor * a;
  }

  _OutputTarget[dispatchIdx] = finalColor;
}

GenerateCameraRayWithOffset applies an offset to the ray generated for this pixel; the offset values come from GetRandomValue.

GetRandomValue uses the random number generator from "Chapter 37. Efficient Random Number Generation and Application Using CUDA" in GPU Gems 3; see that chapter and this project's source for the full implementation. Note in particular that the GPU traces many pixels in parallel, so each pixel needs its own independent generator state. This is why the RayIntersection structure gains a PRNGStates field that carries the generator state. On the C# side, the RequirePRNGStates function in RayTracingRenderPipeline initializes the states; see the source for the details.
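As a reference, a minimal HLSL sketch of how GetRandomValue can be implemented with that hybrid generator (three Tausworthe steps combined with one LCG step, per GPU Gems 3; the project's actual implementation may differ in details):

// One Tausworthe step: advances one 32-bit state component in place.
uint TausStep(inout uint z, int S1, int S2, int S3, uint M)
{
  uint b = (((z << S1) ^ z) >> S2);
  z = (((z & M) << S3) ^ b);
  return z;
}

// One linear congruential step on the fourth state component.
uint LCGStep(inout uint z, uint A, uint C)
{
  z = A * z + C;
  return z;
}

// Combines the four sub-generators and scales the result to a float in [0, 1).
float GetRandomValue(inout uint4 states)
{
  return 2.3283064365387e-10f * (float)(
    TausStep(states.x, 13, 19, 12, 4294967294u) ^
    TausStep(states.y, 2, 25, 4, 4294967288u) ^
    TausStep(states.z, 3, 11, 17, 4294967280u) ^
    LCGStep(states.w, 1664525u, 1013904223u));
}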

_FrameIndex is the index of the frame currently being rendered. When it is greater than 1, the current frame is blended into the accumulated average of the previous frames (with weight a = 1/_FrameIndex, which keeps a running mean over all frames so far); otherwise the current frame is written to the render target directly.

6.2. C# code

var outputTarget = RequireOutputTarget(camera);
var outputTargetSize = RequireOutputTargetSize(camera);

var accelerationStructure = _pipeline.RequestAccelerationStructure();
var PRNGStates = _pipeline.RequirePRNGStates(camera);

var cmd = CommandBufferPool.Get(typeof(AntialiasingTutorial).Name);
try
{
  if (_frameIndex < 1000)
  {
    using (new ProfilingSample(cmd, "RayTracing"))
    {
      cmd.SetRayTracingShaderPass(_shader, "RayTracing");
      cmd.SetRayTracingAccelerationStructure(_shader, _pipeline.accelerationStructureShaderId,
        accelerationStructure);
      cmd.SetRayTracingIntParam(_shader, _frameIndexShaderId, _frameIndex);
      cmd.SetRayTracingBufferParam(_shader, _PRNGStatesShaderId, PRNGStates);
      cmd.SetRayTracingTextureParam(_shader, _outputTargetShaderId, outputTarget);
      cmd.SetRayTracingVectorParam(_shader, _outputTargetSizeShaderId, outputTargetSize);
      cmd.DispatchRays(_shader, "AntialiasingRayGenShader", (uint) outputTarget.rt.width,
        (uint) outputTarget.rt.height, 1, camera);
    }

    context.ExecuteCommandBuffer(cmd);
    if (camera.cameraType == CameraType.Game)
      _frameIndex++;
  }

  using (new ProfilingSample(cmd, "FinalBlit"))
  {
    cmd.Blit(outputTarget, BuiltinRenderTextureType.CameraTarget, Vector2.one, Vector2.zero);
  }

  context.ExecuteCommandBuffer(cmd);
}
finally
{
  CommandBufferPool.Release(cmd);
}

RequireOutputTargetSize returns the size of the current render target.

RequirePRNGStates returns the buffer of random number generator states.

_frameIndex is the index of the current frame; rendering stops once 1000 frames have accumulated.

6.3. Final output

[image]

7. Diffuse Materials

Tutorial class: AntialiasingTutorial

Scene file: 6_DiffuseTutorialScene

See the original for the diffuse implementation.

7.1. RayTrace Shader

The code is largely the same as the previous section. The difference is that the ray payload gains a remainingDepth field indicating how many levels of recursion the ray has left; the maximum is set by MAX_DEPTH. Note: MAX_DEPTH must be less than the value declared by max_recursion_depth minus 1.

RayIntersection rayIntersection;
rayIntersection.remainingDepth = MAX_DEPTH - 1;
rayIntersection.PRNGStates = PRNGStates;
rayIntersection.color = float4(0.0f, 0.0f, 0.0f, 0.0f);

TraceRay(_AccelerationStructure, RAY_FLAG_CULL_BACK_FACING_TRIANGLES, 0xFF, 0, 1, 0, rayDescriptor, rayIntersection);
PRNGStates = rayIntersection.PRNGStates;
finalColor += rayIntersection.color;
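The extended payload then looks roughly like this (a sketch; the MAX_DEPTH value is hypothetical and must satisfy the max_recursion_depth constraint noted above):

#define MAX_DEPTH 4  // hypothetical value; must stay within the max_recursion_depth limit

struct RayIntersection
{
  int remainingDepth;  // recursion levels this ray may still spawn
  uint4 PRNGStates;    // per-pixel random number generator state
  float4 color;        // traced result color
};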

7.2. Creating the object shader and its Material

The ClosestHitShader code is as follows.

[shader("closesthit")]
void ClosestHitShader(inout RayIntersection rayIntersection : SV_RayPayload, AttributeData attributeData : SV_IntersectionAttributes)
{
  // Fetch the indices of the current triangle
  // Fetch the 3 vertices
  // Compute the full barycentric coordinates
  // Get normal in world space.
  ...
  float3 normalWS = normalize(mul(objectToWorld, normalOS));

  float4 color = float4(0, 0, 0, 1);
  if (rayIntersection.remainingDepth > 0)
  {
    // Get position in world space.
    float3 origin = WorldRayOrigin();
    float3 direction = WorldRayDirection();
    float t = RayTCurrent();
    float3 positionWS = origin + direction * t;

    // Make reflection ray.
    RayDesc rayDescriptor;
    rayDescriptor.Origin = positionWS + 0.001f * normalWS;
    rayDescriptor.Direction = normalize(normalWS + GetRandomOnUnitSphere(rayIntersection.PRNGStates));
    rayDescriptor.TMin = 1e-5f;
    rayDescriptor.TMax = _CameraFarDistance;

    // Tracing reflection.
    RayIntersection reflectionRayIntersection;
    reflectionRayIntersection.remainingDepth = rayIntersection.remainingDepth - 1;
    reflectionRayIntersection.PRNGStates = rayIntersection.PRNGStates;
    reflectionRayIntersection.color = float4(0.0f, 0.0f, 0.0f, 0.0f);

    TraceRay(_AccelerationStructure, RAY_FLAG_CULL_BACK_FACING_TRIANGLES, 0xFF, 0, 1, 0, rayDescriptor, reflectionRayIntersection);

    rayIntersection.PRNGStates = reflectionRayIntersection.PRNGStates;
    color = reflectionRayIntersection.color;
  }

  rayIntersection.color = _Color * 0.5f * color;
}

normalWS is computed exactly as in the previous section.

When rayIntersection.remainingDepth is greater than 0, the diffuse scattering from the original is applied and TraceRay is called again, recursing one level deeper.

GetRandomOnUnitSphere returns a uniformly distributed random vector on the unit sphere.
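A minimal sketch of one standard construction, sampling the height and the azimuth uniformly (the project source may use a different method):

// Uniform random direction on the unit sphere.
float3 GetRandomOnUnitSphere(inout uint4 states)
{
  float z = 2.0f * GetRandomValue(states) - 1.0f;          // height, uniform in [-1, 1]
  float phi = 2.0f * 3.14159265f * GetRandomValue(states); // azimuth in [0, 2*pi)
  float r = sqrt(max(0.0f, 1.0f - z * z));                 // radius of the circle at that height
  return float3(r * cos(phi), r * sin(phi), z);
}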

7.3. Final output

[image]

8. Dielectric Materials

Tutorial class: AntialiasingTutorial

Scene file: 8_DielectricsTutorialScene

The original achieves the hollow glass bubble effect with a negative-radius sphere. Because Intersection Shaders are unavailable in Unity, this example instead adds a new object shader, DielectricsInv, which flips the normal to achieve the same effect.

8.1. The object's ClosestHitShader

inline float schlick(float cosine, float IOR)
{
  float r0 = (1.0f - IOR) / (1.0f + IOR);
  r0 = r0 * r0;
  return r0 + (1.0f - r0) * pow((1.0f - cosine), 5.0f);
}

[shader("closesthit")]
void ClosestHitShader(inout RayIntersection rayIntersection : SV_RayPayload, AttributeData attributeData : SV_IntersectionAttributes)
{
  // Fetch the indices of the current triangle
  // Fetch the 3 vertices
  // Compute the full barycentric coordinates
  // Get normal in world space.
  ...
  float3 normalWS = normalize(mul(objectToWorld, normalOS));

  float4 color = float4(0, 0, 0, 1);
  if (rayIntersection.remainingDepth > 0)
  {
    // Get position in world space.
    ...
    float3 positionWS = origin + direction * t;

    // Make reflection & refraction ray.
    float3 outwardNormal;
    float niOverNt;
    float reflectProb;
    float cosine;
    if (dot(-direction, normalWS) > 0.0f)
    {
      // Ray is entering the medium.
      outwardNormal = normalWS;
      niOverNt = 1.0f / _IOR;
      cosine = dot(-direction, normalWS);
    }
    else
    {
      // Ray is leaving the medium; this branch carries the IOR factor in the original's Schlick setup.
      outwardNormal = -normalWS;
      niOverNt = _IOR;
      cosine = _IOR * dot(direction, normalWS);
    }
    reflectProb = schlick(cosine, _IOR);

    float3 scatteredDir;
    if (GetRandomValue(rayIntersection.PRNGStates) < reflectProb)
      scatteredDir = reflect(direction, normalWS);
    else
      scatteredDir = refract(direction, outwardNormal, niOverNt);

    RayDesc rayDescriptor;
    rayDescriptor.Origin = positionWS + 1e-5f * scatteredDir;
    rayDescriptor.Direction = scatteredDir;
    rayDescriptor.TMin = 1e-5f;
    rayDescriptor.TMax = _CameraFarDistance;

    // Tracing reflection or refraction.
    RayIntersection reflectionRayIntersection;
    reflectionRayIntersection.remainingDepth = rayIntersection.remainingDepth - 1;
    reflectionRayIntersection.PRNGStates = rayIntersection.PRNGStates;
    reflectionRayIntersection.color = float4(0.0f, 0.0f, 0.0f, 0.0f);

    TraceRay(_AccelerationStructure, RAY_FLAG_NONE, 0xFF, 0, 1, 0, rayDescriptor, reflectionRayIntersection);

    rayIntersection.PRNGStates = reflectionRayIntersection.PRNGStates;
    color = reflectionRayIntersection.color;
  }

  rayIntersection.color = _Color * color;
}

_IOR is the material's index of refraction.

The difference from the diffuse material lies in computing the reflected and refracted rays; see the original for the algorithm, which is not repeated here.

The TraceRay call here passes RAY_FLAG_NONE as the second argument: once a ray refracts into the object's interior, it must be intersected against the back faces of triangles, so RAY_FLAG_CULL_BACK_FACING_TRIANGLES can no longer be used.

8.2. Final output

[image]

9. Defocus Blur

Tutorial class: CameraTutorial

Scene file: 9_CameraTutorialScene

See the original for the algorithm; it is not repeated here. Only the DXR implementation is described.

9.1. C# code

The FocusCamera class extends Unity's Camera, adding focusDistance and aperture parameters.

thisCamera = GetComponent<Camera>();
var theta = thisCamera.fieldOfView * Mathf.Deg2Rad;
var halfHeight = math.tan(theta * 0.5f);
var halfWidth = thisCamera.aspect * halfHeight;
leftBottomCorner = transform.position + transform.forward * focusDistance -
                   transform.right * focusDistance * halfWidth -
                   transform.up * focusDistance * halfHeight;
size = new Vector2(focusDistance * halfWidth * 2.0f, focusDistance * halfHeight * 2.0f);

leftBottomCorner is the world-space position of the bottom-left corner of the camera's film plane.

size is the world-space size of the camera's film plane.

cmd.SetRayTracingVectorParam(_shader, FocusCameraShaderParams._FocusCameraLeftBottomCorner, focusCamera.leftBottomCorner);
cmd.SetRayTracingVectorParam(_shader, FocusCameraShaderParams._FocusCameraRight, focusCamera.transform.right);
cmd.SetRayTracingVectorParam(_shader, FocusCameraShaderParams._FocusCameraUp, focusCamera.transform.up);
cmd.SetRayTracingVectorParam(_shader, FocusCameraShaderParams._FocusCameraSize, focusCamera.size);
cmd.SetRayTracingFloatParam(_shader, FocusCameraShaderParams._FocusCameraHalfAperture, focusCamera.aperture * 0.5f);

cmd.SetRayTracingShaderPass(_shader, "RayTracing");
cmd.SetRayTracingAccelerationStructure(_shader, _pipeline.accelerationStructureShaderId,
  accelerationStructure);
cmd.SetRayTracingIntParam(_shader, _frameIndexShaderId, _frameIndex);
cmd.SetRayTracingBufferParam(_shader, _PRNGStatesShaderId, PRNGStates);
cmd.SetRayTracingTextureParam(_shader, _outputTargetShaderId, outputTarget);
cmd.SetRayTracingVectorParam(_shader, _outputTargetSizeShaderId, outputTargetSize);
cmd.DispatchRays(_shader, "CameraRayGenShader", (uint) outputTarget.rt.width,
  (uint) outputTarget.rt.height, 1, camera);

_FocusCameraLeftBottomCorner passes the world-space bottom-left corner of the camera's film plane to the RayTrace Shader.

_FocusCameraRight and _FocusCameraUp pass the camera's world-space right and up vectors to the RayTrace Shader.

_FocusCameraSize passes the world-space size of the camera's film plane to the RayTrace Shader.

_FocusCameraHalfAperture passes half of the aperture to the RayTrace Shader.

9.2. RayTrace Shader

The RayTrace Shader is essentially the same as in the previous sections; GenerateCameraRayWithOffset is simply replaced by GenerateFocusCameraRayWithOffset.

inline void GenerateFocusCameraRayWithOffset(out float3 origin, out float3 direction, float2 apertureOffset, float2 offset)
{
  float2 xy = DispatchRaysIndex().xy + offset;
  float2 uv = xy / DispatchRaysDimensions().xy;

  float3 world = _FocusCameraLeftBottomCorner + uv.x * _FocusCameraSize.x * _FocusCameraRight + uv.y * _FocusCameraSize.y * _FocusCameraUp;
  origin = _WorldSpaceCameraPos.xyz + _FocusCameraHalfAperture * apertureOffset.x * _FocusCameraRight + _FocusCameraHalfAperture * apertureOffset.y * _FocusCameraUp;
  direction = normalize(world.xyz - origin);
}

float2 apertureOffset = GetRandomInUnitDisk(PRNGStates);
float2 offset = float2(GetRandomValue(PRNGStates), GetRandomValue(PRNGStates));
GenerateFocusCameraRayWithOffset(origin, direction, apertureOffset, offset);

The algorithm follows the original; part of the computation has simply moved to C#, as shown in section 9.1.

apertureOffset is produced by GetRandomInUnitDisk.

GetRandomInUnitDisk produces a uniformly distributed random vector inside the unit disk.
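A minimal sketch of one standard construction (the square root on the radius keeps the distribution uniform over the disk's area; the project source may differ):

// Uniform random point inside the unit disk.
float2 GetRandomInUnitDisk(inout uint4 states)
{
  float r = sqrt(GetRandomValue(states));                    // sqrt compensates for area growing with radius
  float theta = 2.0f * 3.14159265f * GetRandomValue(states); // angle in [0, 2*pi)
  return r * float2(cos(theta), sin(theta));
}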

9.3. Final output

[image]

10. Putting It All Together

Source code: https://github.com/zhing2006/GPU-Ray-Tracing-in-One-Weekend-by-Unity-2019.3

[image] [image]
