
Add V-Ray CPU to the render fidelity test #4483

Closed
bhouston opened this issue Sep 28, 2023 · 26 comments · Fixed by #4487

Comments

@bhouston
Contributor

bhouston commented Sep 28, 2023

Description

We should add V-Ray to the model-viewer render fidelity page. This will encourage improved interoperability between the Khronos PBR BSDF and the V-Ray ecosystem.

There is an existing Python project for rendering glTFs in V-Ray here: https://github.com/ChaosGroup/vray_gltf. It is a couple of years old, so it likely doesn't support the latest extensions, but it should be a usable starting point.

We should likely take this existing Python project and modify it to load the models, cameras, and environments used by the model-viewer fidelity test framework, then submit the updated code and resulting images to this repository, letting us see what does and doesn't work in V-Ray.

This can then drive further updates to the V-Ray test script (probably adding support for modern glTF extensions) or to this integration test suite.

I am willing to fund this on behalf of Threekit.

Live Demo

Not relevant.

Version

Not relevant.

Browser Affected

Not relevant.

OS

Not relevant.

AR

Not relevant.

@bhouston bhouston changed the title Add V-Ray renders to the glTF ModelViewer fidelity test page. Add V-Ray CPU to the render fidelity test Sep 28, 2023
@bhouston
Contributor Author

bhouston commented Sep 28, 2023

Acceptance criteria:

  • A PR is produced, although getting it accepted is not required.
  • The PR should include all of the source code and a clear README for reproducing the results, excluding the proprietary V-Ray SDK. This allows others to both confirm and build upon this solution in the future.
  • It is expected that all examples are rendered (i.e. images produced); there should be no skipped results.
  • While we cannot expect perfect reproduction of all features on this first pass, we expect at minimum that the render size, aspect ratio, output color space, texture color spaces, environment map, basic lighting, camera aspect/position/rotation, and the glTF geometry and textures all load and are correct.
  • What is acceptable to be incorrect: (1) glTF extensions not yet implemented in vray_gltf (these are extensions created in the last two years), and (2) material effects rendered incorrectly by the existing vray_gltf code.
  • It is expected that most materials will render reasonably close to the other engines' results on the render fidelity page, excluding the advanced effects. It would be surprising if everything were completely off.

@jasondavies
Contributor

I'll have a go at this.

@jasondavies
Contributor

Any idea what the default camera settings should be when not specified by the glTF model?

The following options are supported by vray_gltf:

  --default_cam_look_at DEFAULT_CAM_LOOK_AT
                        Camera look at (x,y,z)
  --default_cam_rot DEFAULT_CAM_ROT
                        Default camera rotation(degrees) (x,y,z) or "(x,y,z)" around the avarage object position of the scene, other brackets work too
  --default_cam_moffset DEFAULT_CAM_MOFFSET
                        Default camera multiplier offset (x,y,z) or "(x,y,z)", all brackets will work
  --default_cam_pos DEFAULT_CAM_POS
                        Default camera Pos (x,y,z) or "(x,y,z)", other default cam still work but relative on this position
  --default_cam_fov DEFAULT_CAM_FOV
                        Default camera FOV is degrees
  --default_cam_zoom DEFAULT_CAM_ZOOM
                        Default camera Zoom -inf to 1.0 as 1.0 max zoom
  --default_cam_view DEFAULT_CAM_VIEW
                        Default camera view, one of front, back, left, right, top, bottom or auto

@jasondavies
Contributor

I'm guessing the orbit property defines the camera rotation for each scenario.
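If the orbit follows the usual three.js spherical convention (theta/phi in degrees, y-up, radius around the target), the camera position could be sketched like this (function name is hypothetical; this is a guess at the harness convention, not confirmed vray_gltf behaviour):

```python
import math

def orbit_to_position(target, theta_deg, phi_deg, radius):
    """Convert the harness's orbit parameters to a y-up camera
    position on a sphere of the given radius around the target."""
    theta = math.radians(theta_deg)
    phi = math.radians(phi_deg)
    r_sin_phi = radius * math.sin(phi)
    return (
        r_sin_phi * math.sin(theta) + target[0],
        radius * math.cos(phi) + target[1],  # y-up
        r_sin_phi * math.cos(theta) + target[2],
    )
```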

@bhouston
Contributor Author

I notice there are other renderer-specific harnesses that seem to have camera code in them, like this one:

https://github.com/google/model-viewer/blob/master/packages/render-fidelity-tools/src/components/renderers/rhodonite-viewer.ts#L162

I suspect that serves as a guide for calculating the camera matrix from the harness information.

I think both V-Ray and Blender Cycles are problematic to integrate into this test suite. Maybe one could create a mini web server that exposes a single POST route: the harness calls it with the scenario information, the route passes that information to Python to do the render, and the result is returned. Then it would still run within the harness.
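A minimal sketch of such a bridge, assuming a hypothetical render_scenario hook that would shell out to the Python/V-Ray renderer and return PNG bytes:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_scenario(scenario):
    """Hypothetical hook: invoke the offline renderer (e.g. vray_gltf)
    with the scenario config and return the rendered PNG bytes."""
    raise NotImplementedError

class RenderHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The harness would POST the stringified scenario config here.
        length = int(self.headers["Content-Length"])
        scenario = json.loads(self.rfile.read(length))
        png = render_scenario(scenario)
        self.send_response(200)
        self.send_header("Content-Type", "image/png")
        self.end_headers()
        self.wfile.write(png)

def serve(port=8642):
    """Run until killed; port number is arbitrary."""
    HTTPServer(("localhost", port), RenderHandler).serve_forever()
```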

@bhouston
Contributor Author

I could write a web server tomorrow morning for this purpose. Would that help you?

@jasondavies
Contributor

The most important issue right now is translating the scenario camera information to V-Ray's transform, which I believe is a rotation matrix plus a translation vector. I'm going to bed now, but will look at this tomorrow morning.

Regarding the test harness, let's figure that out once we have at least one render scenario working. I was planning to use the offline rendering path anyway for the initial version.

@bhouston
Contributor Author

The code I linked to in my earlier comment calculates the camera position (eye) and direction/up manually. From direction and up you get right/left via a cross product, and then you can use those vectors to set the basis.
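That basis construction can be sketched as follows (helper names are hypothetical; V-Ray's transform would then be filled from the three basis vectors plus the eye position):

```python
import math

def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _norm(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def look_at_basis(eye, center, up=(0.0, 1.0, 0.0)):
    """Build a right-handed camera basis from eye/target/up:
    forward = normalize(center - eye), right = forward x up,
    true_up = right x forward."""
    forward = _norm(_sub(center, eye))
    right = _norm(_cross(forward, up))
    true_up = _cross(right, forward)
    return right, true_up, forward
```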

@jasondavies
Contributor

Success:

[image: test render]

Compare to three-gpu-pathtracer-golden:

[image: three-gpu-pathtracer-golden]

@jasondavies
Contributor

jasondavies commented Sep 29, 2023

Note that vray_gltf supports a fairly limited set of extensions:

Textures:

  • KHR_texture_transform

Materials:

  • KHR_materials_pbrSpecularGlossiness
  • KHR_materials_transmission
  • KHR_materials_clearcoat
  • KHR_materials_sheen

Lights:

  • KHR_lights_punctual

@jasondavies
Contributor

Using the above extension list, it seems the following scenarios are unsupported (list of unsupported extensions in brackets):

  • khronos-DragonAttenuation (KHR_materials_volume, KHR_materials_variants)
  • khronos-IridescentDishWithOlives (KHR_materials_ior, KHR_materials_iridescence, KHR_materials_volume)
  • khronos-SheenChair (KHR_materials_variants)
  • khronos-MaterialsVariantsShoe (KHR_materials_variants)
  • khronos-ABeautifulGame (KHR_materials_volume)
  • khronos-IridescenceLamp (KHR_materials_volume, KHR_materials_iridescence)
  • khronos-MosquitoInAmber (KHR_materials_ior, KHR_materials_volume)
  • khronos-EmissiveStrengthTest (KHR_materials_emissive_strength)
  • khronos-IridescenceDielectricSpheres (KHR_materials_ior, KHR_materials_iridescence)
  • khronos-IridescenceMetallicSpheres (KHR_materials_ior, KHR_materials_iridescence)
  • khronos-IridescenceSuzanne (KHR_materials_ior, KHR_materials_volume, KHR_materials_iridescence)
  • khronos-GlamVelvetSofa (KHR_materials_specular, KHR_materials_variants)
  • khronos-TransmissionTest (KHR_xmp)
  • khronos-TransmissionRoughnessTest (KHR_materials_ior, KHR_materials_volume)
  • khronos-AttenuationTest (KHR_materials_volume)
  • khronos-SpecularTest (KHR_materials_specular)
  • khronos-TextureTransformMultiTest (KHR_materials_unlit)
  • khronos-UnlitTest (KHR_materials_unlit)
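For what it's worth, a list like this can be generated mechanically from each model's extensionsUsed array; a sketch for .gltf (JSON) files, with the supported set taken from the extension list above (this would need a binary parser for .glb models):

```python
import json

# Extensions vray_gltf implements, per the list earlier in this thread.
SUPPORTED = {
    "KHR_texture_transform",
    "KHR_materials_pbrSpecularGlossiness",
    "KHR_materials_transmission",
    "KHR_materials_clearcoat",
    "KHR_materials_sheen",
    "KHR_lights_punctual",
}

def unsupported_extensions(gltf_path):
    """Return the extensions a .gltf file declares that vray_gltf
    does not implement, sorted for stable output."""
    with open(gltf_path) as f:
        used = json.load(f).get("extensionsUsed", [])
    return sorted(set(used) - SUPPORTED)
```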

@bhouston
Contributor Author

KHR_materials_variants isn't really a visual extension; rather, it is about the loader switching the glTF materials or nodes around.

These are the visual extensions that are missing, in order of descending importance:

I personally think this one isn't important, and it also may not be feasible in most path tracers; implementing it is mostly figuring out how to work around the path tracer's features:

@bhouston
Contributor Author

Amazing work! I notice the background is different.

How different are the images that do not use the unsupported extensions?

I would love to see how close the match is when we know all features are supposed to be supported.

@bhouston
Contributor Author

I think the HDR light intensity or the tone mapping is different as well. I wonder if it is even possible to have the tone mapping match? I suspect it may not be, as the tone mapping operators in V-Ray may differ from those available in three.js.

@jasondavies
Copy link
Contributor

The render above was done without any HDR environment (it looks like most scenarios should use lightroom_14b.hdr when nothing is specified). The background colour was originally black by default (now fixed to white).

I managed to load lightroom_14b.hdr and convert it to V-Ray's LightDome, but I think it's not working quite right yet (maybe incorrect intensity?):

[image: test render]

@bhouston
Contributor Author

I think that in these tests the HDR intensity usually isn't adjusted; often the values in the file are used directly. Output intensities may be affected by the tone mapping, though.

@bhouston
Contributor Author

I also notice now that the very recent KHR_materials_anisotropy isn't supported by either the fidelity test framework or the vray_gltf project.

@jasondavies
Contributor

OK, finally figured out the UVW transform required for the default HDR. Compare:

V-Ray:

[image: vray-golden]

Stellar:

[image: stellar-golden]

It still feels like the intensity is off, but at least it's oriented correctly.

@jasondavies
Contributor

Here's the command I'm currently generating for khronos-Fox:

python3 main.py /Users/jason/tmp/model-viewer/packages/shared-assets/models/glTF-Sample-Models/2.0/Fox/glTF/Fox.gltf --output_file /Users/jason/tmp/model-viewer/packages/render-fidelity-tools/test/goldens/khronos-Fox/vray-golden.png --render_mode production --default_cam_rot '(-60,0,0)' --default_cam_look_at '(-35,37,25)' --default_cam_pos '(0,0,124)' --size '(1536,1536)' --num_frames 1

Here's the JSON config:

    {
      "name": "khronos-Fox",
      "model": "../../../shared-assets/models/glTF-Sample-Models/2.0/Fox/glTF/Fox.gltf",
      "target": {
        "y": 37,
        "x": -35,
        "z": 25
      },
      "orbit": {
        "theta": -60,
        "radius": 124
      },
      "exclude": [
        "stellar"
      ]
    }

I think there's an issue with --default_cam_look_at (or with how I'm using it).

You'll need to use my vray_gltf branch.
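For reference, a sketch of the mapping from scenario JSON to these vray_gltf flags, mirroring the khronos-Fox command above (whether --default_cam_rot should receive the orbit theta like this is exactly the open question here, so treat the mapping as provisional):

```python
def scenario_to_args(scenario, output_file):
    """Map a fidelity-test scenario config to vray_gltf CLI
    arguments, following the khronos-Fox example command."""
    target = scenario.get("target", {})
    orbit = scenario.get("orbit", {})
    dims = scenario.get("dimensions", {"width": 1536, "height": 1536})
    return [
        scenario["model"],
        "--output_file", output_file,
        "--render_mode", "production",
        "--default_cam_rot", "({},0,0)".format(orbit.get("theta", 0)),
        "--default_cam_look_at", "({},{},{})".format(
            target.get("x", 0), target.get("y", 0), target.get("z", 0)),
        "--default_cam_pos", "(0,0,{})".format(orbit.get("radius", 1)),
        "--size", "({},{})".format(dims["width"], dims["height"]),
        "--num_frames", "1",
    ]
```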

@bsdorra
Contributor

bsdorra commented Sep 29, 2023

A while ago I added some code to the render-fidelity-tools that allows for an easy integration of external renderers with the existing pipeline. It's being used to generate the path-traced DS Stellar renderings.

An external renderer can simply be added to the renderer list in the test config like this:

{
  "renderers": [
    {
      "name": "stellar",
      "description": "Dassault Systèmes STELLAR",
      "command": {
        "executable": "python",
        "args": [
          "test/renderers/stellar/stellar.py"
        ]
      }
    }
  ]
}

On `npm run update-screenshots`, the test suite calls the provided command with a single argument: the stringified JSON object containing the scenario config (the same info that is provided to the web rendering backends).

The Python main then looks like this:

import json
import os
import sys

import numpy as np

def main():
    """cmd render script

      ARGS:
          argv[1] (str): The stringified json object containing scenario config and outputPath properties
          Example:
          {
            "scenario": {
              "lighting": "../../../shared-assets/environments/lightroom_14b.hdr",
              "dimensions": {
                "width": 768,
                "height": 450
              },
              "target": {
                "x": 0,
                "y": 0.3,
                "z": 0
              },
              "orbit": {
                "theta": 0,
                "phi": 90,
                "radius": 1
              },
              "verticalFoV": 45,
              "renderSkybox": False,
              "name": "khronos-SheenChair",
              "model": "../../../shared-assets/models/glTF-Sample-Models/2.0/SheenChair/glTF-Binary/SheenChair.glb"
            },
            "outputFile": "../../../test/goldens/khronos-SheenChair/stellar-golden.png"
          }
    """
    config = json.loads(sys.argv[1])

    scenario = config["scenario"]
    outpath = config["outputFile"]

    # parse scenario
    resolution = (scenario["dimensions"]["width"], scenario["dimensions"]["height"])
    scenePath = "shared-assets" + scenario["model"].split("shared-assets")[1]
    iblPath = "shared-assets" + scenario["lighting"].split("shared-assets")[1]
    renderSkybox = scenario["renderSkybox"]

    target = np.array([scenario["target"]["x"], scenario["target"]["y"], scenario["target"]["z"]])
    theta = scenario["orbit"]["theta"]
    phi = scenario["orbit"]["phi"]
    radius = scenario["orbit"]["radius"]
    verticalFov = scenario["verticalFoV"]
    aspect = resolution[0]/resolution[1]

    # setup scene
    scene = load_scene(scenePath)
    camera = create_camera(scene, target, verticalFov, aspect, theta, phi, radius)
    ibl = create_hdri_light(scene, iblPath, renderSkybox)

    #render
    beauty_image = render_scene(config, scene, renderer, camera, ibl, NUM_SAMPLES)
   
    # tonemap (pre-exposure scale of 1/0.6, matching three.js)
    beauty_image[:,:,:3] *= 1.0 / 0.6
    beauty_image[:,:,:3] = ACESFilmicToneMapping(beauty_image[:,:,:3])
    # gamma
    beauty_image[:,:,:3] = np.power(np.clip(beauty_image[:,:,:3], 0.0, 0.9999), 1.0/2.2)

    if renderSkybox:
      beauty_ldr = (beauty_image[:, :, :3] * 255).astype(np.uint8)
    else:
      beauty_ldr = (beauty_image * 255).astype(np.uint8)

    save_image(os.path.join("./", outpath), beauty_ldr)

Test scenarios can be disabled for a renderer by adding the renderer's name to the scenario's exclude list in the test config. E.g.

    {
      "name": "khronos-IridescenceDielectricSpheres",
      "model": "../../../shared-assets/models/glTF-Sample-Models/2.0/IridescenceDielectricSpheres/glTF/IridescenceDielectricSpheres.gltf",
      "orbit": {
        "radius": 50,
        "theta": 45,
        "phi": 65
      },
      "target": {
        "y": -3
      },
      "dimensions": {
        "height": 700,
        "width": 700
      },
      "exclude": [
        "stellar"
      ]
    }

@bhouston
Contributor Author

@jasondavies I notice the ACESFilmicToneMapping Python call in @bsdorra's script above. We likely need V-Ray to render to a linear HDR image and then apply that tone mapping as he did. I think it can be hard to have V-Ray do this properly.

@bhouston
Contributor Author

@bsdorra do you have the code available for the ACESFilmicToneMapping function you used?

jasondavies added a commit to jasondavies/model-viewer that referenced this issue Sep 29, 2023
This vendors <https://github.com/ChaosGroup/vray_gltf> for simplicity.

You can run `npm run update-screenshots vray` to avoid re-running all of
the other screenshot generators.

Requires a V-Ray license and the V-Ray SDK on PYTHONPATH.

Note: no goldens included in this commit as they are poor quality.

Fixes google#4483.
@bsdorra
Contributor

bsdorra commented Sep 29, 2023

@bhouston sure

def RRTAndODTFit(v: np.array):
  a = v * (v + 0.0245786) - 0.000090537
  b = v * (0.983729 * v + 0.4329510) + 0.238081
  return a / b

def ACESFilmicToneMapping(img: np.array) -> np.array:
  """Using the same tonemapping function as three.js and glTF Sample Viewer
  https://github.com/mrdoob/three.js/blob/dev/examples/jsm/shaders/ACESFilmicToneMappingShader.js

  Args:
      img (np.array): The HDR image to be tonemapped

  Returns:
      np.array: The tonemapped LDR image
  """
  # sRGB => XYZ => D65_2_D60 => AP1 => RRT_SAT
  ACESInputMat = np.array([
    [0.59719, 0.07600, 0.02840], # transposed from source
    [0.35458, 0.90834, 0.13383],
    [0.04823, 0.01566, 0.83777],
  ])

  # ODT_SAT => XYZ => D60_2_D65 => sRGB
  ACESOutputMat = np.array([
    [ 1.60475, -0.10208, -0.00327], # transposed from source
    [-0.53108,  1.10813, -0.07276],
    [-0.07367, -0.00605,  1.07602],
  ])

  img = img @ ACESInputMat
  img = RRTAndODTFit(img)
  img = img @ ACESOutputMat

  return img

And there's an additional pre-tonemapping scale factor of exposure/0.6 (where exposure defaults to 1.0), as explained in the three.js shader.

@jasondavies
Contributor

jasondavies commented Sep 30, 2023

@bsdorra This is great, thanks! Is there any chance you can also share:

  • create_camera (I'm having difficulty with vray_gltf's camera code).
  • save_image (Edit: I've implemented this.)

I'm tempted to ask for create_hdri_light too, just in case it helps. Thanks!

@bsdorra
Contributor

bsdorra commented Sep 30, 2023

@jasondavies Here's the code that I use to calculate relevant cam params

target = np.array([scenario["target"]["x"], scenario["target"]["y"], scenario["target"]["z"]])
theta = scenario["orbit"]["theta"]
phi = scenario["orbit"]["phi"]
radius = scenario["orbit"]["radius"]
fovy = scenario["verticalFoV"]
aspect = resolution[0]/resolution[1]

theta = theta * math.pi / 180
phi = phi * math.pi / 180
radius_sin_phi = radius * math.sin(phi)
cam_pos = [
  radius_sin_phi * math.sin(theta) + target[0],
  radius * math.cos(phi) + target[1],  # y-up
  radius_sin_phi * math.cos(theta) + target[2],
]

center = np.array([target[0], target[1], target[2]])

if radius <= 0:
  center[0] = cam_pos[0] - math.sin(phi) * math.sin(theta)
  center[1] = cam_pos[1] - math.cos(phi)
  center[2] = cam_pos[2] - math.sin(phi) * math.cos(theta)

view_matrix = look_at(cam_pos, center, up) 
focal_length = np.linalg.norm(center-cam_pos)

sensor_height = math.tan(fovy*0.5* math.pi / 180.0) * focal_length * 2.0
sensor_width = sensor_height * aspect

create_hdri_light won't help; it just forwards the IBL image path to the renderer's loading function. There's nothing special going on in there, just reading the image data and storing it as a texture for rendering.

If necessary, pre-loading image data into an array is straightforward with imageio:

data = imageio.imread(ibl_path, format='HDR-FI')

jasondavies added a commit to jasondavies/model-viewer that referenced this issue Sep 30, 2023
This vendors <https://github.com/ChaosGroup/vray_gltf> for simplicity.

You can run `npm run update-screenshots vray` to avoid re-running all of
the other screenshot generators.

Requires a V-Ray license and the V-Ray SDK on PYTHONPATH.

Fixes google#4483.
jasondavies added a commit to jasondavies/model-viewer that referenced this issue Oct 1, 2023
This vendors <https://github.com/ChaosGroup/vray_gltf> for simplicity.

You can run `npm run update-screenshots vray` to avoid re-running all of
the other screenshot generators.

Requires a V-Ray license and the V-Ray SDK on PYTHONPATH.

Fixes google#4483.
jasondavies added a commit to jasondavies/model-viewer that referenced this issue Oct 2, 2023
You can run `npm run update-screenshots vray` to avoid re-running all of
the other screenshot generators.

Requires a V-Ray license and the V-Ray SDK on PYTHONPATH.

Fixes google#4483.
@bhouston
Contributor Author

bhouston commented Oct 2, 2023

Thank you @bsdorra for your help on this issue! It wouldn't have been possible to achieve results of this quality without your example code!

elalish pushed a commit that referenced this issue Oct 3, 2023
* Add vendored copy of vray_gltf.

Relevant portions of code included from:

  <ChaosGroup/vray_gltf@63eb33c>

Submitted on behalf of a third-party: Chaos Software OOD.

License: MIT (included in commit).

* Add V-Ray CPU support to render fidelity tests.

You can run `npm run update-screenshots vray` to avoid re-running all of
the other screenshot generators.

Requires a V-Ray license and the V-Ray SDK on PYTHONPATH.

Fixes #4483.
JL-Vidinoti pushed a commit to vidinoti/model-viewer that referenced this issue Apr 22, 2024
* Add vendored copy of vray_gltf.

Relevant portions of code included from:

  <ChaosGroup/vray_gltf@63eb33c>

Submitted on behalf of a third-party: Chaos Software OOD.

License: MIT (included in commit).

* Add V-Ray CPU support to render fidelity tests.

You can run `npm run update-screenshots vray` to avoid re-running all of
the other screenshot generators.

Requires a V-Ray license and the V-Ray SDK on PYTHONPATH.

Fixes google#4483.