Add V-Ray CPU to the render fidelity test #4483
Acceptance criteria:
I'll have a go at this.
Any idea what the default camera settings should be when not specified by the glTF model? The following options are supported by
I'm guessing the
I notice there are other renderer-specific harnesses that seem to have camera code in them, like this one: I suspect that is a guide to calculating the camera matrix from the harness information? I think both V-Ray and Blender Cycles are problematic to integrate into this test suite. Maybe one could create a mini web server with a single POST route that the harness calls with the scenario information; that route would pass the information to Python to do the render and return the result. Then it would still run within the harness.
I could write a web server tomorrow morning for this purpose. Would that help you?
The most important issue right now is translating the scenario camera information to V-Ray's transform, which I believe is a rotation matrix and a translation matrix. I'm going to bed now, but will look at this tomorrow morning. Regarding the test harness, let's figure that out once we have at least one render scenario working. I was planning to use the offline rendering path anyway for the initial version.
The code I linked to in my earlier comment calculates the matrix position (eye) and direction/up manually. From direction and up you get right/left via cross product, and then you can use that to set the basis.
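A minimal numpy sketch of that cross-product construction (the axis conventions here are my assumption; V-Ray's transform may use different conventions):

```python
import numpy as np

# From eye, direction, and up: derive the right vector via cross product,
# then re-orthogonalize up, and assemble the rotation basis.
eye = np.array([0.0, 0.0, 5.0])
direction = np.array([0.0, 0.0, -1.0])   # camera looks down -z
up_hint = np.array([0.0, 1.0, 0.0])

right = np.cross(direction, up_hint)
right /= np.linalg.norm(right)
up = np.cross(right, direction)          # orthogonal to both by construction

# Rotation part of a camera-to-world transform (translation is just `eye`)
basis = np.column_stack([right, up, -direction])
print(basis)
```

For this trivial eye/direction pair the basis comes out as the identity, which is a handy sanity check when wiring it up.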
Note that Textures:
Materials:
Lights:
Using the above extension list, it seems the following scenarios are unsupported (list of unsupported extensions in brackets):
The KHR_materials_variants extension isn't really a visual extension; rather, it is about the loader switching the glTF materials or nodes around. These are the visual ones that are missing, in order of descending importance:
I personally think this one isn't important, and it also may not be feasible in most path tracers; rather, implementing it is mostly figuring out how to work around all of the path tracer's features:
Amazing work! I notice the background is different. How different are the images that do not use the unsupported extensions? I would love to see how close the match is when we know all features are supposed to be supported.
I think the HDR light intensity or the tone mapping is different as well. I wonder if it is even possible to have the tone mapping match? I suspect it may not be, as the tone mapping operators in V-Ray may differ from those available in Three.js.
The render above was done without any HDR environment (looks like most scenarios should use I managed to load
I think that in these tests the HDR intensity usually isn't adjusted; often it just uses what is in the file directly. Output intensities may be affected by the tone mapping?
I also notice now that the very recent KHR_materials_anisotropy isn't supported by either the fidelity test framework or the vray_gltf project.
Here's the command I'm currently generating:

```shell
python3 main.py /Users/jason/tmp/model-viewer/packages/shared-assets/models/glTF-Sample-Models/2.0/Fox/glTF/Fox.gltf \
  --output_file /Users/jason/tmp/model-viewer/packages/render-fidelity-tools/test/goldens/khronos-Fox/vray-golden.png \
  --render_mode production \
  --default_cam_rot '(-60,0,0)' \
  --default_cam_look_at '(-35,37,25)' \
  --default_cam_pos '(0,0,124)' \
  --size '(1536,1536)' \
  --num_frames 1
```

Here's the JSON config:

```json
{
  "name": "khronos-Fox",
  "model": "../../../shared-assets/models/glTF-Sample-Models/2.0/Fox/glTF/Fox.gltf",
  "target": {
    "y": 37,
    "x": -35,
    "z": 25
  },
  "orbit": {
    "theta": -60,
    "radius": 124
  },
  "exclude": [
    "stellar"
  ]
}
```

I think there's an issue with You'll need to use my vray_gltf branch.
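For what it's worth, here is how I read the mapping from the scenario JSON to the vray_gltf flags: theta becomes the x component of `--default_cam_rot`, the target becomes `--default_cam_look_at`, and the radius becomes the z component of `--default_cam_pos`. A small sketch of that translation (the helper name and the default handling are mine, and the mapping is inferred from the Fox example, not from vray_gltf documentation):

```python
import json

def vray_gltf_args(scenario: dict) -> list:
    """Translate a fidelity-test scenario config into vray_gltf CLI flags.

    Assumption: rotation takes only orbit theta, and the camera sits at
    (0, 0, radius) before the look-at is applied, as in the Fox example.
    """
    target = scenario.get("target", {})
    orbit = scenario.get("orbit", {})
    x, y, z = target.get("x", 0), target.get("y", 0), target.get("z", 0)
    theta = orbit.get("theta", 0)
    radius = orbit.get("radius", 1)
    return [
        "--render_mode", "production",
        "--default_cam_rot", f"({theta},0,0)",
        "--default_cam_look_at", f"({x},{y},{z})",
        "--default_cam_pos", f"(0,0,{radius})",
    ]

scenario = json.loads("""{
  "name": "khronos-Fox",
  "target": {"y": 37, "x": -35, "z": 25},
  "orbit": {"theta": -60, "radius": 124}
}""")
print(vray_gltf_args(scenario))
```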
A while ago I added some code to render-fidelity-tools that allows for easy integration of external renderers with the existing pipeline. It's being used to generate the path-traced DS Stellar renderings. An external renderer can simply be added to the renderer list in the test config like this:

```json
{
  "renderers": [
    {
      "name": "stellar",
      "description": "Dassault Systèmes STELLAR",
      "command": {
        "executable": "python",
        "args": [
          "test/renderers/stellar/stellar.py"
        ]
      }
    }
  ]
}
```

On `npm run update-screenshots`, the test suite calls the provided command with a single argument: the stringified JSON object containing the scenario config (the same info that is provided to the web rendering backends). The Python main then looks like this:

```python
def main():
    """cmd render script

    Args:
        argv[1] (str): The stringified JSON object containing scenario config
            and outputFile properties
    Example:
        {
          "scenario": {
            "lighting": "../../../shared-assets/environments/lightroom_14b.hdr",
            "dimensions": {"width": 768, "height": 450},
            "target": {"x": 0, "y": 0.3, "z": 0},
            "orbit": {"theta": 0, "phi": 90, "radius": 1},
            "verticalFoV": 45,
            "renderSkybox": false,
            "name": "khronos-SheenChair",
            "model": "../../../shared-assets/models/glTF-Sample-Models/2.0/SheenChair/glTF-Binary/SheenChair.glb"
          },
          "outputFile": "../../../test/goldens/khronos-SheenChair/stellar-golden.png"
        }
    """
    config = json.loads(sys.argv[1])
    scenario = config["scenario"]
    outpath = config["outputFile"]

    # parse scenario
    resolution = (scenario["dimensions"]["width"], scenario["dimensions"]["height"])
    scenePath = "shared-assets" + scenario["model"].split("shared-assets")[1]
    iblPath = "shared-assets" + scenario["lighting"].split("shared-assets")[1]
    renderSkybox = scenario["renderSkybox"]
    target = np.array([scenario["target"]["x"], scenario["target"]["y"], scenario["target"]["z"]])
    theta = scenario["orbit"]["theta"]
    phi = scenario["orbit"]["phi"]
    radius = scenario["orbit"]["radius"]
    verticalFov = scenario["verticalFoV"]
    aspect = resolution[0] / resolution[1]

    # setup scene
    scene = load_scene(scenePath)
    camera = create_camera(scene, target, verticalFov, aspect, theta, phi, radius)
    ibl = create_hdri_light(scene, iblPath, renderSkybox)

    # render
    beauty_image = render_scene(config, scene, renderer, camera, ibl, NUM_SAMPLES)

    # tonemap
    beauty_image[:, :, :3] *= 1.0 / 0.6
    beauty_image[:, :, :3] = ACESFilmicToneMapping(beauty_image[:, :, :3])

    # gamma
    beauty_image[:, :, :3] = np.power(np.clip(beauty_image[:, :, :3], 0.0, 0.9999), 1.0 / 2.2)

    if renderSkybox:
        beauty_ldr = (beauty_image[:, :, :3] * 255).astype(np.uint8)
    else:
        beauty_ldr = (beauty_image * 255).astype(np.uint8)
    save_image(os.path.join("./", outpath), beauty_ldr)
```

Test scenarios can be disabled for a renderer by adding the renderer's name to the scenario's exclude list in the test config, e.g.:

```json
{
  "name": "khronos-IridescenceDielectricSpheres",
  "model": "../../../shared-assets/models/glTF-Sample-Models/2.0/IridescenceDielectricSpheres/glTF/IridescenceDielectricSpheres.gltf",
  "orbit": {
    "radius": 50,
    "theta": 45,
    "phi": 65
  },
  "target": {
    "y": -3
  },
  "dimensions": {
    "height": 700,
    "width": 700
  },
  "exclude": [
    "stellar"
  ]
}
```
@jasondavies I notice the ACESFilmicToneMapping Python call in @bsdorra's script above. We likely have to have V-Ray render to a linear HDR image and then apply that tone mapping as he did. I think it can be hard to get V-Ray to do this properly.
@bsdorra do you have the code available for the ACESFilmicToneMapping function you used?
This vendors <https://github.com/ChaosGroup/vray_gltf> for simplicity. You can run `npm run update-screenshots vray` to avoid re-running all of the other screenshot generators. Requires a V-Ray license and the V-Ray SDK on PYTHONPATH. Note: no goldens included in this commit as they are poor quality. Fixes google#4483.
@bhouston sure

```python
def RRTAndODTFit(v: np.array):
    a = v * (v + 0.0245786) - 0.000090537
    b = v * (0.983729 * v + 0.4329510) + 0.238081
    return a / b

def ACESFilmicToneMapping(img: np.array) -> np.array:
    """Using the same tonemapping function as three.js and the glTF Sample Viewer
    https://github.com/mrdoob/three.js/blob/dev/examples/jsm/shaders/ACESFilmicToneMappingShader.js

    Args:
        img (np.array): The HDR image to be tonemapped
    Returns:
        np.array: The tonemapped LDR image
    """
    # sRGB => XYZ => D65_2_D60 => AP1 => RRT_SAT
    ACESInputMat = np.array([
        [0.59719, 0.07600, 0.02840],  # transposed from source
        [0.35458, 0.90834, 0.13383],
        [0.04823, 0.01566, 0.83777],
    ])
    # ODT_SAT => XYZ => D60_2_D65 => sRGB
    ACESOutputMat = np.array([
        [ 1.60475, -0.10208, -0.00327],  # transposed from source
        [-0.53108,  1.10813, -0.07276],
        [-0.07367, -0.00605,  1.07602],
    ])
    img = img @ ACESInputMat
    img = RRTAndODTFit(img)
    img = img @ ACESOutputMat
    return img
```

And there's this additional pre-tonemapping scale factor of
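As a standalone sanity check of this curve (repeating the definitions so the snippet runs on its own): black should map to approximately black, and strong highlights should be compressed to just below 1.0.

```python
import numpy as np

def RRTAndODTFit(v):
    a = v * (v + 0.0245786) - 0.000090537
    b = v * (0.983729 * v + 0.4329510) + 0.238081
    return a / b

# Same matrices as the function above (transposed ACES fit from three.js)
ACESInputMat = np.array([
    [0.59719, 0.07600, 0.02840],
    [0.35458, 0.90834, 0.13383],
    [0.04823, 0.01566, 0.83777],
])
ACESOutputMat = np.array([
    [ 1.60475, -0.10208, -0.00327],
    [-0.53108,  1.10813, -0.07276],
    [-0.07367, -0.00605,  1.07602],
])

def ACESFilmicToneMapping(img):
    return RRTAndODTFit(img @ ACESInputMat) @ ACESOutputMat

black = ACESFilmicToneMapping(np.zeros((1, 3)))      # stays (near) zero
bright = ACESFilmicToneMapping(np.full((1, 3), 16.0))  # highlight rolls off < 1
print(black, bright)
```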
@bsdorra This is great, thanks! Is there any chance you can also share:
I'm tempted to ask for `create_hdri_light` too, just in case it helps. Thanks!
@jasondavies Here's the code that I use to calculate the relevant camera parameters:

```python
target = np.array([scenario["target"]["x"], scenario["target"]["y"], scenario["target"]["z"]])
theta = scenario["orbit"]["theta"]
phi = scenario["orbit"]["phi"]
radius = scenario["orbit"]["radius"]
fovy = scenario["verticalFoV"]
aspect = resolution[0] / resolution[1]

theta = theta * math.pi / 180
phi = phi * math.pi / 180
radius_sin_phi = radius * math.sin(phi)

cam_pos = [
    radius_sin_phi * math.sin(theta) + target[0],
    radius * math.cos(phi) + target[1],  # y-up
    radius_sin_phi * math.cos(theta) + target[2],
]

center = np.array([target[0], target[1], target[2]])
if radius <= 0:
    center[0] = cam_pos[0] - math.sin(phi) * math.sin(theta)
    center[1] = cam_pos[1] - math.cos(phi)
    center[2] = cam_pos[2] - math.sin(phi) * math.cos(theta)

view_matrix = look_at(cam_pos, center, up)

focal_length = np.linalg.norm(center - cam_pos)
sensor_height = math.tan(fovy * 0.5 * math.pi / 180.0) * focal_length * 2.0
sensor_width = sensor_height * aspect
```

If necessary, pre-loading image data into an array is straightforward with imageio:

```python
data = imageio.imread(ibl_path, format='HDR-FI')
```
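The `look_at` helper and `up` vector aren't shown above. A self-contained sketch of the orbit-to-eye conversion plus a minimal `look_at`, assuming a y-up world with up = (0, 1, 0) and a row-vector basis (these conventions are my assumption, not necessarily what the original script uses):

```python
import math
import numpy as np

def look_at(eye, center, up):
    """Build a 3x3 camera basis (rows: right, up, -forward) via cross products."""
    eye, center, up = (np.asarray(v, dtype=float) for v in (eye, center, up))
    forward = center - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    return np.stack([right, true_up, -forward])

# Orbit parameters from the khronos-Fox scenario; phi defaults to 90 degrees
# when absent (an assumption, matching the other scenario examples).
target = np.array([-35.0, 37.0, 25.0])
theta, phi, radius = math.radians(-60), math.radians(90), 124.0

eye = target + radius * np.array([
    math.sin(phi) * math.sin(theta),
    math.cos(phi),  # y-up
    math.sin(phi) * math.cos(theta),
])
basis = look_at(eye, target, np.array([0.0, 1.0, 0.0]))
print(eye, basis)
```

The basis comes out orthonormal and the eye sits exactly `radius` away from the target, which is an easy invariant to assert when porting this into the V-Ray transform.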
Thank you @bsdorra for your help on this issue! It wouldn't have been possible to achieve these quality results without your example code!
* Add vendored copy of vray_gltf. Relevant portions of code included from: <ChaosGroup/vray_gltf@63eb33c> Submitted on behalf of a third-party: Chaos Software OOD. License: MIT (included in commit). * Add V-Ray CPU support to render fidelity tests. You can run `npm run update-screenshots vray` to avoid re-running all of the other screenshot generators. Requires a V-Ray license and the V-Ray SDK on PYTHONPATH. Fixes #4483.
Description
We should add V-Ray to the model-viewer render fidelity page. This will encourage improved interoperability of the Khronos PBR BSDF with the V-Ray ecosystem.
There is an existing Python project for rendering glTFs in V-Ray here: https://github.com/ChaosGroup/vray_gltf. It is a couple of years old, so I believe it doesn't support the latest extensions, but maybe we can start with it.
We should likely take this existing Python project and modify it to load the relevant models, cameras, and environments used by the model-viewer fidelity test framework, and then submit the updated code and relevant images to this repository, allowing us to see what works and what doesn't in V-Ray.
This can then drive further updates to the V-Ray test script (probably adding support for modern glTF extensions) or to this integration test suite.
I am willing to fund this on behalf of Threekit.
Live Demo
Not relevant.
Version
Not relevant.
Browser Affected
Not relevant.
OS
Not relevant.
AR
Not relevant.