
Misalignment of Pseudo Ground Truth Albedo #7

Open
Lizb6626 opened this issue Apr 18, 2024 · 4 comments

@Lizb6626

Thanks for the excellent work!

I was rendering the diffuse color of the scene "baking_scene001/test/0000" using the provided ground truth mesh (ground_truth/baking_scene001/mesh_blender), and I found a misalignment with the corresponding pseudo ground truth albedo (ground_truth/baking_scene001/pseudo_gt_albedo).

The code I used for rendering:

import blenderproc as bproc
import numpy as np
import json
from PIL import Image


bproc.init()

objs = bproc.loader.load_obj('ground_truth/baking_scene001/mesh_blender/mesh.obj')
obj = objs[0]
obj.set_rotation_euler([0, 0, 0])

light = bproc.types.Light()
light.set_location([2, -2, 0])
light.set_energy(300)

# Set camera from the first test frame
with open('blender_LDR/baking_scene001/transforms_test.json') as f:
    data = json.load(f)
    cam_pose = np.array(data['frames'][0]['transform_matrix'])
    camera_angle_x = data['camera_angle_x']
bproc.camera.set_intrinsics_from_blender_params(lens=camera_angle_x, lens_unit='FOV')
bproc.camera.set_resolution(2048, 2048)
bproc.camera.add_camera_pose(cam_pose)

bproc.renderer.enable_normals_output()
bproc.renderer.enable_diffuse_color_output()

data = bproc.renderer.render()
# render() returns one float image per registered camera pose
diffuse = np.array(data['diffuse'][0])
diffuse = (np.clip(diffuse, 0, 1) * 255).astype(np.uint8)
Image.fromarray(diffuse).save('output/diffuse.png')

The pseudo ground truth albedo appears darker than my rendered result. It also appears darker than the corresponding texture_kd map. I am unsure of the cause of this discrepancy and would appreciate any insights you can provide.

pseudo_gt_albedo
my diffuse
my rendered albedo

@zfkuang (Contributor)

zfkuang commented Apr 19, 2024

Hi, Lizb, thanks for the question! This might be caused by an inconsistency in color spaces. Try applying gamma correction to your output to see whether that fixes the problem.
Also, note that our albedo map is generated by NVDiffRec (hence "pseudo" albedo). It is meant only as a reference for what the albedo may look like.
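A minimal sketch of the suggested test, assuming the render is a float array in linear space with values in [0, 1] (the array name and gamma value here are illustrative, not from the dataset):

```python
import numpy as np

# Hypothetical linear-space render, values in [0, 1]
render = np.random.rand(4, 4, 3).astype(np.float32)

# Simple gamma correction (linear -> display), gamma = 2.2;
# this brightens mid-tones, so a too-bright result after correction
# suggests the image was already gamma-encoded
corrected = np.clip(render, 0.0, 1.0) ** (1.0 / 2.2)
```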

@Lizb6626 (Author)

Thank you for your prompt response. I am still confused about the color space. Are the provided diffuse texture maps and pseudo albedo in the sRGB color space? Also, in Blender's rendering settings both the input base color and the output render are in sRGB space, so no gamma correction should be needed.

Furthermore, I tried applying the rgb_to_srgb and srgb_to_rgb functions to my diffuse image, but neither result aligned with the pseudo ground truth albedo.
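For reference, the conversions I used follow the standard piecewise sRGB transfer functions (a sketch; the exact rgb_to_srgb/srgb_to_rgb in the codebase may differ):

```python
import numpy as np

def linear_to_srgb(x):
    """Standard piecewise sRGB encode (IEC 61966-2-1)."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

def srgb_to_linear(x):
    """Inverse of linear_to_srgb."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)
```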

diffuse_srgb

diffuse_rgb

@zfkuang (Contributor)

zfkuang commented Apr 24, 2024

In this case, use the albedo maps (which are in sRGB space) as the reference; that is what we did in the supplementary material.
Here is the rendering script we used to generate the albedo maps (the camera conventions might be different):

# Snippet from a larger script: imports (torch, numpy as np, tqdm, imageio, os,
# and the pytorch3d.renderer classes used below) and variables such as FLAGS,
# camera_dict, mesh, blend_params, device, all_img_list, test_img_list, and
# albedo_output_dir are defined elsewhere.
for it, c2w in tqdm.tqdm(enumerate(camera_dict['cam_c2w'])):
    img_name = all_img_list[it]
    if img_name not in test_img_list:
        continue
    original_c2w = c2w.clone().cpu().detach().numpy()

    # Flip the y and z axes of the camera-to-world matrix
    c2w[:, 1:2] *= -1
    c2w[:, 2:3] *= -1

    w2c = torch.linalg.inv(c2w)
    R = w2c[None, :3, :3].to(device)
    T = w2c[None, :3, 3].to(device)

    # Convert rotation/translation to PyTorch3D's camera convention
    R_pytorch3d = R.clone().permute(0, 2, 1)
    T_pytorch3d = T.clone()
    R_pytorch3d[:, :, :2] *= -1
    T_pytorch3d[:, :2] *= -1

    # Adjust the FOV to account for the padded/resized images
    fov = camera_dict['cam_focal'][it]  # FOV in radians
    focal_ratio = 1 / np.tan(fov / 2)
    focal_ratio = focal_ratio / (FLAGS.resize / (FLAGS.resize - 2 * FLAGS.pad))
    fov = 2 * np.arctan(1 / focal_ratio)
    fov = fov * 180 / np.pi

    cameras = FoVPerspectiveCameras(device=device, R=R_pytorch3d, T=T_pytorch3d, fov=fov)

    raster_settings = RasterizationSettings(
        image_size=FLAGS.resize,
        blur_radius=0.0,
        faces_per_pixel=1,
    )

    # Ambient-only lighting, so the shaded output is essentially the unshaded texture
    lights = AmbientLights(device=device)
    rasterizer = MeshRasterizer(cameras=cameras, raster_settings=raster_settings)
    renderer = MeshRenderer(
        rasterizer=rasterizer,
        shader=SoftPhongShader(
            device=device,
            cameras=cameras,
            lights=lights,
            blend_params=blend_params,
        ),
    )
    albedo_map = renderer(mesh.extend(len(cameras)))
    albedo_map = albedo_map.squeeze().cpu().numpy()  # HxWx4
    albedo_map = albedo_map[..., :3] * albedo_map[..., 3:4]  # premultiply alpha
    albedo_map = (albedo_map.clip(0, 1) * 255).astype(np.uint8)

    imageio.imsave(os.path.join(albedo_output_dir, img_name), albedo_map)
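As a sanity check, the padded-FOV arithmetic in the script above can be run standalone with hypothetical values for FLAGS.resize, FLAGS.pad, and the input FOV (none of these numbers come from the dataset):

```python
import numpy as np

# Hypothetical values; FLAGS.resize and FLAGS.pad are defined elsewhere in the script
resize, pad = 512, 16
fov = 0.6911  # input field of view in radians (~39.6 degrees)

focal_ratio = 1 / np.tan(fov / 2)            # focal length in half-image units
focal_ratio /= resize / (resize - 2 * pad)   # shrink focal to account for padding
fov_deg = 2 * np.arctan(1 / focal_ratio) * 180 / np.pi  # padding widens the effective FOV
```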

@Lizb6626 (Author)

Thank you. But the albedo maps don't seem to be aligned with the texture_kd map you provided: the white part of the baking can appears brighter in the texture_kd map. Could something be wrong with the color space?

texture_kd_part
0000
