Channel Scale in Relighting. #20
Hello @JiuTongBro, this is the script we used for the relighting comparison. Let me know if it works for you.
Thanks for your code. However, I still can't get the relighting result with it. How do you put this scale back into the texture-map rendering in Blender? The relighting code you provided in https://github.com/NVlabs/nvdiffrecmc/issues/14 makes no use of the computed scale, so I wonder where you apply it to the scene texture. I tried directly multiplying this scale tuple with the generated kd texture map under the predicted 'mesh/' directory, but as shown below, the relighting result is too red. I also tried passing this scale tuple to a scale node in Blender to correct the color, and that result is not as red. By the way, the computed scale on the hotdog scene is [0.81255096 0.53686862 0.35195918] for the RGB channels; is that close to your result? Thanks.
I suppose I have figured out why: the predicted and GT albedo of each view should both be converted to sRGB space before computing the scale, which is then multiplied back onto the scene texture map, also in sRGB space.
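For concreteness, a minimal sketch of that scaling step, assuming NumPy arrays of linear-space albedo renders gathered over all views and a boolean foreground mask (the function and variable names here are illustrative, not from the repository):

```python
import numpy as np

def linear_to_srgb(x):
    # Standard sRGB transfer function, elementwise on values in [0, 1].
    return np.where(x <= 0.0031308,
                    12.92 * x,
                    1.055 * np.power(np.clip(x, 1e-8, None), 1.0 / 2.4) - 0.055)

def compute_channel_scale(pred_albedo, gt_albedo, mask):
    """Per-channel scale between predicted and GT albedo, computed in sRGB.

    pred_albedo, gt_albedo: (N, 3) linear-space pixels from all views.
    mask: (N,) boolean foreground mask.
    """
    pred = linear_to_srgb(pred_albedo)[mask]
    gt = linear_to_srgb(gt_albedo)[mask]
    return gt.mean(axis=0) / pred.mean(axis=0)  # (3,) RGB scale

# The resulting scale would then be multiplied onto the kd texture map,
# also in sRGB space, before relighting:
#   kd_srgb_scaled = np.clip(kd_srgb * scale, 0.0, 1.0)
```

Whether the mean ratio or a least-squares fit is used for the per-channel scale is an assumption here; both are common NeRFactor-style normalizations.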
Hi, is there any difference in the relighting procedure between NVDIFFREC and NVDIFFRECMC? I have now successfully reproduced your results for NVDIFFRECMC, but ran into some trouble with the relighting results of NVDIFFREC.
Hi. First of all, thanks for your impressive work.
I have recently been trying to reproduce the relighting results in your paper. Although my predicted albedo is almost identical to the one in your paper, as shown below, I can't obtain the corresponding metric results your paper reports. I followed your relighting code in https://github.com/NVlabs/nvdiffrecmc/blob/main/blender/blender.py, and I suspect the problem lies in the per-RGB-channel scale processing.
Your predicted kd is in sRGB space, but the ground-truth albedo generated by Blender seems to be in linear space. Did you scale your albedo texture in sRGB space or in linear space? And for relighting, should I convert the texture_kd image to linear space before passing it to Blender with your code? Moreover, in https://github.com/NVlabs/nvdiffrecmc/issues/14 you mentioned that one can "compare raw PSNR to the reference, normalize by average intensity, or normalize color channels individually". Does that mean we should scale the relit image, rather than scaling the albedo of each view as in NeRFactor, to get the paper's results? Thanks.
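For reference, a minimal sketch of the standard sRGB transfer function pair such a conversion would use (assuming NumPy arrays with values in [0, 1]; these helpers are illustrative, not taken from the repository):

```python
import numpy as np

def srgb_to_linear(s):
    # Inverse sRGB transfer function (IEC 61966-2-1), elementwise.
    return np.where(s <= 0.04045,
                    s / 12.92,
                    np.power((s + 0.055) / 1.055, 2.4))

def linear_to_srgb(x):
    # Forward sRGB transfer function, elementwise on values in [0, 1].
    return np.where(x <= 0.0031308,
                    12.92 * x,
                    1.055 * np.power(np.clip(x, 0.0, None), 1.0 / 2.4) - 0.055)
```

Note that Blender's renderer works internally in linear space, so a kd texture stored in sRGB would typically need `srgb_to_linear` (or the image node's color-space setting) before shading.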