
MAE computation is different from Ref-NeRF #7

Closed

sjj118 opened this issue Oct 14, 2023 · 6 comments

sjj118 commented Oct 14, 2023

Hi, I have been running experiments on Shiny Blender recently and want to compare my results with yours, but I found that your MAE computation differs from Ref-NeRF's.

Ref-NeRF averages the MAE weighted by the accumulated alpha (acc):
https://github.com/google-research/multinerf/blob/5b4d4f64608ec8077222c52fdf814d40acc10bc1/internal/ref_utils.py#L45-L50
https://github.com/google-research/multinerf/blob/5b4d4f64608ec8077222c52fdf814d40acc10bc1/eval.py#L156-L163

But you average the MAE over all pixels:

nmf/renderer.py, lines 375 to 376 in 3eb6039:

norm_err *= test_dataset.acc_maps[im_idx].squeeze(-1)
norm_errs.append(norm_err.mean())

Since transparent pixels contribute an MAE of 0 but are still counted in the denominator, the final result will be much smaller than Ref-NeRF's.
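To make the difference concrete, here is a minimal NumPy sketch of the two averaging schemes (the function names and the toy arrays are mine, for illustration only — not code from either repo):

```python
import numpy as np

def mae_all_pixels(norm_err, acc):
    # Masks transparent pixels to zero, then averages over ALL pixels,
    # so a mostly empty background drags the mean down.
    return (norm_err * acc).mean()

def mae_ref_nerf(norm_err, acc):
    # Ref-NeRF style: weighted average, normalized by the total
    # accumulated opacity, so empty pixels do not dilute the metric.
    return (norm_err * acc).sum() / acc.sum()

# Toy image: 2 opaque pixels with 10 degrees of error, 2 transparent pixels.
err = np.array([10.0, 10.0, 0.0, 0.0])
acc = np.array([1.0, 1.0, 0.0, 0.0])
print(mae_all_pixels(err, acc))  # → 5.0
print(mae_ref_nerf(err, acc))    # → 10.0
```

With half the pixels transparent, averaging over all pixels halves the reported MAE relative to the opacity-weighted average.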

half-potato (Owner) commented Oct 14, 2023

Thanks for bringing this to my attention. I will recalculate the values and update the paper.

New values:
blender_dataset norm_err: 20.952093958854675
shiny_dataset norm_err: 6.060845931371053
(updated)

I'll also fix the NVDiffRec and NVDiffRecMC values.

half-potato (Owner) commented
I have updated the arXiv paper.

sjj118 (Author) commented Oct 18, 2023

Thanks for your update.

I have encountered another problem when reproducing the experiment on helmet. I ran the experiment with:
python train.py -m expname=v38_noupsample model=microfacet_tensorf2 dataset=helmet vis_every=5000 datadir={dataset dir}

But it seems to fail to learn the correct geometry and normal vectors.

[screenshot 083: incorrect geometry and normals]
I'm not sure whether the novel view synthesis results in Table 1 were trained on the HDR or the original images from Shiny Blender. Would using HDR images influence the results?

half-potato (Owner) commented
This is probably caused by an incorrect mixing mode. Can you check the output config.yaml to see whether the diffuse mixing mode is set to "fresnel"?

diffuse_mixing_mode: "fresnel"

If it isn't, you can set it by passing model.arch.model.diffuse_mixing_mode="fresnel".

sjj118 (Author) commented Oct 19, 2023

I'm using the latest version of the code, where diffuse_mixing_mode is set to "fresnel" by default.

By comparing my output config.yaml with the config.yaml from your relighting experiment, I found that the correct normals are obtained only when field.smoothing is set to 1.

The command I used:
python train.py -m expname=smoothing model=microfacet_tensorf2 dataset=helmet field.smoothing=1 vis_every=5000 datadir={dataset dir}
[screenshot 092: correct normals]
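The kind of config comparison described above can be sketched with a stdlib-only diff over flattened "key: value" config text (the parser and the example values below are illustrative; real Hydra configs are nested YAML and would need a proper YAML loader):

```python
def flat_config_diff(a_text, b_text):
    """Return the keys whose values differ between two flat config texts."""
    def parse(text):
        out = {}
        for line in text.splitlines():
            if ":" in line and not line.lstrip().startswith("#"):
                key, _, val = line.partition(":")
                out[key.strip()] = val.strip()
        return out

    a, b = parse(a_text), parse(b_text)
    # Keys present in either config whose values disagree.
    return {k: (a.get(k), b.get(k)) for k in set(a) | set(b) if a.get(k) != b.get(k)}

# Illustrative: spotting a smoothing difference between two runs.
run_cfg = "field.smoothing: 0.5\ndiffuse_mixing_mode: fresnel"
relight_cfg = "field.smoothing: 1\ndiffuse_mixing_mode: fresnel"
print(flat_config_diff(run_cfg, relight_cfg))  # → {'field.smoothing': ('0.5', '1')}
```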

half-potato (Owner) commented Oct 19, 2023

Thanks for the help! I have made this the default.
