Evaluation set used in paper? #3
@jasonyzhang Sorry for the follow-up message! Could you also share more about the quantitative evaluation details (e.g., which view is held out)? That would be super helpful for reproducing and comparisons, thanks!
Hi, I'm working on releasing it ASAP! I'll post the code, models, and splits to recreate the numbers, hopefully by the end of the week.
Thanks so much for your help! Really appreciate it ;)
Hi, sorry for the delay! I've now posted all the data for evaluation, which includes the off-the-shelf cameras (pre-processed to minimize re-projection error between the template car mesh and the mask) and the optimized cameras (which have also been processed with some manual input). The data also includes the rendered views from NeRS in the NVS evaluation protocol. I show how to replicate the numbers using the rendered views as well. Please let me know if you encounter any issues!
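The actual pre-processing code isn't shown in this thread, but as a rough illustration of the idea (not the repo's implementation), here is a toy sketch of scoring a candidate camera by how well the projected template vertices agree with the mask. All names here are hypothetical; a bounding-box IoU stands in for a proper silhouette re-projection error:

```python
import numpy as np

def project(points, f, R, t):
    """Pinhole projection of Nx3 world points with focal f, rotation R, translation t."""
    cam = points @ R.T + t               # world -> camera coordinates
    return f * cam[:, :2] / cam[:, 2:3]  # perspective divide

def bbox_iou(a, b):
    """IoU of two axis-aligned boxes given as (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

# Toy "template mesh" (cube vertices) and a bounding box of a hypothetical mask.
verts = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)], float)
mask_box = (-0.5, -0.5, 0.5, 0.5)

# Score a candidate camera: project the vertices, compare silhouette bboxes.
uv = project(verts, f=1.0, R=np.eye(3), t=np.array([0.0, 0.0, 4.0]))
pred_box = (uv[:, 0].min(), uv[:, 1].min(), uv[:, 0].max(), uv[:, 1].max())
score = bbox_iou(pred_box, mask_box)  # higher = better camera fit
```

A real pipeline would rasterize the mesh silhouette and optimize the camera parameters against the mask directly; this only sketches the objective being minimized.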
Hi @jasonyzhang, thanks for the update!! Really appreciate it! I could reproduce the numbers using the provided evaluation protocol. However, I noticed that if we use clean-fid to compute the FID scores, the numbers are inconsistent with the paper's.
I guess you used pytorch-fid to compute the FID scores in the paper. Would you mind sharing the clean-fid scores for all the baseline models? Thanks a lot!
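For context on why the two libraries disagree: both implement the same Fréchet distance between Gaussians fit to Inception features; the discrepancies typically come from preprocessing (resizing filter, quantization) before feature extraction. A minimal NumPy sketch of the distance itself (our own helper, not either library's API):

```python
import numpy as np

def frechet_distance(feats_a, feats_b):
    """Frechet distance between Gaussians fit to two feature sets (rows = samples)."""
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    diff = mu_a - mu_b
    # Tr((cov_a @ cov_b)^(1/2)) via eigenvalues (real and non-negative in theory).
    eigvals = np.linalg.eigvals(cov_a @ cov_b)
    tr_sqrt = np.sqrt(np.abs(eigvals)).sum()
    return diff @ diff + np.trace(cov_a) + np.trace(cov_b) - 2 * tr_sqrt

rng = np.random.default_rng(0)
feats = rng.standard_normal((500, 8))
fid_same = frechet_distance(feats, feats)         # ~0: identical distributions
fid_shift = frechet_distance(feats, feats + 1.0)  # ~8: mean shifted by 1 in 8 dims
```

Since the formula is shared, comparing clean-fid numbers against pytorch-fid numbers mixes two preprocessing conventions, which is why the baseline scores need to be recomputed with one library.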
Sorry, another question: from the eval code, it seems that the evaluation is done on all views (both training views and the held-out view). Is that the correct setting? I thought we should only evaluate on the novel views?
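To make the question concrete, here is a hypothetical sketch of the two settings (the split indices, array shapes, and metric are illustrative, not the repo's actual evaluation code):

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, max_val]."""
    mse = np.mean((pred - gt) ** 2)
    return 10 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
gt = rng.random((4, 8, 8, 3))                                   # 4 toy views
pred = np.clip(gt + 0.05 * rng.standard_normal(gt.shape), 0, 1)  # noisy renders

held_out = [3]                                                   # hypothetical split
train_views = [i for i in range(len(gt)) if i not in held_out]

psnr_all = np.mean([psnr(pred[i], gt[i]) for i in range(len(gt))])
psnr_novel = np.mean([psnr(pred[i], gt[i]) for i in held_out])
```

Averaging over all views (`psnr_all`) rewards fitting the training views, while restricting to `held_out` measures true novel-view synthesis; the two can differ substantially.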
Hi @jasonyzhang, another question: the results obtained by retraining look blurry compared to the paper's. Is it due to different hyperparameters? Thanks a lot for your great help in advance!
Hi,
Re: FID. The FID for ners_fixed in the paper was 60.9, so only slightly off.
Re: evaluation protocol.
Re: blurry results.
I see, I see, thanks a lot for the detailed reply! Really appreciate it ;)
@jasonyzhang Thanks for the awesome work and public code!! In this paper, 20 actors in the MVMC dataset are used for quantitative evaluation. Could you please share the actor IDs of the evaluation set for easier comparisons?
Thanks for your great help!