
How did you configure your test set? #30

Open
Lee-JaeWon opened this issue Mar 11, 2024 · 2 comments

@Lee-JaeWon

Thanks for a great paper.

I was wondering how you evaluated the metrics like PSNR and SSIM reported in your paper.

Specifically, how many test views did you hold out from each dataset when computing those numbers?

I ask because it doesn't seem to be directly mentioned in the paper.

@inspirelt
Collaborator

Thanks. For datasets without an official test split, we follow the common configuration: hold out 1 frame out of every 8 as a test frame. For BungeeNeRF, we take the first 30 frames as the test set. Details in:

if eval:
    if lod > 0:
        # BungeeNeRF-style split: `lod` is an index threshold that separates
        # train and test cameras (e.g. lod = 30 holds out the leading frames).
        print('using lod, using eval')
        if lod < 50:
            train_cam_infos = [c for idx, c in enumerate(cam_infos) if idx > lod]
            test_cam_infos = [c for idx, c in enumerate(cam_infos) if idx <= lod]
            print(f'test_cam_infos: {len(test_cam_infos)}')
        else:
            train_cam_infos = [c for idx, c in enumerate(cam_infos) if idx <= lod]
            test_cam_infos = [c for idx, c in enumerate(cam_infos) if idx > lod]
    else:
        # Default split: hold out every llffhold-th frame
        # (llffhold = 8 gives the common 1-in-8 test split).
        train_cam_infos = [c for idx, c in enumerate(cam_infos) if idx % llffhold != 0]
        test_cam_infos = [c for idx, c in enumerate(cam_infos) if idx % llffhold == 0]
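
For concreteness, a toy illustration of the default branch above (a minimal sketch, assuming llffhold = 8, the value that produces the 1-in-8 split described above):

    # Minimal sketch: the every-8th-frame hold-out, assuming llffhold = 8.
    cam_infos = list(range(24))  # stand-in for 24 loaded cameras
    llffhold = 8

    train = [c for idx, c in enumerate(cam_infos) if idx % llffhold != 0]
    test = [c for idx, c in enumerate(cam_infos) if idx % llffhold == 0]

    print(test)                   # [0, 8, 16]: 1 of every 8 frames held out
    print(len(train), len(test))  # 21 3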

@Torment123
Copy link

Hi, I have a follow-up question: I see that the appearance embedding is constructed based on the number of training cameras, and when switching to eval mode, the uid of the test camera is used directly to query the learned embedding.

If I understand the appearance embedding correctly, it is set up so that view-dependent effects can be better encoded. But since the test cameras and the train cameras are different views, their uids mean different things in this respect, so wouldn't querying the same learned embedding with a test uid produce the wrong effect? Thanks.
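
(For readers following along, here is a minimal sketch of the setup this question describes, assuming a per-camera nn.Embedding keyed by uid; the names num_train_cams and appearance_dim are illustrative assumptions, not the repo's actual API.)

    import torch
    import torch.nn as nn

    # Minimal sketch of the concern above (illustrative names, not the
    # repo's exact API): one learned appearance vector per *training*
    # camera uid.
    num_train_cams = 100   # assumed size of the training set
    appearance_dim = 32    # assumed embedding width
    embedding = nn.Embedding(num_train_cams, appearance_dim)

    # During training, each train camera's uid indexes its own learned vector.
    train_code = embedding(torch.tensor([7]))  # shape: (1, 32)

    # At eval time, a test camera's uid indexes the same table, even though
    # that slot was optimized for a different (training) viewpoint; this
    # reuse is exactly the mismatch the question points at.
    test_code = embedding(torch.tensor([7]))   # test cam 7 is not train cam 7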
