
Bug in model evaluation—datasets with more than 1000 images have very low PSNRs. Easy fix. #5

Open
tristanengst opened this issue May 13, 2024 · 0 comments

The render_test_evaluation() function in train_nvfi() writes generated images to disk with filenames of the form r_some_number.png, where some_number is the position of the corresponding target in all_targets. However, the read_images_in_dir() function sorts these images by filename as strings rather than as numbers, so there is a mismatch between the ordering of the generated examples and the targets.

This shouldn't be impactful if there are fewer than 1000 images, since the indices are zero-padded (up to three zeros prepended) in the corresponding filenames.

The fix is simple: in the read_images_in_dir() function in utils/metrics.py, replace

fnames.sort()

with

remove_non_numeric = lambda x: ''.join(filter(str.isdigit, x))
fnames = sorted(fnames, key=lambda x: int(remove_non_numeric(x)))
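For anyone wanting to verify the failure mode, here is a minimal self-contained sketch (the filenames below are illustrative, not taken from an actual run) showing how lexicographic sorting misorders indices past 999, and how the numeric key fixes it:

```python
# Hypothetical filenames in the style r_<index>.png, chosen to expose the bug.
fnames = [f"r_{i}.png" for i in (999, 1000, 1001, 2)]

# Lexicographic string sort: "r_1000.png" sorts before "r_2.png",
# so generated images and targets get paired up incorrectly.
lexicographic = sorted(fnames)

# Numeric sort, as in the proposed fix: strip non-digit characters
# and compare the remaining index as an integer.
remove_non_numeric = lambda x: ''.join(filter(str.isdigit, x))
numeric = sorted(fnames, key=lambda x: int(remove_non_numeric(x)))

print(lexicographic)  # ['r_1000.png', 'r_1001.png', 'r_2.png', 'r_999.png']
print(numeric)        # ['r_2.png', 'r_999.png', 'r_1000.png', 'r_1001.png']
```

Note that the numeric key only works because the ".png" extension contains no digits; filenames with other digit runs would need a stricter parse (e.g. a regex on the index).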
@tristanengst changed the title from “Bug in model evaluation—datasets with more than 1000 images have very low PSNRs” to “Bug in model evaluation—datasets with more than 1000 images have very low PSNRs. Easy fix.” on May 13, 2024