
Precision / recall reproducibility #58

Open

sangyun884 opened this issue Mar 12, 2024 · 0 comments

@sangyun884
Was the reproducibility of these new metrics checked against the official repo's results? I tested with my mode-collapsed generative model, which produces nearly identical images like the one below:

[image: SCR-20240311-twrd — nearly identical generated car samples]

and got these results:

inception_score_mean: 1.12824
inception_score_std: 0.0006825597
kernel_inception_distance_mean: 0.239309
kernel_inception_distance_std: 0.002847237
precision: 4e-05
recall: 0.99084
f_score: 7.999677e-05

We can see that precision is nearly zero while recall is close to 1. Recall is supposed to measure the diversity of the generated samples, so it should be close to zero in this case. Conversely, the generated car clearly lies on the true data manifold, so precision should be close to one. The two results appear to be swapped.
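For context, the precision/recall metric here follows (as far as I understand) the improved precision and recall of Kynkäänniemi et al. (2019), where each feature set defines a k-NN manifold. Below is a minimal sketch of that definition — my own illustration, not the library's code, assuming pre-extracted feature tensors `real` and `fake` (hypothetical names) — which shows why the two numbers are asymmetric and why exchanging the two feature sets flips them exactly:

```python
import torch

def manifold_radii(feats, k=3):
    # Radius of each point's k-NN ball within its own feature set.
    d = torch.cdist(feats, feats)
    # k+1 because each point's nearest neighbor is itself (distance 0).
    return d.kthvalue(k + 1, dim=1).values

def in_manifold(query, ref, ref_radii):
    # A query point is "on" the reference manifold if it falls inside
    # the k-NN ball of at least one reference point.
    d = torch.cdist(query, ref)                  # (n_query, n_ref)
    return (d <= ref_radii.unsqueeze(0)).any(dim=1)

def precision_recall(real, fake, k=3):
    # Precision: fraction of generated samples on the real manifold.
    precision = in_manifold(fake, real, manifold_radii(real, k)).float().mean()
    # Recall: fraction of real samples on the generated manifold.
    recall = in_manifold(real, fake, manifold_radii(fake, k)).float().mean()
    return precision.item(), recall.item()
```

Under mode collapse, almost every generated feature sits inside the real manifold (precision ≈ 1) while the collapsed generated manifold covers almost none of the real features (recall ≈ 0), so precision ≈ 4e-05 with recall ≈ 0.99 is exactly what you would get if the roles of the two feature sets were swapped somewhere.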

I generated 50,000 samples and ran the following command:

fidelity --prc --isc --kid --input1 ${dir}/${iteration}-50k/samples --input2 cifar10-train --gpu 0 | tee ${dir}/${iteration}-50k/fidelity.txt
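For completeness, the same run can be expressed through the Python API. A minimal sketch, assuming `torch_fidelity.calculate_metrics` accepts a `prc` keyword mirroring the `--prc` CLI flag and returns keys matching the names printed above; `samples` is a placeholder for the actual directory:

```python
import torch_fidelity

# Sketch of the Python-API equivalent of the CLI call above.
# 'samples' stands in for the generated-image directory.
metrics = torch_fidelity.calculate_metrics(
    input1='samples',        # directory of 50k generated images
    input2='cifar10-train',  # registered CIFAR-10 training split
    isc=True,                # Inception Score
    kid=True,                # Kernel Inception Distance
    prc=True,                # precision / recall / f_score (assumed keyword)
    cuda=True,
)
print(metrics['precision'], metrics['recall'])
```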
