
FID distance does not return value close to zero for the same distribution #277

Closed
gursimar opened this issue Nov 5, 2021 · 3 comments
Labels
bug Something isn't working

Comments

@gursimar

gursimar commented Nov 5, 2021

Describe the bug
piq.FID()(x_feats, x_feats) does not return a value close to zero. Is that expected? Since it's a distance, if the two distributions are the same multivariate Gaussian, it should return a value close to zero.

To Reproduce
import torch
import piq

x_feats = torch.rand(10000, 2048) * 5000
piq.FID()(x_feats, x_feats)

@gursimar gursimar added the bug Something isn't working label Nov 5, 2021
@zakajd
Collaborator

zakajd commented Nov 6, 2021

Hi @gursimar
It works for me with a smaller feature dimension.

import torch
import piq

x_feats = torch.rand(10000, 32)
y_feats = torch.rand(10000, 32)
piq.FID()(x_feats, y_feats)

>>> tensor(0.0043, dtype=torch.float64)

For higher dimensionality (like 2048 in your example), you'll need a much larger number of distinct samples for the distributions to be considered equal. Also, when passing exactly the same features as both x and y you may encounter a singular matrix, so it's better to keep them different.
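To illustrate the interaction between dimensionality and sample size without depending on piq, here is a minimal sketch using the standard closed-form Fréchet distance between two Gaussians, ||mu1 - mu2||² + Tr(S1 + S2 - 2·sqrt(S1·S2)). The helper names are mine, and this is not piq's exact implementation; it only demonstrates that, at a fixed sample count, the empirical FID between two independent draws from the same distribution grows with the feature dimension.

```python
import numpy as np
from scipy import linalg

def fid_from_stats(mu1, sigma1, mu2, sigma2):
    # Frechet distance between two Gaussians:
    # ||mu1 - mu2||^2 + Tr(S1 + S2 - 2*sqrt(S1 @ S2))
    diff = mu1 - mu2
    covmean = linalg.sqrtm(sigma1 @ sigma2).real  # drop tiny imaginary parts
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)

def empirical_fid(x, y):
    return fid_from_stats(x.mean(0), np.cov(x, rowvar=False),
                          y.mean(0), np.cov(y, rowvar=False))

rng = np.random.default_rng(0)

# Two independent draws from the *same* uniform distribution.
low = empirical_fid(rng.random((10000, 8)), rng.random((10000, 8)))
high = empirical_fid(rng.random((10000, 256)), rng.random((10000, 256)))
print(low, high)  # the 256-d estimate is much larger than the 8-d one
```

With 10,000 samples the 8-dimensional estimate is near zero, while the 256-dimensional one is visibly larger, purely from estimation error in the means and covariances.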

@zakajd zakajd closed this as completed Nov 6, 2021
@gursimar
Author

gursimar commented Nov 10, 2021

@zakajd, I think the dimension might not be the cause. Try these code snippets.

import torch
import piq

x_feats = torch.rand(10000, 32) * 5000
y_feats = torch.rand(10000, 32) * 5000
piq.FID()(x_feats, y_feats)

>>> tensor(12293594.2559, dtype=torch.float64)

Or try this

import torch
import piq

x_feats = torch.rand(10000, 32) * 5000
piq.FID()(x_feats, x_feats)

>>> tensor(242.7995, dtype=torch.float64)

Why does the output change so much (from your 0.0043) if you scale the distribution?
In the second case, it's essentially the same features, just scaled by a scalar.
Is this inherent to the metric (maybe related to the covariance), or is there a bug in the implementation?
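For what it's worth, FID of a sample against itself is exactly zero in theory: the mean term vanishes and Tr(2·S - 2·sqrt(S·S)) should cancel. One plausible mechanism for the nonzero 242.8 (a numpy/scipy sketch, not piq's code, so this may not be the actual cause) is that the matrix square root is only approximate in floating point, and its absolute error grows with the magnitude of the covariance entries, which scaling by 5000 multiplies by 25,000,000:

```python
import numpy as np
from scipy import linalg

def self_fid(feats):
    # Self-FID reduces to Tr(2*S - 2*sqrt(S @ S)), which is 0 in exact
    # arithmetic. In floating point, sqrtm(S @ S) only approximates S,
    # and the residual grows with the magnitude of S's entries.
    s = np.cov(feats, rowvar=False)
    covmean = linalg.sqrtm(s @ s).real
    return np.trace(2.0 * s - 2.0 * covmean)

rng = np.random.default_rng(0)
x = rng.random((10000, 32))

small = abs(self_fid(x))         # covariance entries are O(1/12)
large = abs(self_fid(x * 5000))  # covariance entries ~25e6 times larger
print(small, large)
```

The residual for the scaled features is orders of magnitude larger, even though both are mathematically zero.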

@zakajd
Collaborator

zakajd commented Nov 11, 2021

@gursimar Good point, thanks.
Did you try this with the original TF implementation? Does it behave differently?
Did you try it with smaller scaling values (say 10 or 100)?

Our implementation is based on the original code, but doesn't match it 100% due to a different covariance matrix computation.
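I don't know the exact covariance formulas each implementation uses, but one common way two implementations diverge is the normalization convention: dividing by n - 1 (unbiased) versus n (biased). A quick numpy illustration of the gap this introduces:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((1000, 4))

# numpy's default divides by n - 1; bias=True divides by n.
# Two FID implementations that disagree on this convention will
# produce slightly different values from the same features.
unbiased = np.cov(x, rowvar=False)
biased = np.cov(x, rowvar=False, bias=True)
print(np.abs(unbiased - biased).max())  # small but nonzero
```

The difference shrinks as the sample count grows, so it would not explain a large discrepancy on its own, only small mismatches between implementations.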
