Question about the FVD evaluation #7

Closed
hsi1032 opened this issue Oct 1, 2021 · 6 comments

hsi1032 commented Oct 1, 2021

Hi,

First of all, thank you for your great work!

As I read your paper, I understand that the FVD is calculated from 2048 videos at 128x128 resolution from the UCF101 dataset.

To evaluate your model on UCF101, I randomly sampled 2048 real videos (random clips of 16 consecutive frames) and resized them to 128x128 resolution.
Then I calculated the FVD between the sampled real and fake videos.
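
For reference, here is a minimal sketch of how I build the real batch; `load_ucf101_videos` and the exact decoding are placeholders, just to make the sampling step concrete:

```python
import random

import cv2  # only used to resize frames; any resizing routine works
import numpy as np

def sample_clip(frames, clip_len=16):
    """Pick a random window of clip_len consecutive frames from one video."""
    start = random.randint(0, len(frames) - clip_len)
    return frames[start:start + clip_len]

def resize_clip(clip, size=128):
    """Resize every frame of the clip to size x size."""
    return np.stack([cv2.resize(frame, (size, size)) for frame in clip])

# load_ucf101_videos is a placeholder for whatever loader/decoder you use;
# assume it returns a list of videos as (T, H, W, 3) uint8 arrays.
videos = load_ucf101_videos()
real_batch = np.stack(
    [resize_clip(sample_clip(v)) for v in random.sample(videos, 2048)]
)  # shape: (2048, 16, 128, 128, 3)
```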

As a result, I got 625.87, which is a little lower than the distance you reported.
I think either there is some difference in how the real video samples are built compared to your implementation, or the FVD fluctuates a lot due to the randomness of sampling.

Could you share the detailed evaluation process for FVD on the UCF101 and FaceForensics datasets?

Thanks,

@alanspike
Collaborator

Hi,

Thanks for your interest. We used the FVD code from here.

Also, we made some changes to the released MoCoGAN-HD repo, such as switching from DataParallel to DistributedDataParallel, and used that repo to train on the different datasets to get the released checkpoints, so there might be some differences in the metrics.

hsi1032 commented Oct 2, 2021

Thanks for your prompt response!

Does that mean the evaluation process I used is the same as your implementation?
(FVD between 2048 randomly sampled real videos and fake videos)

bluer555 commented Oct 2, 2021

Yes, we did it in the same way :)

hsi1032 commented Oct 2, 2021

Now I understand the detailed evaluation process!

Thanks again for the kind reply!

hsi1032 closed this as completed Oct 2, 2021

songweige commented Oct 27, 2021

Hi, thank you for maintaining the codebase and replying to issues so promptly!

I have a follow-up question based on the comments of @bluer555 and @hsi1032: did you compute the distance between the 2048 I3D feature vectors of the real and fake batches, or compute the distance over 16 feature vectors at a time and average over 2048/16 = 128 distances? If it is the former, how did you get the standard deviation in the table? Thanks!
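
(To make the former option concrete, this is roughly the computation I have in mind, fitting a Gaussian to each set of I3D features; the shapes are my assumption, not necessarily your exact code:)

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(real_feats, fake_feats):
    """Frechet distance between Gaussians fitted to two feature sets.

    real_feats, fake_feats: (N, D) arrays of I3D features, here N = 2048.
    """
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    sigma_r = np.cov(real_feats, rowvar=False)
    sigma_f = np.cov(fake_feats, rowvar=False)
    covmean = sqrtm(sigma_r @ sigma_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop small imaginary parts from numerical error
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(sigma_r + sigma_f - 2.0 * covmean))
```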

@bluer555

Hi,

We compute the distance using 2048 feature vectors and repeat this process 10 times to get the std.
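
Roughly speaking, something like the following, where compute_fvd stands in for the full pipeline (sample 2048 real and generated clips, extract I3D features, compute the Fréchet distance):

```python
import numpy as np

# compute_fvd is a placeholder for the full pipeline described above.
scores = [compute_fvd(num_videos=2048) for _ in range(10)]
print(f"FVD: {np.mean(scores):.2f} +/- {np.std(scores):.2f}")
```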
