
Generator Evaluation Metric #17

Open
ulucsahin opened this issue Dec 15, 2019 · 0 comments
@ulucsahin
How would we go about implementing an evaluation metric for the generator part?

I have tried loading a pre-trained discriminator in addition to the discriminator trained during training, and tested the generator against the pre-trained discriminator at the end of each epoch. But I am not sure whether this is a good way to measure the generator's performance.
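To make the idea concrete, here is a minimal sketch of that per-epoch check. The `generator` and `pretrained_disc` callables are hypothetical stand-ins for the actual T2F models, not real APIs from the repo:

```python
import numpy as np

def score_generator(generator, pretrained_disc,
                    num_samples=256, latent_dim=128, seed=0):
    """Average score a frozen pre-trained discriminator gives to fakes.

    `generator` maps latent vectors to samples; `pretrained_disc` maps
    samples to realness scores in [0, 1]. Both are assumed interfaces.
    A higher mean score means the frozen discriminator finds the
    generated samples more realistic.
    """
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(num_samples, latent_dim))
    fakes = generator(z)
    return float(np.mean(pretrained_disc(fakes)))
```

One caveat with this approach: a fixed discriminator can be "gamed" over training, since the generator may drift toward whatever that particular discriminator happens to score highly, so the number is best treated as a relative sanity check rather than an absolute quality measure.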

Are there any feasible methods? I did a short literature survey of GAN evaluation methods, but they are usually designed for datasets with defined classes. Since we do not have classes in T2F (or do we?), I have a hard time implementing methods such as Inception Score, Frechet Inception Distance, etc.
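For what it's worth, FID (unlike Inception Score) does not actually need class labels; it only compares feature statistics of real vs. generated samples. A minimal numpy sketch of the FID formula, assuming feature vectors have already been extracted by some fixed network (which network to use for face data is an open choice):

```python
import numpy as np

def _sqrtm_psd(mat):
    # Symmetric PSD matrix square root via eigendecomposition.
    vals, vecs = np.linalg.eigh(mat)
    vals = np.clip(vals, 0.0, None)  # clip tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.T

def frechet_distance(feats_real, feats_fake):
    """FID between two (n_samples, n_dims) arrays of feature vectors:
    ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^{1/2})."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    # Tr((C_r C_f)^{1/2}) via the symmetric form C_r^{1/2} C_f C_r^{1/2}
    s = _sqrtm_psd(cov_r)
    tr_covmean = np.trace(_sqrtm_psd(s @ cov_f @ s))
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r) + np.trace(cov_f)
                 - 2.0 * tr_covmean)
```

The distance is zero when the two feature sets have identical mean and covariance, and grows as the generated distribution drifts away from the real one.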

One method I found is CrossLID (https://arxiv.org/abs/1905.00643), which also has an implementation on GitHub. However, I have not tried it yet, as I am unsure whether it is suitable for this dataset and model.

@ulucsahin ulucsahin changed the title Evaluation Metric Generator Evaluation Metric Dec 15, 2019