
Using Generative Adversarial Loss Instead of L2 Loss #1

Closed
azurespace opened this issue Nov 3, 2016 · 1 comment

@azurespace commented Nov 3, 2016
Your work is very impressive to me. I think this approach could also be applied to copying font styles across different languages.

But as you noted on the front page, some of the images your network generates are blurry. That is a well-known characteristic of the L2 (MSE) loss.

So I think it would be worth trying a GAN. I guess you already know this, but here is a brief explanation anyway: when you train a GAN, you simultaneously train a second network (called the discriminator) that predicts whether an image is real or generated, and the generator is trained to fool it.

Here is a TensorFlow implementation of a GAN for super-resolution:
https://github.com/buriburisuri/SRGAN
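To make the suggestion concrete, here is a minimal NumPy sketch of the standard (non-saturating) GAN objective. The function names and the toy discriminator outputs are illustrative assumptions, not taken from this repo or from SRGAN:

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    # Binary cross-entropy between discriminator probabilities and labels.
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def gan_losses(d_real, d_fake):
    """Standard (non-saturating) GAN losses.

    d_real: discriminator probabilities on real images.
    d_fake: discriminator probabilities on generated images.
    """
    # Discriminator: push D(real) -> 1 and D(fake) -> 0.
    d_loss = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))
    # Generator (non-saturating form): push D(fake) -> 1.
    g_loss = bce(d_fake, np.ones_like(d_fake))
    return d_loss, g_loss

# Toy example: the discriminator currently classifies well,
# so its loss is small and the generator's loss is large.
d_loss, g_loss = gan_losses(np.array([0.9]), np.array([0.1]))
# d_loss ≈ 0.211, g_loss ≈ 2.303
```

In practice this adversarial term is usually added to (not substituted for) the L2 loss, weighted so the pixel loss keeps the output aligned with the target while the GAN term sharpens it, which is the combination SRGAN-style models use.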

@kaonashi-tyc (Owner) commented Apr 10, 2017

@azurespace Consider this fixed: https://github.com/kaonashi-tyc/zi2zi

Can we close this issue right now? :)
