Your work is very impressive. I think it could be applied to font style transfer across different languages.
However, as you note on the front page, some of the images your network generates are blurry. That is a known characteristic of the L2 (MSE) loss.
So I think it would be worth trying a GAN. You probably already know this, but briefly: when training a GAN, you simultaneously train a second network (the discriminator) that predicts whether an image is real or generated.
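A rough sketch of that loss structure, assuming a standard binary cross-entropy GAN objective combined with the existing MSE term (the function names and the `lam` weight are illustrative, not taken from any specific implementation):

```python
import numpy as np

def bce(p, target):
    # Binary cross-entropy for a single predicted probability p in (0, 1).
    eps = 1e-7
    p = np.clip(p, eps, 1 - eps)
    return -(target * np.log(p) + (1 - target) * np.log(1 - p))

def discriminator_loss(d_real, d_fake):
    # The discriminator is trained to output 1 for real images
    # and 0 for generated ones.
    return bce(d_real, 1.0) + bce(d_fake, 0.0)

def generator_loss(d_fake, mse, lam=0.01):
    # The generator keeps its reconstruction (MSE) term but adds an
    # adversarial term that rewards fooling the discriminator
    # (pushing d_fake toward 1), which discourages blurry outputs.
    return mse + lam * bce(d_fake, 1.0)
```

In practice `d_real` / `d_fake` would come from the discriminator network, and the two networks are updated in alternation each training step.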
Here is a TensorFlow implementation of SRGAN (a GAN for super-resolution):
https://github.com/buriburisuri/SRGAN