How to train EDSR_baseline_x4 with WGAN-GP? #27

Closed
gentleboy opened this issue Apr 24, 2018 · 11 comments


@gentleboy

Hi, I'm trying to train EDSR_baseline_x4 with WGAN-GP, but I don't know how to do it. I want to ask the following questions:

  1. In the discriminator, should batch normalization be removed? (I see that batch normalization has not been removed in your code.)

  2. How should I set Adam's (beta1, beta2, learning rate) for optimizing the discriminator and generator?

  3. How should I set the k value for the adversarial loss? (I see that the default value of gan_k is 1 in your code.)

  4. How should I set the weights of the VGG54 and GAN loss terms?

Can you give me some advice?

Thank you!

@sanghyun-son
Owner

Hello.

I think the answers below can help you.

  1. I think batch normalization *can* be removed, but it is not mandatory.

  2. These lines control the hyperparameters of the generator (SR network) optimizer. You can change them with input arguments (e.g. python main.py --lr 5e-5). If you are using the WGAN-GP configuration, you can modify the discriminator optimizer's hyperparameters by editing these lines; they are hard-coded. When using the GAN or WGAN loss, the generator and discriminator share their optimizer hyperparameters. (See the optimizer sketch after this list.)

  3. Use the --gan_k [n] argument to modify it.

  4. You can refer to this line to check how to set the loss function.
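
For reference, the WGAN-GP paper uses Adam with betas = (0, 0.9) and a learning rate around 1e-4 for both networks, in contrast to the common default of (0.9, 0.999). A minimal sketch of separate optimizers, assuming placeholder generator / discriminator modules (the names below are illustrative, not the repository's actual variables):

```python
import torch.optim as optim

# Hypothetical modules; `generator` and `discriminator` are placeholders.
# The WGAN-GP paper (Gulrajani et al., 2017) recommends Adam with
# betas = (0, 0.9) and lr around 1e-4 for both networks.
optimizer_g = optim.Adam(generator.parameters(), lr=1e-4, betas=(0, 0.9))
optimizer_d = optim.Adam(discriminator.parameters(), lr=1e-4, betas=(0, 0.9))
```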

Although I implemented the WGAN loss and verified that it runs, there are several things to consider.

  1. In the original WGAN-GP paper, the authors set --gan_k to 5, but that takes a long time here because the output patch size is 192x192 or 96x96 in my default setting.

  2. If gan_k is larger than 1, you need multiple output batches to update the discriminator several times. This is not a big problem for traditional generative models (like DCGAN), because they generate images from multi-dimensional uniform or normal distributions, which can be sampled at any time. However, a super-resolution network has to take low-resolution patches as input, and those must be sampled from the dataset. At the moment, the adversarial loss class does not have access to the dataset, so I reuse a single batch to update the discriminator gan_k times (see the sketch after this list). I am not sure about this approach.

  3. The code runs, but it does not seem to converge.
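
To make point 2 concrete, here is a minimal sketch of a WGAN-GP update step that reuses a single (lr, hr) batch gan_k times, as described above. It shows only the adversarial terms; a real generator loss would also include the VGG/L1 parts. All module and variable names are placeholders, not code from this repository:

```python
import torch
import torch.autograd as autograd

def gradient_penalty(discriminator, real, fake, device):
    """Gradient penalty from the WGAN-GP paper (Gulrajani et al., 2017)."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    d_interp = discriminator(interp)
    grads = autograd.grad(
        outputs=d_interp, inputs=interp,
        grad_outputs=torch.ones_like(d_interp),
        create_graph=True,
    )[0].view(real.size(0), -1)
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()

# One training step; lr / hr come from the data loader (placeholder names).
fake = generator(lr).detach()      # a fixed generator output for D updates
for _ in range(gan_k):             # the same batch is reused gan_k times
    optimizer_d.zero_grad()
    d_loss = discriminator(fake).mean() - discriminator(hr).mean() \
        + 10 * gradient_penalty(discriminator, hr, fake, hr.device)
    d_loss.backward()
    optimizer_d.step()

# Generator update: maximize D(G(lr)), i.e. minimize -D(G(lr)).
optimizer_g.zero_grad()
g_loss = -discriminator(generator(lr)).mean()
g_loss.backward()
optimizer_g.step()
```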

For these reasons, I recommend using a naive GAN, as SRGAN did. You can do this by running this line.

Thank you!

@gentleboy
Author

Thank you for the detailed and clear answers.

In addition, I would also like to ask whether MDSR_baseline can be trained with a naive GAN. If so, how should I do it?

Thank you!

@sanghyun-son
Owner

You can train MDSR_baseline with an adversarial loss.

However, one thing has to be changed in the code.

Currently, MDSR is designed to take 48x48 input patches for all scales and returns 96x96, 144x144, and 192x192 output patches.

Because the GAN discriminator should take 96x96 patches as input, it is better to use a different input patch size for each scale (e.g. 48x48 for scale 2, 32x32 for scale 3, and 24x24 for scale 4).

Another approach is to use global average pooling at the end of the discriminator to make the model scale-independent.
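
As an illustration of the second approach, a discriminator that is fully convolutional and ends in global average pooling accepts any input resolution, so one network can score the 96x96, 144x144, and 192x192 patches produced by the three scales. A minimal sketch, assuming an arbitrary layer layout (this is not the repository's actual discriminator):

```python
import torch
import torch.nn as nn

class GAPDiscriminator(nn.Module):
    """Fully convolutional critic; the global average pooling at the end
    removes the dependence on input patch size, so the x2/x3/x4 outputs
    of MDSR can share a single discriminator."""
    def __init__(self, in_channels=3, n_feats=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, n_feats, 3, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(n_feats, n_feats * 2, 3, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(n_feats * 2, n_feats * 4, 3, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.score = nn.Conv2d(n_feats * 4, 1, 1)

    def forward(self, x):
        score_map = self.score(self.features(x))
        return score_map.mean(dim=(2, 3))  # global average pooling -> (N, 1)

# The same module handles all three MDSR output sizes:
d = GAPDiscriminator()
for size in (96, 144, 192):
    print(d(torch.randn(2, 3, size, size)).shape)  # torch.Size([2, 1])
```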

If you are not in a hurry, I will test this and upload the script.

Thank you!

@gentleboy
Author

Thank you for your reply. I am looking forward to your test results.

In addition, for EDSR_baseline_x4, I'm curious whether WGAN-GP would perform better than a naive GAN. I am not able to implement WGAN-GP correctly myself right now; could you take some time to test it?

Thank you very much!

@sanghyun-son
Owner

Hello.

I tested MDSR-GAN and got satisfying results.

You have to change some code for this experiment.

These two lines should be replaced with

tp = patch_size

if you want to train MDSR-GAN.
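
For context: the patch-cropping routine derives the target (HR) patch size tp from --patch_size and the scale, so setting tp = patch_size fixes the HR patch at --patch_size for every scale, which is the shape the discriminator expects. A rough sketch of the idea, simplified and not the exact code in this repository:

```python
import random

def get_patch(img_in, img_tar, patch_size, scale):
    """Crop matching LR/HR patches from numpy HWC arrays. With
    tp = patch_size, the HR (target) patch is always
    patch_size x patch_size regardless of scale."""
    ih, iw = img_in.shape[:2]
    tp = patch_size              # fixed HR patch size (the suggested change)
    ip = tp // scale             # corresponding LR patch size
    ix = random.randrange(0, iw - ip + 1)
    iy = random.randrange(0, ih - ip + 1)
    tx, ty = scale * ix, scale * iy
    img_in = img_in[iy:iy + ip, ix:ix + ip]
    img_tar = img_tar[ty:ty + tp, tx:tx + tp]
    return img_in, img_tar
```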

Also, I used the script below:

python main.py --template GAN --model MDSR --scale 2+3+4 --save MDSR_GAN --reset --patch_size 96 --loss 5*VGG54+0.15*GAN --pre_train ../experiment/model/MDSR_baseline.pt --ext bin --save_results --data_test Set14

Also, my WGAN-GP implementation is valid.

However, I do not think it is appropriate to apply the WGAN-GP formulation directly to super-resolution.

If I find a nice approach, I will let you know.

Thank you!

@gentleboy
Author

Can you send me a copy of the trained MDSR-GAN model? I want to see its super-resolution results.

My email address is: 972740042@qq.com

Thank you!

@sanghyun-son
Owner

You can download it from here.

I think the x4 output is not that satisfying with my default hyperparameters.

It may get better if you use a smaller weight for the adversarial loss (--loss 5*VGG54+0.1*GAN seems appropriate) or change other parameters.

Thank you!

@gentleboy
Author

Thank you!

@Jasas9754

https://github.com/JustinhoCHN/SRGAN_Wasserstein

A repository like this has appeared since then. I think it can serve as a reference.

@Jasas9754

And what about WGAN-hinge? https://arxiv.org/abs/1803.01541

Is it a waste of time? I'm curious about the result.
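
For reference, a hinge-style critic objective replaces the plain Wasserstein loss with a hinge on the critic scores. Below is a minimal sketch of the standard hinge GAN losses; whether this matches the linked paper's exact formulation is an assumption on my part:

```python
import torch.nn.functional as F

def d_hinge_loss(d_real, d_fake):
    """Discriminator hinge loss: push real scores above +1 and
    fake scores below -1 (standard hinge GAN objective)."""
    return F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()

def g_hinge_loss(d_fake):
    """Generator simply maximizes the critic score on generated samples."""
    return -d_fake.mean()
```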

@sanghyun-son
Owner

I think every experiment is worth trying.

However, I do not have enough time to implement advanced WGANs, so it would be very nice if someone made a pull request.

Thank you for letting me know about those ideas!
