LS-GAN Target and Multi-Level training #88

Open
Nadavc220 opened this issue Oct 14, 2020 · 2 comments

@Nadavc220

Hi,
Thanks for this contribution; the code is very easy to read and use.

My questions are:

  1. Was the LS-GAN result of 44.1% mIoU achieved using a single-level module? If so, which level was used, feature or output?
  2. Is there any problem with training a multi-level net with the LS-GAN loss target just by adding --gan LS, as in the single-level case?
  3. I see that --lambda-adv-target2 was also changed when switching to LS-GAN mode. Does this mean some hyperparameter search is needed in order to use LS-GAN with a multi-level model training process?

Thanks again. =)

@wasidennis
Owner

Thanks @Nadavc220 for the nice feedback.

  1. Yes, we only use the single-level module for LS-GAN, at the output level.
  2. We have not tried multi-level training with LS-GAN.
  3. Yes, adding the multi-level module could improve performance, but it will also require some hyperparameter tuning. If you intend to add it, I would suggest starting from the defaults --lambda-seg 0.1 and --lambda-adv-target1 0.002 (proportional to the original setting) and tuning from there; see the sketch after this list.
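
For anyone trying the multi-level LS-GAN setting, here is a minimal sketch of how the two adversarial terms could be combined, assuming MSE-based least-squares losses on the two discriminator outputs. The function and tensor names are hypothetical (not from this repository), and the output-level weight is left without a default since, as noted above, it was changed for the single-level LS-GAN run and would need its own tuning.

```python
import torch
import torch.nn as nn

def multi_level_ls_adv_loss(d_out_aux, d_out_main,
                            lambda_adv_target2,
                            lambda_adv_target1=0.002):
    """LS-GAN (least-squares) adversarial loss summed over two levels.

    d_out_aux / d_out_main are the discriminator scores for the auxiliary
    and main segmentation outputs on target images (names are illustrative).
    The segmentation network is pushed toward the label assumed for
    source-domain inputs via MSE instead of binary cross-entropy.
    """
    mse = nn.MSELoss()
    source_label = 0.0  # assumed labeling convention for "source" samples
    loss_adv1 = mse(d_out_aux, torch.full_like(d_out_aux, source_label))
    loss_adv2 = mse(d_out_main, torch.full_like(d_out_main, source_label))
    return lambda_adv_target1 * loss_adv1 + lambda_adv_target2 * loss_adv2
```

The segmentation terms (with --lambda-seg weighting the auxiliary output) would be added on top of this, as in the usual multi-level training.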

@Nadavc220
Author

Thanks for the quick response.
