Hi,
Thanks for this contribution, the code is very easy to read and use.
My questions are:
Was the LS-GAN result of 44.1% mIoU achieved using a single-level module? If so, which level was used: feature or output?
Is there any problem with training a multi-level net against the LS-GAN loss just by adding --gan LS, as in the single-level case?
I also see that --lambda-adv-target2 was changed when switching to LS-GAN mode. Does this mean some hyperparameter search is needed to use LS-GAN with multi-level training?
Thanks again. =)
Yes, we only use the single-level module for LS-GAN, applied at the output level.
We have not tried to use multi-level for LS-GAN.
Yes, adding the multi-level module could improve performance, but it will also need some hyperparameter tuning. If you intend to add it, I would suggest starting from the default --lambda-seg 0.1 and --lambda-adv-target1 0.002 (proportional to the original setting) and tuning the hyperparameters from there.
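As a rough illustration (not an official recipe), a multi-level LS-GAN run might be launched as below. The script name `train_gta2cityscapes_multi.py`, the snapshot directory, and the --lambda-adv-target2 value are assumptions for the sketch; only --gan LS, --lambda-seg 0.1, and --lambda-adv-target1 0.002 come from this thread, and both adversarial weights would likely need further tuning.

```bash
# Hypothetical sketch: multi-level training with the LS-GAN objective.
# Script name, snapshot path, and the --lambda-adv-target2 value are assumed;
# --gan LS, --lambda-seg 0.1, and --lambda-adv-target1 0.002 are the settings
# suggested in the reply above as a starting point for tuning.
python train_gta2cityscapes_multi.py \
    --gan LS \
    --lambda-seg 0.1 \
    --lambda-adv-target1 0.002 \
    --lambda-adv-target2 0.01 \
    --snapshot-dir ./snapshots/GTA2Cityscapes_multi_lsgan
```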