
gan training #22 (Closed)

gongshichina opened this issue Nov 21, 2021 · 1 comment

Comments

@gongshichina

https://github.com/hbutsuak95/monolayout/blob/5339d5f7e8f7fbc8272bd96abba16d6128b42098/train.py#L252

In classical GAN training, we prevent loss_D from influencing the weight updates of the generator G, because loss_D is the opposite of G's training objective. Here, however, loss_D also accumulates gradients in the G network, and those weights are then updated by the subsequent self.model_optimizer.step().
Could you please comment on this? Thanks!
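For reference, here is a minimal sketch of the classical pattern I am describing, where the generator output is detached before computing loss_D so the discriminator loss never reaches G's weights. The modules and names below are toy stand-ins, not this repository's code:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))   # toy generator
D = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))    # toy discriminator
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(4, 8)   # a batch of "real" samples
z = torch.randn(4, 16)     # latent noise
fake = G(z)

# Discriminator step: fake is detached, so loss_D cannot reach G's weights.
loss_D = bce(D(real), torch.ones(4, 1)) + bce(D(fake.detach()), torch.zeros(4, 1))
opt_D.zero_grad()
loss_D.backward()
opt_D.step()

# Generator step: only the adversarial loss on the non-detached fake updates G.
loss_G = bce(D(fake), torch.ones(4, 1))
opt_G.zero_grad()
loss_G.backward()
opt_G.step()
```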

@manila95 (Owner) commented May 5, 2022

Here we are using the discriminator as a regularization technique. In a standard GAN, the discriminator's only task is to distinguish between real and fake images, and the generator updates itself indirectly to fool it. Here we instead have an autoencoder that outputs a layout, and the discriminator ensures that this layout looks similar to a real layout of an autonomous driving scene.
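A rough sketch of this regularizer idea follows, assuming a supervised layout loss plus a small weighted adversarial term that the same optimizer step backpropagates through the layout network. The module names, losses, and weights below are illustrative only, not the repository's actual train.py:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

layout_net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))   # stand-in autoencoder
discriminator = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))   # layout discriminator

model_optimizer = torch.optim.Adam(layout_net.parameters(), lr=1e-4)
disc_optimizer = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
lambda_adv = 0.01   # small weight: the adversarial term acts as a regularizer

features = torch.randn(4, 64)    # stand-in encoder features
gt_layout = torch.randn(4, 64)   # stand-in ground-truth layout
pred_layout = layout_net(features)

# Supervised layout loss plus a weighted adversarial term; the adversarial term
# nudges the predicted layout toward the distribution of real layouts.
loss_layout = F.mse_loss(pred_layout, gt_layout)
loss_adv = bce(discriminator(pred_layout), torch.ones(4, 1))
loss = loss_layout + lambda_adv * loss_adv

model_optimizer.zero_grad()
loss.backward()
model_optimizer.step()

# The discriminator itself is trained separately on real vs. predicted layouts.
loss_D = bce(discriminator(gt_layout), torch.ones(4, 1)) + \
         bce(discriminator(pred_layout.detach()), torch.zeros(4, 1))
disc_optimizer.zero_grad()
loss_D.backward()
disc_optimizer.step()
```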

manila95 closed this as completed May 5, 2022