In classical GAN training, we avoid letting loss_D influence the weight updates of the G network (because loss_D is the reverse of G's training objective). However, here loss_D accumulates gradients into the G network, and the following self.model_optimizer.step() then updates G's weights. Could you please comment on this? Thanks!
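For context, the classical pattern referred to here can be sketched with two hypothetical stand-in modules (not the repo's actual networks): the generator output is detached before computing loss_D, so backpropagating loss_D leaves G untouched.

```python
import torch
import torch.nn as nn

# Hypothetical tiny stand-ins for G and D, just for illustration
G = nn.Linear(4, 4)
D = nn.Linear(4, 1)

z = torch.randn(8, 4)
fake = G(z)

# Classical GAN discriminator step: detach G's output so that
# loss_D.backward() writes no gradient into G's parameters.
loss_D = D(fake.detach()).mean()
loss_D.backward()

g_got_grad = any(p.grad is not None for p in G.parameters())
d_got_grad = all(p.grad is not None for p in D.parameters())
```

With the detach in place, only the discriminator's parameters receive gradients from loss_D.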
Here we use the discriminator as a regularization technique. Unlike a GAN, where the discriminator's only task is to distinguish between real and fake images and the generator indirectly updates itself to fool the discriminator, here we have an autoencoder that outputs a layout, and the discriminator ensures that this layout looks similar to a real layout of an autonomous driving scene.
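The regularization effect can be sketched as follows, again with hypothetical stand-in modules rather than the repo's networks: because the layout is not detached, loss_D backpropagates through the autoencoder, so the subsequent optimizer step moves the autoencoder's weights in a direction shaped by the discriminator.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: ae for the layout autoencoder, disc for the discriminator
ae = nn.Linear(4, 4)
disc = nn.Linear(4, 1)

x = torch.randn(8, 4)
layout = ae(x)

# No detach here: loss_D backpropagates through the autoencoder as well,
# so the optimizer step on ae's parameters is shaped by the discriminator,
# i.e. the discriminator acts as a regularizer on the predicted layout.
loss_D = disc(layout).mean()
loss_D.backward()

ae_got_grad = all(p.grad is not None for p in ae.parameters())
```

This is exactly the difference the question points at: omitting the detach is intentional here, since the adversarial signal is meant to flow into the layout network.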
https://github.com/hbutsuak95/monolayout/blob/5339d5f7e8f7fbc8272bd96abba16d6128b42098/train.py#L252