Hello, I have a question that has been puzzling me. In GAN training, referring to https://github.com/rosinality/stylegan2-pytorch/blob/master/train.py, rosinality disables gradient updates for the Generator while training the Discriminator:

requires_grad(generator, False)
requires_grad(discriminator, True)

Likewise, while training the Generator, gradient updates for the Discriminator are disabled:

requires_grad(generator, True)
requires_grad(discriminator, False)

In your code I only found gradient control for the Discriminator, and no corresponding toggle for the Generator:

for p in self.net_d.parameters(): p.requires_grad = False

and

for p in self.net_d.parameters(): p.requires_grad = True

1. Does this mean the Generator always receives gradients (even while the Discriminator is being trained)? If so, is that equivalent to back-propagating through the Generator twice for every batch of data?
2. If the Generator's gradients are in fact controlled as well, where is that implemented?
When we update D, the output of G has been passed through detach(), so no gradient propagates back to G.
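For illustration, here is a minimal sketch of that pattern (placeholder networks, optimizers, and losses, not the actual training code of this repository): during the G update, the Discriminator's parameters have requires_grad set to False; during the D update they are re-enabled, and the Generator's output is detached so no gradient flows back into G even though G's parameters stay trainable.

```python
import torch
import torch.nn as nn

# Placeholder generator / discriminator (assumed shapes, for illustration only).
net_g = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
net_d = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer_g = torch.optim.Adam(net_g.parameters(), lr=1e-4)
optimizer_d = torch.optim.Adam(net_d.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(lq, gt):
    # ---- update G: freeze D so only G receives gradients ----
    for p in net_d.parameters():
        p.requires_grad = False
    optimizer_g.zero_grad()
    fake = net_g(lq)
    loss_g = bce(net_d(fake), torch.ones(lq.size(0), 1))
    loss_g.backward()
    optimizer_g.step()

    # ---- update D: unfreeze D; detach G's output so no gradient reaches G ----
    for p in net_d.parameters():
        p.requires_grad = True
    optimizer_d.zero_grad()
    loss_real = bce(net_d(gt), torch.ones(gt.size(0), 1))
    loss_fake = bce(net_d(fake.detach()), torch.zeros(lq.size(0), 1))  # detach() cuts the graph to G
    (loss_real + loss_fake).backward()
    optimizer_d.step()

train_step(torch.randn(4, 64), torch.randn(4, 64))
```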
I see now. Thank you very much for the explanation!