Hi @rdevon, I have a question about updating the discriminator's weights in prior matching. Updating the discriminator's weights to maximize the prior loss from the paper is done here, and Z_Q is detached so the encoder's weights will not be updated.
But when you want to update the encoder's weights to minimize the loss from the paper, you do that here, and Q_samples are computed using the discriminator in the self.score function. So I can't see how the discriminator's weights avoid minimizing the loss from the paper in this case (which would be wrong, since the discriminator wants to maximize the loss)?
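For the first half of the question, the detach step can be illustrated with a minimal PyTorch sketch (the module shapes and names here are illustrative, not the actual cortex code): detaching the encoder output before scoring it blocks gradients from the discriminator loss from ever reaching the encoder.

```python
import torch

# Stand-ins for the encoder and discriminator (shapes are arbitrary).
encoder = torch.nn.Linear(4, 2)
discriminator = torch.nn.Linear(2, 1)

x = torch.randn(3, 4)
z_q = encoder(x)

# Discriminator update: score the detached encoder output.
d_loss = discriminator(z_q.detach()).mean()
d_loss.backward()

print(encoder.weight.grad)        # None: no gradient reached the encoder
print(discriminator.weight.grad is not None)  # True: discriminator got gradients
```

The detach cuts the autograd graph, so only the discriminator's parameters accumulate gradients from this loss.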
So this is not entirely transparent because of how cortex works (apologies, as this was a framework I worked on a while ago and couldn't get enough help to keep supporting). When the model "adds losses", as it does at the end of that routine function, those losses only apply to the parameters of the models named by the keys used in that call. So when I say "self.add_losses(encoder=some_loss)", even if some_loss depends on the parameters of some other network / model, those parameters won't change according to some_loss unless I also say "discriminator=some_loss".
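The mechanism described above can be sketched as follows (a hedged approximation, assuming per-module optimizers; the names are illustrative and not the actual cortex API): backpropagating the encoder loss does fill gradient buffers in the discriminator, but only the optimizer keyed by the loss takes a step, so the discriminator's weights stay put.

```python
import torch

# One module and one optimizer per named model, as cortex effectively does.
encoder = torch.nn.Linear(4, 2)
discriminator = torch.nn.Linear(2, 1)
optimizers = {
    'encoder': torch.optim.SGD(encoder.parameters(), lr=0.1),
    'discriminator': torch.optim.SGD(discriminator.parameters(), lr=0.1),
}

x = torch.randn(3, 4)
enc_loss = -discriminator(encoder(x)).mean()  # depends on BOTH networks

# Equivalent of add_losses(encoder=enc_loss): backprop reaches both
# networks, but only the encoder's optimizer steps.
d_before = discriminator.weight.clone()
optimizers['encoder'].zero_grad()
optimizers['discriminator'].zero_grad()
enc_loss.backward()
optimizers['encoder'].step()

print(torch.equal(discriminator.weight, d_before))  # True: unchanged
```

The discriminator's stale gradients are harmless because its own optimizer zeros them before the discriminator's update.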