I noticed that you use `kld = torch.mean(-z_log_stddev + 0.5 * (torch.exp(2 * z_log_stddev) + torch.pow(z_mean, 2) - 1))` in UPDATE GENERATOR, but I don't understand why you chose this as part of your loss function. It also seems that it was not mentioned in the original paper. Could you please explain its purpose here?
Apart from this, `z_log_stddev` and `z_mean` are simply obtained from two different `Linear+LeakyReLU` layers. Emmm... why did you use a `Linear+LeakyReLU` layer rather than computing the mean and std directly?
Thanks for your help~
@MapleSpirit It is actually mentioned in the original paper. The authors used the term 'text embedding augmentation', which is the same as the conditioning augmentation in StackGAN. Also, I personally received part of the original implementation from the authors, so I can't comment further on this specific implementation choice.
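For context, here is a minimal sketch of how conditioning augmentation typically works and how the quoted `kld` term arises. This is not the repository's actual code; the module name, dimensions, and LeakyReLU slope are illustrative assumptions. The two `Linear+LeakyReLU` heads *predict* the mean and log-std of a Gaussian over the conditioning code (rather than computing statistics of the embedding directly), and the KL term regularizes that Gaussian toward a standard normal:

```python
import torch
import torch.nn as nn

class ConditioningAugmentation(nn.Module):
    """StackGAN-style conditioning augmentation (illustrative sketch).

    Maps a text embedding to a sampled conditioning code c together with
    a KL-divergence penalty. Dimensions and activation slope are assumed,
    not taken from the repository.
    """
    def __init__(self, embed_dim=1024, latent_dim=128):
        super().__init__()
        # Two separate heads predict the parameters of the Gaussian;
        # this is why Linear+LeakyReLU appears twice in the question.
        self.mean_head = nn.Sequential(
            nn.Linear(embed_dim, latent_dim), nn.LeakyReLU(0.2))
        self.log_std_head = nn.Sequential(
            nn.Linear(embed_dim, latent_dim), nn.LeakyReLU(0.2))

    def forward(self, text_embedding):
        z_mean = self.mean_head(text_embedding)
        z_log_stddev = self.log_std_head(text_embedding)
        # Reparameterization trick: c = mu + sigma * eps, eps ~ N(0, I),
        # so gradients flow through the sampling step.
        eps = torch.randn_like(z_mean)
        c = z_mean + torch.exp(z_log_stddev) * eps
        # KL(N(mu, sigma^2) || N(0, I)), averaged over batch and
        # dimensions; term-by-term this matches the quoted kld expression.
        kld = torch.mean(
            -z_log_stddev
            + 0.5 * (torch.exp(2 * z_log_stddev)
                     + torch.pow(z_mean, 2) - 1))
        return c, kld

ca = ConditioningAugmentation(embed_dim=16, latent_dim=8)
c, kld = ca(torch.randn(4, 16))
print(c.shape, kld.item())
```

Adding `kld` (scaled by a small weight) to the generator loss keeps the conditioning distribution smooth, which is the stated motivation for conditioning augmentation in StackGAN. Note the per-element KL expression is minimized at mu = 0, log-std = 0, so `kld` is always non-negative.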