Training style encoder #5

Open
aljox opened this issue Aug 21, 2020 · 0 comments

aljox commented Aug 21, 2020

Shouldn't we update the style encoder when g_train_step() is called with x_ref, rather than with z_trg?

 if z_trgs is not None:
      f_train_variable = self.mapping_network.trainable_variables
      e_train_variable = self.style_encoder.trainable_variables

      f_gradient = g_tape.gradient(g_loss, f_train_variable)
      e_gradient = g_tape.gradient(g_loss, e_train_variable)

      self.f_optimizer.apply_gradients(zip(f_gradient, f_train_variable))
      self.e_optimizer.apply_gradients(zip(e_gradient, e_train_variable))

Should be:

 if z_trgs is not None:
      f_train_variable = self.mapping_network.trainable_variables
      f_gradient = g_tape.gradient(g_loss, f_train_variable)
      self.f_optimizer.apply_gradients(zip(f_gradient, f_train_variable))
 else:
      e_train_variable = self.style_encoder.trainable_variables
      e_gradient = g_tape.gradient(g_loss, e_train_variable)
      self.e_optimizer.apply_gradients(zip(e_gradient, e_train_variable))
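
For reference, a minimal standalone sketch of what a g_train_step() with this branching could look like. It is only an illustration under assumptions: compute_g_loss, generator and g_optimizer are placeholder names for the rest of the training step and are not taken from the repository, and a persistent tape is used because gradients are taken for more than one variable set.

 import tensorflow as tf

 # Hypothetical sketch of the proposed branching; placeholder names
 # (compute_g_loss, generator, g_optimizer) are assumptions, not repo code.
 def g_train_step(generator, mapping_network, style_encoder,
                  g_optimizer, f_optimizer, e_optimizer,
                  compute_g_loss, x_real, y_org, y_trg,
                  z_trgs=None, x_refs=None):
      # persistent=True because gradient() is called for several variable sets
      with tf.GradientTape(persistent=True) as g_tape:
           g_loss = compute_g_loss(x_real, y_org, y_trg,
                                   z_trgs=z_trgs, x_refs=x_refs)

      # the generator itself is updated in both branches
      g_train_variable = generator.trainable_variables
      g_gradient = g_tape.gradient(g_loss, g_train_variable)
      g_optimizer.apply_gradients(zip(g_gradient, g_train_variable))

      if z_trgs is not None:
           # latent branch: only the mapping network receives an update
           f_train_variable = mapping_network.trainable_variables
           f_gradient = g_tape.gradient(g_loss, f_train_variable)
           f_optimizer.apply_gradients(zip(f_gradient, f_train_variable))
      else:
           # reference branch: only the style encoder receives an update
           e_train_variable = style_encoder.trainable_variables
           e_gradient = g_tape.gradient(g_loss, e_train_variable)
           e_optimizer.apply_gradients(zip(e_gradient, e_train_variable))

      del g_tape  # release the persistent tape
      return g_loss

With this split, the style encoder only receives gradients when x_refs is passed, which is the behaviour suggested above.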