
Why minimize l1(\hat{x_0}, x_0) + l1(\hat{x_1}, x_0) when optimizing the aux model? #14

Open
caisikai opened this issue Nov 8, 2022 · 0 comments


caisikai commented Nov 8, 2022

Hi, keonlee.
Thanks for sharing the code!
I found that when training the aux model, we get \hat{x_0} from G, then diffuse it to \hat{x_1}, and finally obtain a prediction list [\hat{x_0}, \hat{x_1}]. When calculating the mel loss, the L1 losses of both predictions against the target are added. This confuses me: I understand l1(\hat{x_0}, x_0), but why not l1(\hat{x_1}, x_1)?
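To make sure I'm reading it right, here is a minimal PyTorch-style sketch of the loss as I understand it; `aux_model`, `q_sample`, and the variable names are placeholders for the actual code in the repo, not its real identifiers:

```python
import torch
import torch.nn.functional as F

def aux_mel_loss(aux_model, q_sample, cond, x0):
    """Sketch of the aux-model mel loss as described above (names are hypothetical)."""
    x0_hat = aux_model(cond)      # \hat{x_0}: mel predicted directly by the aux model G
    t = torch.ones(x0.size(0), dtype=torch.long, device=x0.device)
    x1_hat = q_sample(x0_hat, t)  # \hat{x_1}: one forward-diffusion step applied to \hat{x_0}
    # Both predictions are compared to the clean target x_0:
    #   l1(\hat{x_0}, x_0) + l1(\hat{x_1}, x_0)
    # My question: why not compare \hat{x_1} to x_1 = q_sample(x0, t) instead?
    return F.l1_loss(x0_hat, x0) + F.l1_loss(x1_hat, x0)
```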

@caisikai caisikai changed the title Which modules of G are frozen when training G in GAN? Why minimize l1(\hat{x_0}, x_0) + l1(\hat{x_1}, x_0) when optimizing the aux model? Nov 9, 2022