
Two step adversarial loss #1

Open
Dannynis opened this issue Aug 3, 2020 · 8 comments


@Dannynis

Dannynis commented Aug 3, 2020

Hey, am I missing something, or is the second-step adversarial loss missing from this implementation?

@keishatsai

Hi, I am also wondering about that. I think the second-step adversarial loss is indeed missing from the implementation.

@Jeffery-zhang-nfls
Collaborator

Yes, the second-step adversarial loss is indeed missing.
I have added it in the branch "2nd-step-adverserial-loss".

@keishatsai

@Jeffery-zhang-nfls Thank you for the contribution! Will look at this later.

@Dannynis
Author

Perhaps I misunderstood the paper, but isn't the discriminator supposed to be updated as well in the second step?

@keishatsai

Hi @Dannynis and @Jeffery-zhang-nfls,

I am also wondering about that... I thought this second step only applies when updating the discriminator loss?
Or am I misunderstanding as well?

@keishatsai

keishatsai commented Sep 26, 2020

Hi all,
Based on the paper, I thought the two-step loss applies only to the discriminator, but in my experiments the discriminator then became too strong.
Anyway, for anyone who wants to try the two-step loss, the following is for reference.

Here I only show one direction (the A side) for clarity. You can write the other direction yourself, and remember to add these terms to the final generator and discriminator losses, too.

Generator 2-step loss, part 1:

```python
d_cycle_A = self.discriminator_A(cycle_A)
```

Generator 2-step loss, part 2:

```python
generator_2step_ABA = torch.mean((1 - d_cycle_A) ** 2)
```

Discriminator 2-step loss, part 1:

```python
two_step_A = self.generator_B2A(self.generator_A2B(real_A))
d_two_step_A = self.discriminator_A(two_step_A)
```

Discriminator 2-step loss, part 2:

```python
d_loss_A_two_step = (torch.mean((d_real_A - 1) ** 2)
                     + torch.mean(d_two_step_A ** 2))
```
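For anyone who wants to try these terms outside the repo, here is a minimal self-contained sketch of the same two losses. The tiny `nn.Linear` modules and shapes are made up stand-ins for the real networks, just so the computation runs end to end:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical tiny stand-ins for the real generators/discriminator,
# only so the loss terms are runnable; the real modules are full networks.
generator_A2B = nn.Linear(8, 8)
generator_B2A = nn.Linear(8, 8)
discriminator_A = nn.Sequential(nn.Linear(8, 1), nn.Sigmoid())

real_A = torch.randn(4, 8)

# Two-step forward pass: A -> B -> A
two_step_A = generator_B2A(generator_A2B(real_A))
d_two_step_A = discriminator_A(two_step_A)
d_real_A = discriminator_A(real_A)

# Generator side (LSGAN form): push D_A of the cycled output toward 1
generator_2step_ABA = torch.mean((1 - d_two_step_A) ** 2)

# Discriminator side: real A toward 1, two-step fake toward 0
d_loss_A_two_step = (torch.mean((d_real_A - 1) ** 2)
                     + torch.mean(d_two_step_A ** 2))
```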

@saurabh-kataria

I think we are supposed to introduce two more networks, D_X' and D_Y'. The paper clearly says in Sec. 3.1 that "we introduce an additional discriminator". The code currently re-uses the main discriminators.
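If I read that correctly, the wiring would look something like the sketch below: the existing D_A scores one-step fakes, while a separate D_A' scores the two-step (cycled) outputs. All module names and sizes here are hypothetical placeholders, not the repo's actual code:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in modules; only the extra-discriminator wiring matters.
generator_A2B = nn.Linear(8, 8)
generator_B2A = nn.Linear(8, 8)
discriminator_A = nn.Sequential(nn.Linear(8, 1), nn.Sigmoid())   # D_A: one-step fakes
discriminator_A2 = nn.Sequential(nn.Linear(8, 1), nn.Sigmoid())  # D_A': two-step fakes

real_A = torch.randn(4, 8)
real_B = torch.randn(4, 8)

fake_A = generator_B2A(real_B)                   # one-step conversion B -> A
cycle_A = generator_B2A(generator_A2B(real_A))   # two-step conversion A -> B -> A

# First adversarial loss (LSGAN form) on the main discriminator D_A
d_loss_A_first = (torch.mean((discriminator_A(real_A) - 1) ** 2)
                  + torch.mean(discriminator_A(fake_A) ** 2))

# Second adversarial loss on the additional discriminator D_A'
d_loss_A_second = (torch.mean((discriminator_A2(real_A) - 1) ** 2)
                   + torch.mean(discriminator_A2(cycle_A) ** 2))
```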

@jackaduma
Owner

I've fixed the problem in the master branch. Thanks, all of you.
