
Differences between the paper and the code in the generation of the fake image #21

Closed

donydchen opened this issue Aug 23, 2018 · 3 comments

@donydchen commented Aug 23, 2018

Hi Albert, thanks for sharing your code.

In the paper, the fake image is generated by using the equation $I_{fake} = A \cdot C + (1 - A) \cdot I_{org}$, where $A$ is the attention mask, $C$ is the color regression, and $I_{org}$ is the input image.

While in the code, I find that the fake image is generated in a different way, namely
fake_imgs_masked = fake_img_mask * self._real_img + (1 - fake_img_mask) * fake_imgs
(see https://github.com/albertpumarola/GANimation/blob/master/models/ganimation.py#L228 and https://github.com/albertpumarola/GANimation/blob/master/models/ganimation.py#L275), so it should be $I_{fake} = A \cdot I_{org} + (1 - A) \cdot C$.
Would you please kindly clarify this? Thanks in advance.
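For concreteness, here is a minimal sketch of the attention-based composition as the code does it (assuming PyTorch; the helper name compose_fake and the tensor shapes are my own for illustration, only the blending formula comes from ganimation.py):

```python
import torch

def compose_fake(real_img, fake_img, attention_mask):
    # Where the mask is ~1 the output keeps the real image; where it
    # is ~0 it takes the generated colors. This mirrors the repo's
    # fake_imgs_masked = fake_img_mask * self._real_img
    #                    + (1 - fake_img_mask) * fake_imgs
    return attention_mask * real_img + (1 - attention_mask) * fake_img

# Toy usage: batch of 4 RGB images with a single-channel attention mask.
real_img = torch.rand(4, 3, 128, 128)        # input image I_org
fake_img = torch.rand(4, 3, 128, 128)        # color regression C
attention_mask = torch.rand(4, 1, 128, 128)  # attention A in [0, 1]
fake_imgs_masked = compose_fake(real_img, fake_img, attention_mask)
```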

@albertpumarola (Owner) commented Aug 23, 2018

You are right, it is an error in the equation. I will update the arXiv. Thank you for pointing this out.

@ZHANG-SHI-CHANG


Brother, when you reproduced the experiments, did you run into the problem of the attention tending to 1? In my runs, the two constraint losses from the paper have no effect, and the attention still tends to 1.

@albertpumarola (Owner) commented Nov 9, 2018

If it is tending to 1, supposing that the dataset is correct, you can try increasing the lambda of this constraint. Be careful tuning it: if you increase it too much, A will tend to 0. When I was tuning it, I was aiming for a lambda that constrained A around 0.5.
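A minimal sketch of the two attention regularizers being discussed, for anyone hitting the same saturation issue (assuming PyTorch; the names lambda_mask / lambda_mask_smooth and the example values are hypothetical, not read from the repo): a magnitude penalty that pulls A away from saturating at 1, and a total-variation penalty that keeps the mask spatially smooth. Raising lambda_mask is the "increasing the lambda" suggested above.

```python
import torch

def attention_regularizers(A):
    # Magnitude penalty: the mean of A. Pushing this down keeps A
    # from saturating at 1 (push too hard and A collapses to 0).
    loss_mag = torch.mean(A)
    # Total-variation penalty: absolute differences between
    # neighbouring pixels, keeping the attention mask smooth.
    loss_tv = torch.mean(torch.abs(A[:, :, 1:, :] - A[:, :, :-1, :])) + \
              torch.mean(torch.abs(A[:, :, :, 1:] - A[:, :, :, :-1]))
    return loss_mag, loss_tv

# Hypothetical weights; per the advice above, raise lambda_mask if A
# tends to 1, lower it if A tends to 0, aiming for A around 0.5.
lambda_mask, lambda_mask_smooth = 0.1, 1e-5
A = torch.sigmoid(torch.randn(4, 1, 128, 128, requires_grad=True))
loss_mag, loss_tv = attention_regularizers(A)
loss = lambda_mask * loss_mag + lambda_mask_smooth * loss_tv
loss.backward()  # gradients flow back through the attention mask
```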
