
truncation trick #62

Closed
KK666-AI opened this issue Mar 26, 2021 · 11 comments

Comments

@KK666-AI

Dear author,

I am reading your latent-sampling implementation in sample.py (function: sample_latents). Gaussian sampling is implemented as latents = torch.randn(batch_size, dim, device=device) / truncated_factor.

I notice that the above implementation is not the standard truncation trick, which is defined as:

The Truncation Trick is a latent sampling procedure for generative adversarial networks, where we sample from a truncated normal (values that fall outside a range are resampled to fall inside that range).
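For reference, the truncated-normal sampling described above can be sketched with scipy.stats.truncnorm. This is a minimal illustration, not the repository's code; the function name and signature are hypothetical:

```python
import torch
from scipy.stats import truncnorm

def sample_truncated_latents(batch_size, dim, threshold=1.0, device="cpu"):
    # Standard truncation trick: draw from a normal restricted to
    # [-threshold, threshold]; truncnorm internally guarantees all
    # samples fall inside that range.
    values = truncnorm.rvs(-threshold, threshold, size=(batch_size, dim))
    # truncnorm.rvs returns a NumPy float64 ndarray, so convert it to
    # a float32 torch tensor before feeding it to the generator.
    return torch.from_numpy(values).float().to(device)

z = sample_truncated_latents(4, 128, threshold=1.0)
```

Note this differs from dividing torch.randn output by a factor, which only rescales the distribution and still permits arbitrarily large values.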

@mingukkang
Collaborator

Thank you very much.

I will address your report ASAP.

Best,

Minguk

@KK666-AI
Author

Let me know if you have fixed it.

@mingukkang
Collaborator

Hi,

Thank you so much.

I have corrected the incorrect implementation of the truncation trick.

Please refer to "src/utils/sample.py" for more details.

@KK666-AI
Author

values = truncnorm.rvs(-threshold, threshold, size=size) is not defined, as seen in line ?

mingukkang added a commit that referenced this issue Mar 31, 2021
@KK666-AI
Author

Also, I think using latents_eps = (1-perturb)*latents + perturb*sample_normal(batch_size, dim, -1.0, device) at this line would be more reasonable, because such a definition maintains a valid distribution (its cumulative distribution function reaches 1).

What do you think?
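The suggested convex combination can be sketched as follows. This is a minimal illustration assuming perturb is a scalar in [0, 1]; sample_normal here is a hypothetical stand-in for the repository's helper, not its actual signature:

```python
import torch

def sample_normal(batch_size, dim, device="cpu"):
    # Stand-in for the repository's sample_normal helper: a plain,
    # untruncated standard-normal sampler (hypothetical signature).
    return torch.randn(batch_size, dim, device=device)

def perturb_latents(latents, perturb, device="cpu"):
    # Convex combination of the original latents and fresh noise:
    # the weights (1 - perturb) and perturb sum to 1, so the result
    # stays on a comparable scale to the original latent distribution.
    batch_size, dim = latents.shape
    noise = sample_normal(batch_size, dim, device=device)
    return (1 - perturb) * latents + perturb * noise

latents = torch.randn(4, 128)
latents_eps = perturb_latents(latents, perturb=0.1)
```

With perturb = 0 this reduces exactly to the original latents, unlike the additive form latents + perturb*noise, which inflates the variance for any nonzero perturb.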

@mingukkang
Collaborator

Hi,

Thank you for your suggestion.

I think that the method you suggested is reasonable.

However, I have already conducted the BigGAN-Mod/ContraGAN + ICR experiments using the "latents + perturb*sample_normal" formulation.

We have also decided to keep the StudioGAN code in its current state in order to exactly match the original ICR regularization.

Nice suggestion and thank you.

Best,

Minguk

@KK666-AI
Author

Yes, I see. Experiments on large-scale image datasets are very expensive.

@MTandHJ

MTandHJ commented Apr 14, 2021

Hi,

It seems that truncated_normal returns an ndarray rather than a tensor.
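To see the type mismatch concretely (a minimal sketch, not the repository's code; the threshold and shape are illustrative):

```python
import numpy as np
import torch
from scipy.stats import truncnorm

# truncnorm.rvs returns a NumPy float64 ndarray, not a torch.Tensor,
# so code that feeds the result to a torch model needs an explicit
# conversion (and typically a cast to float32 to match model weights).
values = truncnorm.rvs(-2.0, 2.0, size=(4, 128))
latents = torch.from_numpy(values).float()
```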

@KK666-AI
Author

@MTandHJ I reported a similar bug, as seen here.

@mingukkang
Collaborator

Thank you.

I have fixed the problem above.

Best,

Minguk

@KK666-AI
Author

@mingukkang this function may want to return latents_eps? See details at this line.
