
Periodic Activation Functions #10

Closed
betterze opened this issue Jul 6, 2021 · 5 comments

@betterze

betterze commented Jul 6, 2021

Dear Ivan,

Thank you for your great work. I really like it.

Have you tried using the periodic activation functions from SIREN? You mention the Fourier features from SIREN in the paper.

Thank you for your help.

Best Wishes,

Alex

@universome
Owner

universome commented Jul 8, 2021

Hi! We did use them in our original implementation. In the current one, we switched to manually constructed Fourier features instead of the static-random or predicted-from-latent-code ones (as suggested in the SIREN/Fourier Features papers), because 1) they are "guaranteed" to cover all reasonable directions and frequencies, and 2) they are faster, since they have a much smaller dimensionality (for predicted Fourier features the situation is even worse, because they cannot be cached even at test time).

However, all of the above ways to encode coordinate positions are supported in the current repo: one just needs to specify the desired embedding size for each type of positional encoding: https://github.com/universome/inr-gan/blob/master/src/training/layers.py#L113-L117. A rough sketch of the three embedding types is below.
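To make the distinction concrete, here is an illustrative PyTorch sketch of the three embedding types (this is not the repo's actual code; the function names and the `basis` arguments are hypothetical):

```python
import math
import torch

def const_fourier_embed(coords, num_freqs=8):
    # Manually constructed Fourier features: fixed, log-spaced frequencies
    # along the coordinate axes, so all scales are covered by construction.
    # coords: [batch, n_points, 2], values in [0, 1].
    freqs = 2.0 ** torch.arange(num_freqs, dtype=coords.dtype)  # [F]
    angles = math.pi * coords.unsqueeze(-1) * freqs             # [B, N, 2, F]
    angles = angles.flatten(2)                                  # [B, N, 2*F]
    return torch.cat([angles.sin(), angles.cos()], dim=-1)      # [B, N, 4*F]

def random_fourier_embed(coords, basis):
    # Static random Fourier features (Tancik et al.): project through a
    # fixed random matrix `basis` of shape [2, D]. Since `basis` never
    # changes, the embedding of a fixed coordinate grid can be cached.
    angles = 2 * math.pi * coords @ basis                       # [B, N, D]
    return torch.cat([angles.sin(), angles.cos()], dim=-1)      # [B, N, 2*D]

def predicted_fourier_embed(coords, basis_per_sample):
    # Latent-predicted Fourier features: each sample gets its own basis of
    # shape [B, 2, D] (e.g. produced by a hypernetwork from the latent
    # code), so the embedding cannot be cached even at test time.
    angles = 2 * math.pi * torch.einsum('bnc,bcd->bnd', coords, basis_per_sample)
    return torch.cat([angles.sin(), angles.cos()], dim=-1)      # [B, N, 2*D]
```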

@betterze
Author

betterze commented Jul 8, 2021

Hi, Ivan,

Thank you for your detailed answer. I am new to this field, so correct me if I am wrong: SIREN is not just about encoding coordinate positions in terms of Fourier features, but also about using 'sine' as the activation function. Did you try using the 'sine' activation as described in SIREN? I just want to understand whether the 'sine' activation can work on a large dataset (FFHQ) for implicit neural representations with a hypernetwork.

And a more general question: for implicit neural representations with a hypernetwork (or continuous images), what is the advantage over a standard generator (StyleGAN2-ADA), apart from fast inference and super-resolution?

I am really new to this field, so your answers are very helpful. Thank you again.

Best Wishes,

Alex

@universome
Owner

Ah, I see what you mean. You are asking whether we tried using sine activations everywhere throughout our INR decoder, and not only for the positional embeddings. We did try this in our preliminary experiments, but found that it does not perform well and also makes the whole pipeline more complicated (it becomes one more thing you need to keep in mind when debugging, and we already had enough of those). Besides, the Fourier Features paper showed that you can use sine activations only in the first layer of an INR (i.e. to encode the coordinates); a minimal sketch contrasting the two variants is below. We didn't provide any ablations on the activation function in the final paper, but there is one in the parallel work by Anokhin et al. (see Table 3), where it decreased the scores.
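For reference, here is a minimal, hypothetical PyTorch sketch of the two options (it omits SIREN's specialized weight initialization, which matters in practice, and uses made-up layer sizes):

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    # A SIREN-style layer: linear map followed by sin(w0 * x).
    # SIREN applies this in every layer (and relies on a special
    # uniform init, omitted here, to train stably).
    def __init__(self, in_dim, out_dim, w0=30.0):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.w0 = w0

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

# Option 1: full SIREN, periodic activations everywhere.
siren_inr = nn.Sequential(
    SineLayer(2, 256),
    SineLayer(256, 256),
    nn.Linear(256, 3),  # (x, y) -> RGB
)

# Option 2: sine only in the first layer, i.e. as a coordinate
# encoding, with ordinary activations in the rest of the network.
fourier_first_inr = nn.Sequential(
    SineLayer(2, 256),  # the only periodic part
    nn.Linear(256, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 3),
)
```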

The main advantage of INR-based decoders is that they are supposed to understand the underlying image geometry much better. Check the GIFs at the end of this old article here, and the recent, super cool INR-based GAN by Karras et al. here.

@betterze
Author

Hi, Ivan,

Thank you for your detailed reply. The literature you pointed me to is very helpful.

Best Wishes,

Alex

@universome
Owner

Feel free to ask if you have any further questions!
