
rgb info in the generator is not collected in the correct way? #18

Closed
XavierXiao opened this issue Jun 26, 2023 · 8 comments

Comments

@XavierXiao

Hi! Thanks for the implementation, it is great! One possible issue I noticed is that the rgb images in each generator block are not collected correctly. If I understand correctly, in this line, rgbs should collect rgb rather than layer_rgb?
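A minimal sketch of the suspected bug, using plain floats in place of tensors (the names `collect_rgbs`, `blocks`, and `to_rgbs` are illustrative, not the repo's actual identifiers): in a StyleGAN-style generator, each block contributes a per-layer rgb that is added to a running image, and the multi-scale outputs should collect the accumulated image, not the raw per-layer contribution.

```python
def collect_rgbs(x, blocks, to_rgbs):
    rgbs = []
    rgb = 0.0
    for block, to_rgb in zip(blocks, to_rgbs):
        x = block(x)
        layer_rgb = to_rgb(x)     # this block's rgb contribution
        rgb = rgb + layer_rgb     # accumulated image so far
        rgbs.append(rgb)          # suspected bug: was appending layer_rgb
    return rgbs

# toy stand-ins for generator blocks and toRGB heads
blocks = [lambda v: v + 1, lambda v: v * 2]
to_rgbs = [lambda v: v * 0.5, lambda v: v * 0.25]
print(collect_rgbs(1.0, blocks, to_rgbs))  # -> [1.0, 2.0]
```

With `layer_rgb` collected instead, the second entry would be 1.0 rather than the accumulated 2.0, so downstream multi-scale losses would see partial images.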

lucidrains added a commit that referenced this issue Jun 26, 2023
@lucidrains (Owner)

@XavierXiao oh yes 🤦‍♂️ thank you!

@XavierXiao (Author)

Great! Thanks! BTW, I see that you uploaded some code for the upsampler, but it looks like a simple Unet architecture with ResNet blocks. Do you plan to build the unet_upsampler with GigaGAN blocks soon?

@lucidrains (Owner) commented Jun 27, 2023

@XavierXiao i'm kind of confused by what they used for upsampling

they have no architectural diagram, and only said it was a traditional unet, so that's what i'm going to start off with, paired with the GigaGAN discriminator. i was going to modify the unet to also output the rgb, like the style/gigagan generator

@lucidrains (Owner)

@XavierXiao open to pointers and suggestions, if you have any insights

@XavierXiao (Author)

Thanks! Yeah, I agree the info in the paper is insufficient, especially for the upsampler part. But my two cents on the potential architecture:

  1. Since in table A.2 the configuration of the upsampler model has a mapping network, a w dimension, etc., there is definitely a latent variable in the upsampler, similar to the base model. So the mapping network taking the concatenation of noise and the global text embedding as input, as well as the adaptive filter with style modulation, should be there. However, the main text mentions "residual blocks", so I guess it is more like a ResNet block, but with the conv layers replaced by adaptive-kernel convs.
  2. Since the multi-scale loss is also enabled for the text-to-image upsampler, there should be multi-scale outputs (i.e., toRGB layers). It makes no sense to have toRGB layers at resolutions lower than the input, so I guess you can put toRGB layers on the last few Unet blocks, where the resolution is larger than or equal to the input resolution.
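Point 2 can be sketched as a small resolution filter (the function name `torgb_resolutions` is hypothetical, not from the repo): given the upsampler's input resolution and the resolutions of its up blocks, attach toRGB heads only where the block resolution is at least the input resolution, since lower-resolution outputs cannot supervise the upsampling.

```python
def torgb_resolutions(input_res, up_block_resolutions):
    # keep only up-block resolutions >= the input resolution
    return [r for r in up_block_resolutions if r >= input_res]

# e.g. a 64 -> 512 upsampler whose up path passes through these resolutions
print(torgb_resolutions(64, [16, 32, 64, 128, 256, 512]))  # -> [64, 128, 256, 512]
```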

@lucidrains (Owner)

@XavierXiao yes what you say makes sense Xavier! will build it exactly how you said!

@lucidrains (Owner)

@XavierXiao b22ecfc let me know if we are on the same page after this commit

@XavierXiao (Author)

Great! Looks good to me! The only inconsistency with the paper that I noticed is that it mentions (in 3.4) that the Unet is asymmetric (i.e., 3 down blocks and 6 up blocks for an 8x upsampler), and the skip connections of the Unet are, of course, only at matched resolutions of up and down blocks. Here your Unet seems to be symmetric. Not sure if it matters though.
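The asymmetry above follows from simple arithmetic, sketched below (the helper name `up_block_count` is illustrative): with d down blocks, an f-times upsampler must climb back up d levels plus log2(f) extra levels, so it needs d + log2(f) up blocks, and skip connections only exist for the d matched resolutions.

```python
import math

def up_block_count(down_blocks, upsample_factor):
    # the up path retraces the down path, plus log2(factor) extra levels
    return down_blocks + int(math.log2(upsample_factor))

print(up_block_count(3, 8))  # the paper's example: 3 down, 6 up for 8x
```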
