Q: Are there any specifications on the image pixel values for the decoder? #33

Hi, do you know if there are any strict restrictions on the image input to the decoder? I remember it is mentioned somewhere that it should be in [-1, +1], but can we also use other values like [0, 1]? Then again, since we are adding random normal noise with mean 0, I guess not?
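For context on the mean-zero noise point: in the DDPM forward process, noise drawn from N(0, I) is mixed into the image, so a range symmetric around zero is the natural fit. A minimal illustrative sketch (not the repository's exact code; `q_sample` and `alpha_bar_t` are hypothetical names here):

```python
import torch

def q_sample(x_start: torch.Tensor, alpha_bar_t: torch.Tensor) -> torch.Tensor:
    # DDPM forward process: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps, eps ~ N(0, I).
    # As abar_t -> 0, x_t approaches pure standard normal noise centered at 0,
    # which is why x_start is conventionally normalized to [-1, 1] rather than [0, 1].
    noise = torch.randn_like(x_start)
    return alpha_bar_t.sqrt() * x_start + (1.0 - alpha_bar_t).sqrt() * noise
```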
Comments
@xiankgx yea, this repository does it from -1 to +1, but i've seen some generative models out there (back when i trained a lot of GANs) that do it from 0 to 1. i don't actually know if i've ever read a paper that did a proper comparison between the two
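For concreteness, the two conventions differ only by an affine map; a minimal sketch of the usual conversion helpers (the names here are illustrative, though the repository defines similar ones):

```python
import torch

def normalize_neg_one_to_one(img: torch.Tensor) -> torch.Tensor:
    # [0, 1] -> [-1, 1], the convention this repository uses for the decoder
    return img * 2 - 1

def unnormalize_zero_to_one(img: torch.Tensor) -> torch.Tensor:
    # [-1, 1] -> [0, 1], the inverse map
    return (img + 1) * 0.5
```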
We should be very careful when passing images to various places: different CLIP implementations expect different things, and the image is also used as the prediction target or x_start for the decoder.
@xiankgx yes! we definitely need to keep an eye on normalization
All CLIP implementations I know of provide a preprocess function; we should use it to convert from image to tensor. However, I wouldn't recommend doing any CLIP forward pass for training, and would instead use precomputed CLIP embeddings. The prior training takes CLIP text and CLIP image embeddings as input. The only time we may want to do a CLIP forward pass is at inference time. A sketch of the precompute step is below.
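A minimal sketch of precomputing embeddings with the openai/CLIP package, whose bundled `preprocess` handles resizing and CLIP's own normalization (the model choice and file names here are illustrative):

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# preprocess converts a PIL image to a normalized tensor the way CLIP expects
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
tokens = clip.tokenize(["a photo of a dog"]).to(device)

with torch.no_grad():
    image_embed = model.encode_image(image)
    text_embed = model.encode_text(tokens)

# stash the embeddings so prior/decoder training never needs a CLIP forward pass
torch.save({"image_embed": image_embed.cpu(), "text_embed": text_embed.cpu()}, "clip_embeds.pt")
```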
@rom1504 yea, i think the issue is that the decoder will be trained on images that are simply normalized to -1 to 1, but CLIP uses its own normalization: https://github.com/openai/CLIP/blob/main/clip/clip.py#L85. i think what will have to happen is that on the CLIP image embedding forward, we unnormalize the image (back to 0 to 1) and then run the CLIP normalization
I don't understand. |
basically, images usually start off normalized to a range of 0 to 1 (shrunk from 0 to 255). for DDPMs, we normalize them to -1 to 1. for CLIP, if we do all the embedding processing externally, then there is no problem - however, for the decoder, we currently take in the image and do both the DDPM part and derive the CLIP image embedding. so i just have to make sure to unnormalize the image before renormalizing it with what CLIP was trained on, before passing it into the attention net. you can double check my work here! https://github.com/lucidrains/DALLE2-pytorch/blob/main/dalle2_pytorch/dalle2_pytorch.py#L228
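A minimal sketch of that re-normalization step (the constants are CLIP's published preprocessing mean/std from clip/clip.py#L85; the helper name is illustrative, not necessarily the repository's):

```python
import torch

# CLIP's preprocessing statistics, from openai/CLIP
CLIP_MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073]).view(1, 3, 1, 1)
CLIP_STD = torch.tensor([0.26862954, 0.26130258, 0.27577711]).view(1, 3, 1, 1)

def ddpm_to_clip(img: torch.Tensor) -> torch.Tensor:
    # img: (b, 3, h, w) in the decoder's [-1, 1] range
    img = (img + 1) * 0.5  # unnormalize back to [0, 1]
    return (img - CLIP_MEAN.to(img)) / CLIP_STD.to(img)  # apply CLIP's normalization
```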
ah I see, makes sense |