How to calculate the loss #60
Comments
This paper is about adding an additional condition to an existing text-conditioned generative model. Many tasks are mentioned in the paper (e.g. controlling generation with canny edges, with Hough lines, with user scribbles, etc.). For the canny-edge task, c_f is the canny-edge image obtained from the ground-truth image.
So c_f is the ground truth (the canny edge for that task) and your model learns how to make predictions with that input. How can you use that model for another/test image?
During training, there is a set of such inputs and targets ((canny_edge_1, target_image_1), (canny_edge_2, target_image_2), ... (canny_edge_N, target_image_N)), and we minimize the loss on this dataset. The ultimate goal of all neural net training is for the model to generalize (meaning it also works reasonably well on other test images that it may not have seen during training). Hope this helps.
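A toy sketch of that idea: fit on (condition, target) pairs, then apply the fitted model to a condition it never saw. The `predict` function and the dataset here are purely hypothetical stand-ins (a single scalar weight instead of a diffusion U-Net), just to show that minimizing the loss on training pairs yields a model usable on new inputs:

```python
# Toy illustration: fit on (condition, target) pairs, then generalize.
# The "model" is a single scalar weight; a real setup is a diffusion U-Net.

def predict(w, condition):
    # Stand-in model: scales the condition by a learned weight.
    return w * condition

# Hypothetical training pairs (condition_i, target_i) with target = 2 * condition.
dataset = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0
lr = 0.05
for _ in range(200):
    for c, y in dataset:
        grad = 2.0 * (predict(w, c) - y) * c  # d/dw of the squared error
        w -= lr * grad

# The fitted model also works on a condition it never saw during training.
print(round(predict(w, 4.0), 2))  # -> 8.0
```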
Thanks for the answer, but it's still not fully clear to me. During training the input to the model is [(canny_edge_1, target_image_1, text_prompt_1), ... ]. Now, if the target image and text are used as a "hint" for the backward process, that would also mean that the function that learns how to reverse the noise should NOT get the canny_edge_1 image but rather the target_image, which is different from the formula in the paper. I'm really confused and need some help figuring it out.
The diffusion process does NOT add noise to the canny edge input; it adds noise to the target image. Just like how the diffusion process also does NOT add noise to the text input. Compared to standard SD, there is no change in how the forward and reverse diffusion processes work. What is changed is that a new input is created (the canny edge input) to influence the denoising process via the SD Unet. The time variable, text conditioning input, and canny edge input are fed to the ControlNet, which is used to control/modify the SD Unet behavior.
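To make that concrete, here is a minimal sketch of one training step. The names follow the usual DDPM notation (`alpha_bar` is the cumulative noise-schedule product); the schedule values, the toy "images", and the commented-out `unet` call are all hypothetical:

```python
import math
import random

def forward_diffusion(x0, t, alpha_bar):
    # Noise is added ONLY to the target image x0, never to the
    # conditioning inputs (text embedding, canny edge map).
    eps = [random.gauss(0.0, 1.0) for _ in x0]
    a = alpha_bar[t]
    x_t = [math.sqrt(a) * x + math.sqrt(1.0 - a) * e for x, e in zip(x0, eps)]
    return x_t, eps

# Toy "images" as flat lists of pixels.
target_image = [0.2, 0.8, 0.5]
canny_edge   = [0.0, 1.0, 0.0]   # conditioning input, left untouched

alpha_bar = [0.99, 0.9, 0.5]     # hypothetical noise schedule
t = 2

x_t, eps = forward_diffusion(target_image, t, alpha_bar)

# The denoiser is then called with ALL conditioning inputs, e.g.:
#   eps_pred = unet(x_t, t, text_embedding, canny_edge)
#   loss = mse(eps_pred, eps)
print(len(x_t), canny_edge)  # canny_edge is unchanged
```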
To make this easier, just imagine we replace the Unet in SD with a Unet that can take in an additional input (the canny edge input).
Consider this image: to go from x_t to x_t-1 in normal SD, a Unet is used, and the time variable t and the text conditioning data from the CLIP text encoder are input to this Unet. Now, with ControlNet, this Unet is modified, and in addition to the time variable t and the text conditioning data from the CLIP text encoder, it also takes in the canny edge map data.
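Schematically, the only change is the set of inputs to the denoiser. A hypothetical toy sketch below; in the real ControlNet the extra branch is a trainable copy of the SD encoder whose features are added back through zero-initialized convolutions, which the `scale` parameter stands in for here:

```python
def sd_denoise_step(z_t, t, text_emb):
    # Stand-in for the SD U-Net: some function of (z_t, t, text_emb).
    return [z + 0.01 * t + te for z, te in zip(z_t, text_emb)]

def control_features(canny_edge, scale):
    # Stand-in for the ControlNet branch. `scale` plays the role of the
    # zero convolutions: it starts at 0, so at init the branch has no effect.
    return [scale * c for c in canny_edge]

def controlled_denoise_step(z_t, t, text_emb, canny_edge, scale=0.0):
    base = sd_denoise_step(z_t, t, text_emb)
    ctrl = control_features(canny_edge, scale)
    return [b + c for b, c in zip(base, ctrl)]

z_t = [0.5, -0.2]
text_emb = [0.1, 0.1]
canny = [1.0, 0.0]

# With scale=0.0 (zero conv at init), the output equals the plain SD output.
print(controlled_denoise_step(z_t, 3, text_emb, canny) ==
      sd_denoise_step(z_t, 3, text_emb))  # -> True
```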
I didn't fully understand how the loss is calculated.
Regular diffusion models take an input image, add noise to it, and during the backward stage we learn how to undo that noise.
The loss is calculated based on how well we learn that noise at every step in T.
So what I don't understand is: what is the role of the target image? (We add the noise to the input image.)
That is what it says in the paper.
I'm also not sure what the "task-specific conditions c_f" in the loss is. Is it the target image?
Thanks
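For reference, the training objective in the paper is the standard diffusion noise-prediction loss, with the task-specific condition c_f (e.g. the canny edge map, not the target image) passed as an extra input to the noise predictor:

```latex
\mathcal{L} = \mathbb{E}_{z_0,\, t,\, c_t,\, c_f,\, \epsilon \sim \mathcal{N}(0,1)}
\left[ \left\| \epsilon - \epsilon_\theta(z_t,\, t,\, c_t,\, c_f) \right\|_2^2 \right]
```

Here z_0 is the (latent of the) target image, z_t its noised version at step t, c_t the text prompt, and epsilon the noise that was added; the loss measures how well the network recovers that noise.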