Image inpainting #147
Did some further research. I'm open to thoughts on ways of doing this.
Hi @anton-l, just wanted to circle back to this. I'm not sure how I could concatenate the two images and pass them, along with the output, through the diffusion model. Curious if you have any ideas for how to approach this?
Hi @krrishdholakia! By setting
@anton-l How would you calculate the loss at the interim stages for this, since you want it to generate a target image (i.e. the person wearing the clothing) that differs from the concatenated inputs (clothing item + source person image)?
Hey @anton-l, just wanted to follow up on this.
@krrishdholakia the idea would be to feed in the concatenated clothing + person images (6 channels) and have 6 channels as output as well (since the number of channels needs to match to compute the residuals). The first (or last) 3 channels of the output would then be your predicted clothed person, and the other 3 channels can be discarded (not used for the loss calculation). This is similar to how super-resolution is done with diffusion models.
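The scheme described above can be sketched as follows. Note this is a minimal illustration, not the actual training code: a single `Conv2d` stands in for the diffusion UNet (in a real setup you would use something like diffusers' `UNet2DModel` configured with `in_channels=6, out_channels=6`), and the image tensors here are random placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for the diffusion UNet: 6 channels in, 6 channels out.
# A real setup would use a full UNet (e.g. diffusers' UNet2DModel with
# in_channels=6, out_channels=6); the Conv2d is illustrative only.
model = nn.Conv2d(in_channels=6, out_channels=6, kernel_size=3, padding=1)

# Placeholder images (batch, channels, height, width).
clothing = torch.randn(1, 3, 64, 64)  # clothing item image
person = torch.randn(1, 3, 64, 64)    # source person image
target = torch.randn(1, 3, 64, 64)    # ground-truth clothed person

# Concatenate the two conditioning images along the channel dimension.
x = torch.cat([clothing, person], dim=1)  # shape: (1, 6, 64, 64)
pred = model(x)                           # shape: (1, 6, 64, 64)

# Only the first 3 output channels are the predicted clothed person;
# the remaining 3 are discarded and excluded from the loss.
loss = F.mse_loss(pred[:, :3], target)
loss.backward()
```

The key point is that the channel split happens only at loss time: the network itself always maps 6 channels to 6 channels so the residual shapes match, and the unused output channels simply receive no gradient signal from the loss.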
Hey @krrishdholakia, not quite what you're looking for, but we now have an in-painting example with Stable Diffusion here: https://github.com/huggingface/diffusers/tree/main/examples/inference#in-painting-using-stable-diffusion
Hi, two quick questions around this: