
Inpainting implementation #35

Closed
oobabooga opened this issue Aug 30, 2022 · 9 comments

Comments

oobabooga (Contributor) commented Aug 30, 2022

Hello,

Is the current inpainting implementation equivalent to this?

Sygil-Dev/sygil-webui#308

Thank you


oobabooga commented Aug 31, 2022

The alternative implementation seems to be different and to give better results; I have implemented it in PR #36 for your consideration.

1blackbar commented Aug 31, 2022

I tested both; it's hard to tell. You might get lucky with a result and think "wow, it works better". I don't think either of them uses the proper inpainting model from #20.

Not sure how that model would change the results, but it's working well without it so far. k_euler_a gets the best alignment with the original image; the other samplers not so much.


oobabooga commented Aug 31, 2022

I think you may be right; it might have been pure luck. I have also gotten great results with euler a and no blurring.

Maybe we should run a side-by-side test with the same settings to confirm that there is no qualitative difference?

1blackbar commented Aug 31, 2022

This one also has inpainting, but without a prompt, which I think is incomplete. I tried to remove eyes and it won't do it; I think SD inpainting won't work without a prompt.
https://github.com/shinomakoi/sd-dreamer

You mean the same seed for inpainting? There's no way to draw the same-looking mask in both UIs. I had great results with this repo's inpainting, but the mask blurring code is reversed: the blur that is supposed to soften the edges is applied to the inpainted image instead of the original image, which gives very harsh edges. https://i.postimg.cc/3rfxrP2J/inp.jpg
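The harsh-edge issue described above comes down to where the feathering is applied. Here is a minimal sketch of the intended behavior, assuming a feathered-mask composite; the names `box_blur` and `composite_inpaint` are illustrative, not this repo's actual code:

```python
# Hypothetical sketch of feathered-mask compositing; names are illustrative,
# not this repo's actual implementation.
import numpy as np

def box_blur(mask, radius):
    """Separable box blur via cumulative sums, used to feather the mask."""
    k = 2 * radius + 1
    padded = np.pad(mask, radius, mode="edge")
    c = np.cumsum(padded, axis=0)
    c = np.vstack([np.zeros((1, c.shape[1])), c])
    rows = (c[k:] - c[:-k]) / k              # vertical pass
    c2 = np.cumsum(rows, axis=1)
    c2 = np.hstack([np.zeros((c2.shape[0], 1)), c2])
    return (c2[:, k:] - c2[:, :-k]) / k      # horizontal pass

def composite_inpaint(original, inpainted, mask, radius=4):
    """Blend inpainted pixels into the original through a blurred mask.

    original, inpainted: float arrays in [0, 1], shape (H, W, 3)
    mask: float array in [0, 1], shape (H, W); 1 = region to inpaint
    """
    # Feather the *mask*, not the inpainted image: blurring the mask makes
    # the transition fade gradually, while blurring the inpainted pixels
    # themselves leaves a hard seam at the mask boundary.
    soft = np.clip(box_blur(mask.astype(np.float64), radius), 0.0, 1.0)[..., None]
    return original * (1.0 - soft) + inpainted * soft
```

With the blur applied to the blending mask, the transition between the original and inpainted regions fades over roughly `radius` pixels instead of cutting hard.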

oobabooga (Contributor, Author)

Even though drawing exactly the same mask twice would be impossible, a rough comparison would still be valuable, IMO.
I would do it myself, but for some reason I can't get the hlky fork or the anon-hlhl fork to run in optimized mode anymore, so I can't do the test.

I agree that the edges are looking rough on the implementation here.

oobabooga (Contributor, Author)

I have managed to make a comparison by running the hlky repo on Colab (https://github.com/daswer123/stable-diffusion-colab). It's not ideal since the masks are not exactly the same, but I think it illustrates the differences.

The anon-hlhl implementation that was merged into the hlky webui seems to blend the masked part of the image better with its surroundings, while the current implementation here often gives an "image inside an image" effect where the masked part loses connection with the surrounding context.

Seeds: 42, 43, 44, 45
Denoising strength: 1
Euler_a
20 steps
Remaining settings left as default

[image: out]

oobabooga (Contributor, Author)

@AUTOMATIC1111

AUTOMATIC1111 (Owner)

Used this mask:
[image: xmask]

Results for what I had originally:
[image: grid-1605]

Results for denoising via CFGDenoiser:
[image: grid-1604]

All with same seeds and everything.

Defaults are very different in the two repos, so comparing at defaults is as meaningful as you consider it.

AUTOMATIC1111 added a commit that referenced this issue on Sep 1, 2022: "support for --medvram", "attempt to support share".
orionaskatu referenced this issue in orionaskatu/stable-diffusion-webui on Sep 1, 2022.
oobabooga (Contributor, Author)

Thank you so much @AUTOMATIC1111! Your rigorous test settles it: it was probably all just cherry-picking and superstition on my end.

It seemed reasonable to assume that doing the masking procedure for each iteration of the denoising process would give better results, but that doesn't seem to be the case.

The new CFGDenoiser implementation allows me to keep experimenting with this. I will do that and let you know if I find something that improves inpainting quality.
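The per-step masking idea discussed above can be sketched as follows. This is a toy illustration with assumed names (`toy_denoise_step`, `inpaint_denoise`) and a simplified linear noise schedule, not the actual CFGDenoiser code: at every denoising step, the region outside the mask is reset to a re-noised copy of the original latent, so only the masked area is free to change.

```python
# Toy sketch of per-step latent masking during denoising; an assumed scheme,
# not the repo's actual CFGDenoiser implementation.
import numpy as np

rng = np.random.default_rng(42)

def toy_denoise_step(latent, step, total_steps):
    # Stand-in for a real model step: shrink the latent a little each step.
    return latent * (1.0 - 1.0 / (total_steps - step + 1))

def inpaint_denoise(init_latent, mask, total_steps=20):
    """mask: 1.0 = inpaint (free to change), 0.0 = keep original."""
    latent = init_latent + rng.standard_normal(init_latent.shape)
    for step in range(total_steps):
        latent = toy_denoise_step(latent, step, total_steps)
        # Simplified linear schedule: remaining noise level at this step.
        sigma = 1.0 - (step + 1) / total_steps
        noised_orig = init_latent + sigma * rng.standard_normal(init_latent.shape)
        # Per-step masking: outside the mask, restore the (re-noised) original
        # so the unmasked region converges back to the source image.
        latent = latent * mask + noised_orig * (1.0 - mask)
    return latent
```

Whether this per-step masking actually beats a single composite at the end is exactly what the comparison in this thread tested; per the results above, the difference appears negligible.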
