
StableDiffusionControlNetInpaint destroying original image contrast and sharpness #8392

Closed
xalteropsx opened this issue Jun 3, 2024 · 11 comments

Comments

@xalteropsx

Why does the ControlNet inpaint pipeline destroy the original image's color contrast?

@asomoza
Member

asomoza commented Jun 3, 2024

Hi, you're not giving any information about what you're doing, the code you're using, or even the result image. We can't help you if you don't provide a minimal reproducible example.

If I had to guess, I'd say you're using the controlnet with too much strength; also, the inpaint model does make the image a little less saturated depending on the denoise strength.
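For reference, both knobs are exposed on the pipeline call. A minimal sketch of dialing them down, with illustrative values and hypothetical variable names (pipe, init_image, mask, control_image are assumed to already exist):

# values below are only illustrative, not a recommendation
image = pipe(
    "your prompt",
    image=init_image,
    mask_image=mask,
    control_image=control_image,
    controlnet_conditioning_scale=0.5,  # weaker controlnet influence (default is 1.0)
    strength=0.8,                       # lower denoise strength keeps more of the original
    num_inference_steps=20,
).images[0]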

@xalteropsx
Author

Sorry, I forgot to provide a reproduction, give me a few minutes.

@xalteropsx
Author

xalteropsx commented Jun 3, 2024

@asomoza

import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline, UniPCMultistepScheduler

# `batman` (input image), `mask` (inpainting mask) and `control_image`
# (inpaint conditioning image) are prepared beforehand.
controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16, use_safetensors=True)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained("frankjoshua/dreamshaper_8Inpainting", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True)

pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

image = pipe(
    "corgi face with large ears, detailed, pixar, animated, disney",
    eta=1.0,
    image=batman,
    control_image=control_image,
    num_inference_steps=20,
    mask_image=mask,
).images[0]
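
For context, the conditioning image for lllyasviel/control_v11p_sd15_inpaint is normally built by marking the masked pixels with -1, along the lines of the helper shown in the diffusers docs. A sketch, assuming batman and mask are PIL images (this may not be exactly what was used here):

import numpy as np
import torch

def make_inpaint_condition(image, image_mask):
    # scale both to float arrays in [0, 1]
    image = np.array(image.convert("RGB")).astype(np.float32) / 255.0
    image_mask = np.array(image_mask.convert("L")).astype(np.float32) / 255.0
    assert image.shape[0:1] == image_mask.shape[0:1], "image and mask must have the same size"
    # masked pixels are set to -1 so the controlnet knows where to inpaint
    image[image_mask > 0.5] = -1.0
    image = np.expand_dims(image, 0).transpose(0, 3, 1, 2)
    return torch.from_numpy(image)

control_image = make_inpaint_condition(batman, mask)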

@xalteropsx
Author

xalteropsx commented Jun 3, 2024

[attached images: "disney" and "batman"]

@xalteropsx
Author

xalteropsx commented Jun 3, 2024

[attached image]

Test it yourself and see the result; I think it will be the same with every model.

What does the non-masked area have to do with the inpainting area? Can't we control it?

@asomoza
Member

asomoza commented Jun 3, 2024

The difference you see is mostly the VAE encoding and decoding; this is a lossy process, so no matter what you do you'll always lose some detail.

Also, you're using an inpainting model with an inpaint controlnet. You don't really need both, as they do the same thing. If you use the controlnet, you have to pass the whole image as context and get a new one back, so it will always be different.

If you want to preserve the original image as much as possible, use an inpainting model without the controlnet and pass padding_mask_crop, which only changes the area of the mask.
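
A minimal sketch of that suggestion, assuming the same dreamshaper inpainting checkpoint and PIL inputs (the padding value of 32 is just an example):

import torch
from diffusers import AutoPipelineForInpainting

pipe = AutoPipelineForInpainting.from_pretrained("frankjoshua/dreamshaper_8Inpainting", torch_dtype=torch.float16, use_safetensors=True)
pipe.enable_model_cpu_offload()

image = pipe(
    "corgi face with large ears, detailed, pixar, animated, disney",
    image=batman,
    mask_image=mask,
    padding_mask_crop=32,  # only the cropped region around the mask is re-encoded
    num_inference_steps=20,
).images[0]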

@xalteropsx
Author

xalteropsx commented Jun 3, 2024

@asomoza Actually, I have some inpainting models; if I use a normal model with them, it shows a model size mismatch.
I will check padding_mask_crop >.</ brb, doing some daily quests; once done I'll tell you the result.

xalteropsx reopened this Jun 3, 2024
@xalteropsx
Author

xalteropsx commented Jun 3, 2024

@asomoza Sorry bro for tagging you again. It works like a charm, but I have something to ask about padding_mask_crop: is it measured against the mask or the whole image? What does 32 mean if we pass it as the padding?
Also, it doesn't support multiple images?

  File "Z:\software\python11\Lib\site-packages\diffusers\pipelines\controlnet\pipeline_controlnet_inpaint.py", line 772, in check_inputs
    raise ValueError(
ValueError: The image should be a PIL image when inpainting mask crop, but is of type <class 'list'>.

@asomoza
Member

asomoza commented Jun 3, 2024

padding_mask_crop: is it measured against the mask or the whole image? What does 32 mean if we pass it as the padding?

I don't fully understand what you're trying to say, but when you enable padding_mask_crop, the image gets cropped around the mask, upscaled, inpainted, and then scaled back down and pasted over the same part of the original image.

The padding just tells it how much margin you want between the mask and the border of the cropped region.

also, it doesn't support multiple images?

Yeah, I never use multiple images with inpainting, and I wasn't here when it was implemented, but the logic is probably that it's a task specific to each image, so there isn't much need to make it multi-image.

I'm curious what you're doing that requires the same inpainting for multiple images.
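
If you do need it, a simple workaround is to loop over the pairs yourself, since padding_mask_crop expects a single PIL image. A sketch, assuming pipe is the inpainting pipeline from above and images/masks are lists of PIL images:

results = []
for img, msk in zip(images, masks):
    out = pipe(
        "corgi face with large ears, detailed, pixar, animated, disney",
        image=img,
        mask_image=msk,
        padding_mask_crop=32,
        num_inference_steps=20,
    ).images[0]
    results.append(out)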

@xalteropsx
Author

xalteropsx commented Jun 3, 2024

Seems like I got it. Ah, you're correct, the same mask on multiple images isn't needed much for inpainting, but sometimes it's good to have.

@xalteropsx
Author

Thanks a lot bro >.</
