StableDiffusionControlNetInpaint destroying original image contrast and sharpness #8392
Hi, you're not giving any information on what you're doing, the code you're using, or even the result image. We can't help you if you don't provide a minimal reproducible example. If I have to guess, you're using the controlnet with too much strength; also, the inpainting model does make the image a little less saturated depending on the denoise strength.

Sorry, I forgot to provide a reproduction, give me a few minutes.
```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline, UniPCMultistepScheduler

# `batman`, `mask` and `control_image` are the input image, inpaint mask and
# inpaint conditioning image (their loading code was not shown in the report).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16, use_safetensors=True
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "frankjoshua/dreamshaper_8Inpainting",
    controlnet=controlnet,
    torch_dtype=torch.float16,
    use_safetensors=True,
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

image = pipe(
    "corgi face with large ears, detailed, pixar, animated, disney",
    eta=1.0,
    image=batman,
    control_image=control_image,
    num_inference_steps=20,
    mask_image=mask,
).images[0]
```
The difference you see is mostly the VAE encoding and decoding; that is a lossy process, so no matter what you do you'll always lose some detail. Also, you're using an inpainting model with an inpaint controlnet. You don't really need both, as they do the same thing. If you use the controlnet you have to pass the whole image as context and get a new one back, so it will always be different. If you want to preserve the original image as much as possible, use an inpainting model without the controlnet and use `padding_mask_crop`, which only changes the area of the mask.

@asomoza actually I have an inpainting model; if I use a normal model with it, it shows a model size mismatch.
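The "only changes the area of the mask" behavior can be illustrated without running a diffusion model: the regenerated pixels are pasted back only inside the mask, so everything outside stays byte-identical to the original. A minimal sketch of that paste-back step using Pillow (the solid-color images here are stand-ins, not diffusers' actual internals):

```python
from PIL import Image

# Stand-ins for the original image and a VAE round-tripped inpaint result.
original = Image.new("RGB", (64, 64), (200, 50, 50))
inpainted = Image.new("RGB", (64, 64), (10, 10, 10))

# Binary mask: white = region to regenerate, black = keep original.
mask = Image.new("L", (64, 64), 0)
mask.paste(255, (16, 16, 48, 48))

# Take regenerated pixels where the mask is white, original pixels elsewhere.
result = Image.composite(inpainted, original, mask)

assert result.getpixel((0, 0)) == (200, 50, 50)   # outside mask: untouched
assert result.getpixel((32, 32)) == (10, 10, 10)  # inside mask: new content
```

Without this crop-and-paste, the whole image goes through the VAE, which is where the global contrast and saturation drift comes from.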
@asomoza sorry to tag you again, bro, it works like a charm, but I have a question about `padding_mask_crop`: is the padding measured from the mask or from the whole image? What does 32 mean when we pass it as the padding? Also I'm getting this:

```
File "Z:\software\python11\Lib\site-packages\diffusers\pipelines\controlnet\pipeline_controlnet_inpaint.py", line 772, in check_inputs
    raise ValueError(
ValueError: The image should be a PIL image when inpainting mask crop, but is of type <class 'list'>.
```
I don't fully understand what you're trying to say, but when you enable `padding_mask_crop`, the padding just tells how much space you want between the mask and the original image.
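Conceptually, the padding is measured from the mask, not the whole image: the crop box is the mask's bounding box grown by the padding value and clamped to the image borders, so `padding_mask_crop=32` means roughly 32 extra pixels of context around the mask. A rough sketch of the idea (an illustration only, not diffusers' exact implementation, which also adjusts the crop for aspect ratio; `crop_box_from_mask` is a hypothetical helper):

```python
def crop_box_from_mask(mask_bbox, padding, image_size):
    """Expand the mask's bounding box by `padding` pixels, clamped to the image.

    mask_bbox: (left, top, right, bottom) of the mask's nonzero region.
    image_size: (width, height) of the full image.
    """
    left, top, right, bottom = mask_bbox
    width, height = image_size
    return (
        max(0, left - padding),
        max(0, top - padding),
        min(width, right + padding),
        min(height, bottom + padding),
    )

# Mask covers (100, 120)-(200, 220) in a 512x512 image, padding of 32 pixels:
box = crop_box_from_mask((100, 120, 200, 220), 32, (512, 512))
print(box)  # (68, 88, 232, 252)
```

Only that cropped region is inpainted and pasted back, which is why the rest of the image keeps its original contrast.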
Yeah, I never use multiple images with inpainting, and I wasn't here when it was implemented, but the logic probably is that it's a task specific to each image, so there isn't much need to make it multi-image. I'm curious what you're doing that requires the same inpainting across multiple images.

Seems like I got it. Ah, you're correct, applying the same mask to multiple images for inpainting isn't needed often, but sometimes it's good to have.

Thanks a lot bro >.<
Why does the ControlNet inpaint destroy the original color contrast?