Do we need to do all those conversions for inpainting? #13

Open
geekyayush opened this issue Feb 27, 2023 · 5 comments

Comments

geekyayush commented Feb 27, 2023

Hello
Do we need to do all those conversions mentioned under "ControlNet + Anything-v3" for inpainting?

Also, the inpainting guide contains these lines:

import torch
# imports assume this repo's patched diffusers, which provides StableDiffusionControlNetInpaintPipeline
from diffusers import StableDiffusionControlNetInpaintPipeline, StableDiffusionInpaintPipeline

# we have downloaded models locally, you can also load from huggingface
# control_sd15_seg is converted from control_sd15_seg.safetensors using the instructions above
pipe_control = StableDiffusionControlNetInpaintPipeline.from_pretrained("./diffusers/control_sd15_seg", torch_dtype=torch.float16).to('cuda')
pipe_inpaint = StableDiffusionInpaintPipeline.from_pretrained("./diffusers/stable-diffusion-inpainting", torch_dtype=torch.float16).to('cuda')

Can anyone help me understand what the second line means?
Also, for pipe_inpaint, do we pass the path to the Stable Diffusion diffusers model?

haofanwang (Owner) commented Feb 27, 2023

Thanks for your interest! @geekyayush

  1. Yes, you should strictly follow our instructions.
  2. pipe_inpaint is an inpainting model based on Stable Diffusion; we use runwayml/stable-diffusion-inpainting. We cannot directly load a plain Stable Diffusion model such as runwayml/stable-diffusion-v1-5: although both are based on stable-diffusion-1.5, their UNet input channels differ (a quick check is sketched below).
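
Not from the repo, just for illustration: comparing the UNet configs of the two checkpoints mentioned above with the diffusers config loader makes the mismatch visible.

from diffusers import UNet2DConditionModel

# compare the UNet input channels of the two checkpoints
sd_cfg = UNet2DConditionModel.load_config("runwayml/stable-diffusion-v1-5", subfolder="unet")
inpaint_cfg = UNet2DConditionModel.load_config("runwayml/stable-diffusion-inpainting", subfolder="unet")

print(sd_cfg["in_channels"])       # 4: image latents only
print(inpaint_cfg["in_channels"])  # 9: image latents + inpainting mask + masked-image latents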

@UglyStupidHonest

Hey, just following up here! This might be a newbie misconception, but if we replace the UNet here, do we not lose the custom model, in this case Anything-v3? Or is it really just replacing the inpainting channels?

@geekyayush (Author)

Thanks @haofanwang!

I have another question regarding this.
If I want to use an inpainting model fine-tuned with DreamBooth on top of SD, will the following work?

pipe_control = StableDiffusionControlNetInpaintPipeline.from_pretrained("./diffusers/control_sd15_seg", torch_dtype=torch.float16).to('cuda')
pipe_inpaint = StableDiffusionInpaintPipeline.from_pretrained("./diffusers/my-dreambooh-inpaint-model", torch_dtype=torch.float16).to('cuda')

Here, for pipe_control I am using the same control_sd15_seg model, and for pipe_inpaint I am using my custom-trained model.

Thanks!

@haofanwang (Owner)

Let me answer all of your concerns here.

@UglyStupidHonest You are right. For now, if you want to equip ControlNet with inpainting ability, you have to replace the whole base model, which means you cannot use Anything-v3 here. I did try replacing only the input layer while keeping all other layers from Anything-v3, but it worked poorly.
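
For reference, a rough sketch of what that input-layer swap could look like (the Anything-v3 path below is hypothetical, and as noted above the result was poor):

import torch
from diffusers import UNet2DConditionModel

# load the Anything-v3 UNet (path is hypothetical) and the 9-channel inpainting UNet
unet_any = UNet2DConditionModel.from_pretrained("./diffusers/anything-v3", subfolder="unet", torch_dtype=torch.float16)
unet_inpaint = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-inpainting", subfolder="unet", torch_dtype=torch.float16)

# graft only the input convolution so the UNet accepts image latents + mask + masked-image latents (9 channels)
unet_any.conv_in = unet_inpaint.conv_in
unet_any.register_to_config(in_channels=unet_inpaint.config.in_channels)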

@geekyayush If your inpainting model has the exact same layers as stable-diffusion-1.5, then it should work. You can treat ControlNet as a pluggable module that can be inserted into any stable-diffusion-1.5-based model.
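
A quick compatibility check along those lines (the custom path is the one from the question above; the keys compared are only a suggestion):

from diffusers import UNet2DConditionModel

# compare the fine-tuned UNet's config against the reference inpainting UNet
custom_cfg = UNet2DConditionModel.load_config("./diffusers/my-dreambooh-inpaint-model", subfolder="unet")
reference_cfg = UNet2DConditionModel.load_config("runwayml/stable-diffusion-inpainting", subfolder="unet")

for key in ("in_channels", "out_channels", "cross_attention_dim", "block_out_channels"):
    # these should match; in particular in_channels should be 9 for an inpainting UNet
    print(key, custom_cfg[key], reference_cfg[key])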

@hyperia-elliot

Is StableDiffusionControlNetInpaintPipeline currently operable? Trying the sample code in this repo with the provided input images, segmentation map, and specified models gives the following result in my environment:

Traceback (most recent call last):

/ingest/ImageDiffuserService/client/inpaint_proto.py:31 in <module>

    28 # the segmentation result is generated from https://huggingface.co/spaces/hysts/ControlN
    29 control_image = load_image('segmap.png')
    30
❱   31 image = pipe_control(prompt="Face of a yellow cat, high resolution, sitting on a park be
    32                      negative_prompt="lowres, bad anatomy, worst quality, low quality",
    33                      controlnet_hint=control_image,
    34                      image=image,

/home/devuser/anaconda3/envs/pytorch-env/lib/python3.8/site-packages/torch/autograd/grad_mode.py:27 in decorate_context

    24     @functools.wraps(func)
    25     def decorate_context(*args, **kwargs):
    26         with self.clone():
❱   27             return func(*args, **kwargs)
    28     return cast(F, decorate_context)
    29
    30 def _wrap_generator(self, func):

/home/devuser/anaconda3/envs/pytorch-env/lib/python3.8/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet_inpaint.py:793 in __call__

   790
   791     if controlnet_hint is not None:
   792         # ControlNet predict the noise residual
❱  793         control = self.controlnet(
   794             latent_model_input, t, encoder_hidden_states=prompt_embeds, cont
   795         )
   796         control = [item for item in control]

/home/devuser/anaconda3/envs/pytorch-env/lib/python3.8/site-packages/torch/nn/modules/module.py:1130 in _call_impl

  1127         # this function, and just call forward.
  1128         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o
  1129                 or _global_forward_hooks or _global_forward_pre_hooks):
❱ 1130             return forward_call(*input, **kwargs)
  1131         # Do not call functions when jit is used
  1132         full_backward_hooks, non_full_backward_hooks = [], []
  1133         if self._backward_hooks or _global_backward_hooks:

TypeError: forward() got an unexpected keyword argument 'controlnet_hint'
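
One possible cause (just a guess, not confirmed here): the controlnet the pipeline ends up calling is a stock diffusers ControlNetModel, whose forward() does not accept a controlnet_hint keyword, rather than the module this custom pipeline expects. A quick way to check what was actually loaded, assuming pipe_control from the sample code above:

import inspect

# inspect the controlnet module the pipeline is actually calling
print(type(pipe_control.controlnet))
print(inspect.signature(pipe_control.controlnet.forward))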
