
[bug]: unable to inpaint in webUI using RunwayML inpainting model #1596

Closed · 1 task done · Fixed by #2088
lstein opened this issue Nov 28, 2022 · 0 comments

Labels: bug Something isn't working

lstein commented Nov 28, 2022

Is there an existing issue for this?

  • I have searched the existing issues

OS

Linux

GPU

AMD

VRAM

32GB

What happened?

After setting the model to inpaint-1.5, I loaded a 512x512 IAI-generated image into the new Unified Canvas panel, masked out a region of the image, entered a prompt and pressed "Invoke". The render got all the way to the end and then errored out. In the terminal, the following traceback appeared:

```
>> Ksampler using karras noise schedule (steps < 30)
Generating:   0%|                                                                                                                                                        | 0/1 [00:00<?, ?it/s]>> Sampling with k_heun starting at step 0 of 20 (20 new sampling steps)
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:03<00:00,  6.09it/s]
Generating:   0%|                                                                                                                                                        | 0/1 [00:03<?, ?it/s]
'Omnibus' object has no attribute 'pil_mask'


Traceback (most recent call last):
  File "/data/lstein/InvokeAI/backend/invoke_ai_web_server.py", line 1116, in generate_images
    self.generate.prompt2image(
  File "/data/lstein/InvokeAI/ldm/generate.py", line 482, in prompt2image
  File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 91, in generate
    image = make_image(x_T)
  File "/data/lstein/InvokeAI/ldm/invoke/generator/omnibus.py", line 136, in make_image
    return self.sample_to_image(samples)
  File "/data/lstein/InvokeAI/ldm/invoke/generator/omnibus.py", line 166, in sample_to_image
    if self.pil_image is None or self.pil_mask is None:
AttributeError: 'Omnibus' object has no attribute 'pil_mask'
```

On another occasion, a similar sequence of actions resulted in a different message:

```
>> Ksampler using karras noise schedule (steps < 30)
Generating:   0%|                                                                                                                                                        | 0/1 [00:00<?, ?it/s]>> Sampling with k_heun starting at step 0 of 20 (20 new sampling steps)
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:03<00:00,  6.09it/s]
Generating:   0%|                                                                                                                                                        | 0/1 [00:03<?, ?it/s]
operands could not be broadcast together with shapes (512,320) (704,512)


Traceback (most recent call last):
  File "/data/lstein/InvokeAI/backend/invoke_ai_web_server.py", line 1116, in generate_images
    self.generate.prompt2image(
  File "/data/lstein/InvokeAI/ldm/generate.py", line 482, in prompt2image
  File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 91, in generate
    image = make_image(x_T)
  File "/data/lstein/InvokeAI/ldm/invoke/generator/omnibus.py", line 136, in make_image
    return self.sample_to_image(samples)
  File "/data/lstein/InvokeAI/ldm/invoke/generator/omnibus.py", line 169, in sample_to_image
    corrected_result = super(Img2Img, self).repaste_and_color_correct(gen_result, self.pil_image, self.pil_mask, self.mask_blur_radius)
  File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 140, in repaste_and_color_correct
    mask_pixels = init_a_pixels * init_mask_pixels > 0
ValueError: operands could not be broadcast together with shapes (512,320) (704,512)
```

The first traceback is reproducible. I haven't been able to recreate the second.
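
For context, here is a minimal, hypothetical reconstruction of the first failure mode. The attribute names come straight from the traceback, but the class body and the getattr guard are assumptions about how the crash could be avoided, not the fix that eventually landed in #2088.

```python
# Hypothetical sketch: the inpainting code path sets pil_image but never pil_mask,
# so testing `self.pil_mask is None` raises AttributeError instead of returning True.
class Omnibus:
    def __init__(self):
        self.pil_image = None   # assigned by the canvas/img2img path
        # self.pil_mask is never assigned on this code path

    def needs_fallback_unsafe(self):
        # Mirrors the failing line in sample_to_image():
        return self.pil_image is None or self.pil_mask is None   # raises AttributeError

    def needs_fallback_guarded(self):
        # Assumed workaround: treat a missing attribute the same as None and
        # skip the repaste/color-correction step when either piece is absent.
        return (getattr(self, "pil_image", None) is None
                or getattr(self, "pil_mask", None) is None)

gen = Omnibus()
print(gen.needs_fallback_guarded())   # True -> return the raw sample unmodified
```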

Screenshots

No response

Additional context

No response

Contact Details

lincoln.stein@gmail.com

lstein added the bug label Nov 28, 2022
lstein added a commit that referenced this issue Nov 30, 2022
- error was "Omnibus object has no attribute pil_image"
- closes #1596
lstein closed this as completed in 0f4d71e Nov 30, 2022
lstein added a commit that referenced this issue Dec 20, 2022
When using the inpainting model, the following sequence of events
would cause a predictable crash:

1. Use unified canvas to outcrop a portion of the image.
2. Accept outcropped image and import into img2img
3. Try any img2img operation

This closes #1596.

The crash was:

```
operands could not be broadcast together with shapes (320,512) (512,576)

Traceback (most recent call last):
  File "/data/lstein/InvokeAI/backend/invoke_ai_web_server.py", line 1125, in generate_images
    self.generate.prompt2image(
  File "/data/lstein/InvokeAI/ldm/generate.py", line 492, in prompt2image
    results = generator.generate(
  File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 98, in generate
    image = make_image(x_T)
  File "/data/lstein/InvokeAI/ldm/invoke/generator/omnibus.py", line 138, in make_image
    return self.sample_to_image(samples)
  File "/data/lstein/InvokeAI/ldm/invoke/generator/omnibus.py", line 173, in sample_to_image
    corrected_result = super(Img2Img, self).repaste_and_color_correct(gen_result, self.pil_image, self.pil_mask, self.mask_blur_radius)
  File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 148, in repaste_and_color_correct
    mask_pixels = init_a_pixels * init_mask_pixels > 0
ValueError: operands could not be broadcast together with shapes (320,512) (512,576)
```

This error was caused by the image and its mask not being of identical
size due to the outcropping operation. The ultimate cause of this
error has something to do with different code paths being followed in
the `inpaint` vs the `omnibus` modules.

Since omnibus will be obsoleted by diffusers, I have chosen just to
work around the problem rather than track it down to its source. The
only ill effect is that color correction will not be applied to the
first image created by `img2img` after applying the outcrop and
immediately importing into the img2img canvas. Since the inpainting
model has less of a color drift problem than the standard model, this
is unlikely to be problematic.
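
To make the size mismatch concrete, here is a small standalone sketch of the failing operation in `repaste_and_color_correct`. The array names mirror the traceback, but the resize-to-match guard is only an assumed illustration; per the commit message above, the actual workaround skips color correction in this situation instead.

```python
# Hypothetical illustration of the broadcast failure: after an outcrop, the init
# image and its mask can have different sizes, so the element-wise product
# `init_a_pixels * init_mask_pixels` raises ValueError.
import numpy as np
from PIL import Image

def masked_pixels(init_image: Image.Image, init_mask: Image.Image) -> np.ndarray:
    if init_image.size != init_mask.size:
        # Assumed guard: bring the mask back to the image's size before masking.
        init_mask = init_mask.resize(init_image.size)
    init_a_pixels = np.asarray(init_image.convert("RGBA"), dtype=np.float32)[:, :, 3]
    init_mask_pixels = np.asarray(init_mask.convert("L"), dtype=np.float32)
    return init_a_pixels * init_mask_pixels > 0   # boolean map of pixels to repaste

# Reproduces the shapes from the report: a 320x512 image against a 512x704 mask.
image = Image.new("RGBA", (320, 512), (0, 0, 0, 255))
mask = Image.new("L", (512, 704), 255)
print(masked_pixels(image, mask).shape)   # (512, 320) once the sizes are reconciled
```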