[Bug]: get_crop_region_v2 error when "Only masked padding, pixels" is set to 0 #15593

Closed
4 of 6 tasks
Bing-su opened this issue Apr 22, 2024 · 2 comments
Labels: bug-report (Report of a bug, yet to be confirmed)

Bing-su (Contributor) commented Apr 22, 2024

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

When inpainting in img2img, if you set Inpaint area = Only masked and Only masked padding, pixels = 0, you get the TypeError below:

*** Error completing request
*** Arguments: ('task(76p88mz5fyym2hp)', <gradio.routes.Request object at 0x00000221AC63C2B0>, 2, 'masterpiece, best quality, 1girl, __woman_clothes__, __places__, large breasts, <lora:add_detail:0.5>', '(worst quality, low quality:1.1), text, title, logo, signature, (EasyNegativeV2:0.7), (negative_hand-neg:0.7)', [], None, None, {'image': <PIL.Image.Image image mode=RGBA size=512x768 at 0x221ACADF820>, 'mask': <PIL.Image.Image image mode=RGB size=512x768 at 0x221ACC862C0>}, None, None, None, None, 4, 0, 1, 1, 1, 7.5, 1.5, 0.75, 0.0, 768, 512, 1, 0, 1, 0, 0, '', '', '', [], False, [], '', 0, 20, 'DPM++ 2M', 'Automatic', False, 1, 0.5, 4, 0, 0.5, 2, -1, False, -1, 0, 0, 0, False, '', 0.8, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=True, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=True, hr_option='Both', 
save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=True, hr_option='Both', save_detected_map=True, advanced_weighting=None), '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, '<p style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type</p>', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\stable-diffusion-webui\modules\img2img.py", line 232, in img2img
        processed = process_images(p)
      File "D:\stable-diffusion-webui\modules\processing.py", line 845, in process_images
        res = process_images_inner(p)
      File "D:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "D:\stable-diffusion-webui\modules\processing.py", line 915, in process_images_inner
        p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
      File "D:\stable-diffusion-webui\modules\processing.py", line 1621, in init
        crop_region = masking.expand_crop_region(crop_region, self.width, self.height, mask.width, mask.height)
      File "D:\stable-diffusion-webui\modules\masking.py", line 45, in expand_crop_region
        ratio_crop_region = (x2 - x1) / (y2 - y1)
    TypeError: unsupported operand type(s) for -: 'tuple' and 'int'

Steps to reproduce the problem

  1. Start img2img inpainting
  2. Set Inpaint area = Only masked and Only masked padding, pixels = 0
  3. Generate

What should have happened?

No TypeError...

What browsers do you use to access the UI?

Google Chrome

Sysinfo

sysinfo-2024-04-22-13-42.json

Console logs

Python 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:40:08) [MSC v.1938 64 bit (AMD64)]
Version: v1.9.2-113-g97fd7b24
Commit hash: 97fd7b2485f7c42757b622040ac880f66ff1dc43
Launching Web UI with arguments: --xformers --api
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.4.2, num models: 13
ControlNet preprocessor location: D:\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2024-04-22 22:35:09,481 - ControlNet - INFO - ControlNet v1.1.443
2024-04-22 22:35:09,647 - ControlNet - INFO - ControlNet v1.1.443
Loading weights [5998292c04] from D:\stable-diffusion-webui\models\Stable-diffusion\Counterfeit-V3.0_fp16-no-ema.safetensors
2024-04-22 22:35:10,041 - ControlNet - INFO - ControlNet UI callback registered.
Creating model from config: D:\stable-diffusion-webui\configs\v1-inference.yaml
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 15.3s (prepare environment: 3.3s, import torch: 4.3s, import gradio: 1.5s, setup paths: 0.7s, initialize shared: 0.3s, other imports: 0.7s, load scripts: 2.1s, create ui: 0.8s, gradio launch: 0.4s, add APIs: 1.1s).
Loading VAE weights specified in settings: D:\stable-diffusion-webui\models\VAE\kl-f8-anime2.safetensors
Applying attention optimization: xformers... done.
Model loaded in 5.4s (load weights from disk: 0.9s, create model: 1.4s, apply weights to model: 2.6s, load VAE: 0.1s, calculate empty prompt: 0.2s).
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:06<00:00,  3.01it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:04<00:00,  4.11it/s]
0: 640x448 1 face, 111.6ms
Speed: 2.0ms preprocess, 111.6ms inference, 38.5ms postprocess per image at shape (1, 3, 640, 448)
modules\processing.py:1618 StableDiffusionProcessingImg2Img.init
    crop_region: (
        161,
        67,
        403,
        309,
    ) (tuple) len=4
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00,  6.55it/s]

0: 640x448 1 face, 10.6ms
Speed: 4.0ms preprocess, 10.6ms inference, 1.0ms postprocess per image at shape (1, 3, 640, 448)
modules\processing.py:1618 StableDiffusionProcessingImg2Img.init
    crop_region: (
        154,
        47,
        404,
        297,
    ) (tuple) len=4
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00,  7.18it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:11<00:00,  1.78it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:04<00:00,  4.06it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:04<00:00,  4.06it/s]
0: 640x448 3 faces, 110.6ms
Speed: 2.1ms preprocess, 110.6ms inference, 4.4ms postprocess per image at shape (1, 3, 640, 448)
modules\processing.py:1618 StableDiffusionProcessingImg2Img.init
    crop_region: (
        197,
        78,
        349,
        (
            197,
            78,
            349,
            234,
        ),
    ) (tuple) len=4
*** Error running postprocess_image: D:\stable-diffusion-webui\extensions\adetailer\scripts\!adetailer.py
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui\modules\scripts.py", line 897, in postprocess_image
        script.postprocess_image(p, pp, *script_args)
      File "D:\stable-diffusion-webui\extensions\adetailer\adetailer\traceback.py", line 159, in wrapper
        raise error from None
    TypeError:
    ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
    │                                                   System info                                                    │
    │ ┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ │
    │ ┃             ┃ Value                                                                                          ┃ │
    │ ┡━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │
    │ │    Platform │ Windows-10-10.0.22631-SP0                                                                      │ │
    │ │      Python │ 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:40:08) [MSC v.1938 64 bit (AMD64)]  │ │
    │ │     Version │ v1.9.2-113-g97fd7b24                                                                           │ │
    │ │      Commit │ 97fd7b2485f7c42757b622040ac880f66ff1dc43                                                       │ │
    │ │ Commandline │ ['launch.py', '--xformers', '--api']                                                           │ │
    │ │   Libraries │ {'torch': '2.2.2+cu121', 'torchvision': '0.17.2+cu121', 'ultralytics': '8.2.2', 'mediapipe':   │ │
    │ │             │ '0.10.11'}                                                                                     │ │
    │ └─────────────┴────────────────────────────────────────────────────────────────────────────────────────────────┘ │
    │                                                      Inputs                                                      │
    │ ┏━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ │
    │ ┃                 ┃ Value                                                                                      ┃ │
    │ ┡━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │
    │ │          prompt │ masterpiece, best quality, 1girl, __woman_clothes__, __places__, large breasts,            │ │
    │ │                 │ <lora:add_detail:0.5>                                                                      │ │
    │ │ negative_prompt │ (worst quality, low quality:1.1), text, title, logo, signature, (EasyNegativeV2:0.7),      │ │
    │ │                 │ (negative_hand-neg:0.7)                                                                    │ │
    │ │          n_iter │ 1                                                                                          │ │
    │ │      batch_size │ 2                                                                                          │ │
    │ │           width │ 512                                                                                        │ │
    │ │          height │ 768                                                                                        │ │
    │ │    sampler_name │ DPM++ 2M                                                                                   │ │
    │ │       enable_hr │ False                                                                                      │ │
    │ │     hr_upscaler │ Latent                                                                                     │ │
    │ │      checkpoint │ Counterfeit-V3.0_fp16-no-ema.safetensors [5998292c04]                                      │ │
    │ │             vae │ kl-f8-anime2.safetensors                                                                   │ │
    │ │            unet │ Automatic                                                                                  │ │
    │ └─────────────────┴────────────────────────────────────────────────────────────────────────────────────────────┘ │
    │                 ADetailer                                                                                        │
    │ ┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓                                                                        │
    │ ┃                     ┃ Value           ┃                                                                        │
    │ ┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩                                                                        │
    │ │             version │ 24.4.2          │                                                                        │
    │ │            ad_model │ face_yolov8s.pt │                                                                        │
    │ │           ad_prompt │                 │                                                                        │
    │ │  ad_negative_prompt │                 │                                                                        │
    │ │ ad_controlnet_model │ None            │                                                                        │
    │ │              is_api │ False           │                                                                        │
    │ └─────────────────────┴─────────────────┘                                                                        │
    │ ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮             │
    │ │ D:\stable-diffusion-webui\extensions\adetailer\adetailer\traceback.py:139 in wrapper             │             │
    │ │                                                                                                  │             │
    │ │   138 │   │   try:                                                                               │             │
    │ │ ❱ 139 │   │   │   return func(*args, **kwargs)                                                   │             │
    │ │   140 │   │   except Exception as e:                                                             │             │
    │ │                                                                                                  │             │
    │ │ D:\stable-diffusion-webui\extensions\adetailer\scripts\!adetailer.py:805 in postprocess_image    │             │
    │ │                                                                                                  │             │
    │ │    804 │   │   │   │   │   continue                                                              │             │
    │ │ ❱  805 │   │   │   │   is_processed |= self._postprocess_image_inner(p, pp, args, n=n)           │             │
    │ │    806                                                                                           │             │
    │ │                                                                                                  │             │
    │ │ D:\stable-diffusion-webui\extensions\adetailer\scripts\!adetailer.py:766 in                      │             │
    │ │ _postprocess_image_inner                                                                         │             │
    │ │                                                                                                  │             │
    │ │    765 │   │   │   try:                                                                          │             │
    │ │ ❱  766 │   │   │   │   processed = process_images(p2)                                            │             │
    │ │    767 │   │   │   except NansException as e:                                                    │             │
    │ │                                                                                                  │             │
    │ │ D:\stable-diffusion-webui\modules\processing.py:845 in process_images                            │             │
    │ │                                                                                                  │             │
    │ │    844 │   │                                                                                     │             │
    │ │ ❱  845 │   │   res = process_images_inner(p)                                                     │             │
    │ │    846                                                                                           │             │
    │ │                                                                                                  │             │
    │ │ D:\stable-diffusion-webui\modules\processing.py:915 in process_images_inner                      │             │
    │ │                                                                                                  │             │
    │ │    914 │   │   with devices.autocast():                                                          │             │
    │ │ ❱  915 │   │   │   p.init(p.all_prompts, p.all_seeds, p.all_subseeds)                            │             │
    │ │    916                                                                                           │             │
    │ │                                                                                                  │             │
    │ │ D:\stable-diffusion-webui\modules\processing.py:1621 in init                                     │             │
    │ │                                                                                                  │             │
    │ │   1620 │   │   │   │   if crop_region:                                                           │             │
    │ │ ❱ 1621 │   │   │   │   │   crop_region = masking.expand_crop_region(crop_region, self.width, se  │             │
    │ │   1622 │   │   │   │   │   x1, y1, x2, y2 = crop_region                                          │             │
    │ │                                                                                                  │             │
    │ │ D:\stable-diffusion-webui\modules\masking.py:45 in expand_crop_region                            │             │
    │ │                                                                                                  │             │
    │ │   44 │                                                                                           │             │
    │ │ ❱ 45 │   ratio_crop_region = (x2 - x1) / (y2 - y1)                                               │             │
    │ │   46 │   ratio_processing = processing_width / processing_height                                 │             │
    │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯             │
    │ TypeError: unsupported operand type(s) for -: 'tuple' and 'int'
    ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯


---

0: 640x448 1 face, 9.5ms
Speed: 1.0ms preprocess, 9.5ms inference, 2.0ms postprocess per image at shape (1, 3, 640, 448)
modules\processing.py:1618 StableDiffusionProcessingImg2Img.init
    crop_region: (
        213,
        111,
        314,
        (
            213,
            111,
            314,
            209,
        ),
    ) (tuple) len=4
*** Error running postprocess_image: D:\stable-diffusion-webui\extensions\adetailer\scripts\!adetailer.py
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui\modules\scripts.py", line 897, in postprocess_image
        script.postprocess_image(p, pp, *script_args)
      File "D:\stable-diffusion-webui\extensions\adetailer\adetailer\traceback.py", line 159, in wrapper
        raise error from None
    TypeError:
    ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
    │                                                   System info                                                    │
    │ ┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ │
    │ ┃             ┃ Value                                                                                          ┃ │
    │ ┡━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │
    │ │    Platform │ Windows-10-10.0.22631-SP0                                                                      │ │
    │ │      Python │ 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:40:08) [MSC v.1938 64 bit (AMD64)]  │ │
    │ │     Version │ v1.9.2-113-g97fd7b24                                                                           │ │
    │ │      Commit │ 97fd7b2485f7c42757b622040ac880f66ff1dc43                                                       │ │
    │ │ Commandline │ ['launch.py', '--xformers', '--api']                                                           │ │
    │ │   Libraries │ {'torch': '2.2.2+cu121', 'torchvision': '0.17.2+cu121', 'ultralytics': '8.2.2', 'mediapipe':   │ │
    │ │             │ '0.10.11'}                                                                                     │ │
    │ └─────────────┴────────────────────────────────────────────────────────────────────────────────────────────────┘ │
    │                                                      Inputs                                                      │
    │ ┏━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ │
    │ ┃                 ┃ Value                                                                                      ┃ │
    │ ┡━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │
    │ │          prompt │ masterpiece, best quality, 1girl, __woman_clothes__, __places__, large breasts,            │ │
    │ │                 │ <lora:add_detail:0.5>                                                                      │ │
    │ │ negative_prompt │ (worst quality, low quality:1.1), text, title, logo, signature, (EasyNegativeV2:0.7),      │ │
    │ │                 │ (negative_hand-neg:0.7)                                                                    │ │
    │ │          n_iter │ 1                                                                                          │ │
    │ │      batch_size │ 2                                                                                          │ │
    │ │           width │ 512                                                                                        │ │
    │ │          height │ 768                                                                                        │ │
    │ │    sampler_name │ DPM++ 2M                                                                                   │ │
    │ │       enable_hr │ False                                                                                      │ │
    │ │     hr_upscaler │ Latent                                                                                     │ │
    │ │      checkpoint │ Counterfeit-V3.0_fp16-no-ema.safetensors [5998292c04]                                      │ │
    │ │             vae │ kl-f8-anime2.safetensors                                                                   │ │
    │ │            unet │ Automatic                                                                                  │ │
    │ └─────────────────┴────────────────────────────────────────────────────────────────────────────────────────────┘ │
    │                 ADetailer                                                                                        │
    │ ┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓                                                                        │
    │ ┃                     ┃ Value           ┃                                                                        │
    │ ┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩                                                                        │
    │ │             version │ 24.4.2          │                                                                        │
    │ │            ad_model │ face_yolov8s.pt │                                                                        │
    │ │           ad_prompt │                 │                                                                        │
    │ │  ad_negative_prompt │                 │                                                                        │
    │ │ ad_controlnet_model │ None            │                                                                        │
    │ │              is_api │ False           │                                                                        │
    │ └─────────────────────┴─────────────────┘                                                                        │
    │ ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮             │
    │ │ D:\stable-diffusion-webui\extensions\adetailer\adetailer\traceback.py:139 in wrapper             │             │
    │ │                                                                                                  │             │
    │ │   138 │   │   try:                                                                               │             │
    │ │ ❱ 139 │   │   │   return func(*args, **kwargs)                                                   │             │
    │ │   140 │   │   except Exception as e:                                                             │             │
    │ │                                                                                                  │             │
    │ │ D:\stable-diffusion-webui\extensions\adetailer\scripts\!adetailer.py:805 in postprocess_image    │             │
    │ │                                                                                                  │             │
    │ │    804 │   │   │   │   │   continue                                                              │             │
    │ │ ❱  805 │   │   │   │   is_processed |= self._postprocess_image_inner(p, pp, args, n=n)           │             │
    │ │    806                                                                                           │             │
    │ │                                                                                                  │             │
    │ │ D:\stable-diffusion-webui\extensions\adetailer\scripts\!adetailer.py:766 in                      │             │
    │ │ _postprocess_image_inner                                                                         │             │
    │ │                                                                                                  │             │
    │ │    765 │   │   │   try:                                                                          │             │
    │ │ ❱  766 │   │   │   │   processed = process_images(p2)                                            │             │
    │ │    767 │   │   │   except NansException as e:                                                    │             │
    │ │                                                                                                  │             │
    │ │ D:\stable-diffusion-webui\modules\processing.py:845 in process_images                            │             │
    │ │                                                                                                  │             │
    │ │    844 │   │                                                                                     │             │
    │ │ ❱  845 │   │   res = process_images_inner(p)                                                     │             │
    │ │    846                                                                                           │             │
    │ │                                                                                                  │             │
    │ │ D:\stable-diffusion-webui\modules\processing.py:915 in process_images_inner                      │             │
    │ │                                                                                                  │             │
    │ │    914 │   │   with devices.autocast():                                                          │             │
    │ │ ❱  915 │   │   │   p.init(p.all_prompts, p.all_seeds, p.all_subseeds)                            │             │
    │ │    916                                                                                           │             │
    │ │                                                                                                  │             │
    │ │ D:\stable-diffusion-webui\modules\processing.py:1621 in init                                     │             │
    │ │                                                                                                  │             │
    │ │   1620 │   │   │   │   if crop_region:                                                           │             │
    │ │ ❱ 1621 │   │   │   │   │   crop_region = masking.expand_crop_region(crop_region, self.width, se  │             │
    │ │   1622 │   │   │   │   │   x1, y1, x2, y2 = crop_region                                          │             │
    │ │                                                                                                  │             │
    │ │ D:\stable-diffusion-webui\modules\masking.py:45 in expand_crop_region                            │             │
    │ │                                                                                                  │             │
    │ │   44 │                                                                                           │             │
    │ │ ❱ 45 │   ratio_crop_region = (x2 - x1) / (y2 - y1)                                               │             │
    │ │   46 │   ratio_processing = processing_width / processing_height                                 │             │
    │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯             │
    │ TypeError: unsupported operand type(s) for -: 'tuple' and 'int'
    ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯


---
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:07<00:00,  2.51it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:05<00:00,  3.98it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:04<00:00,  4.11it/s]
0: 640x448 1 face, 111.8ms
Speed: 3.4ms preprocess, 111.8ms inference, 0.0ms postprocess per image at shape (1, 3, 640, 448)
modules\processing.py:1618 StableDiffusionProcessingImg2Img.init
    crop_region: (
        155,
        39,
        397,
        272,
    ) (tuple) len=4
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00,  6.50it/s]

0: 640x448 1 face, 11.2ms
Speed: 2.0ms preprocess, 11.2ms inference, 2.1ms postprocess per image at shape (1, 3, 640, 448)
modules\processing.py:1618 StableDiffusionProcessingImg2Img.init
    crop_region: (
        123,
        70,
        380,
        325,
    ) (tuple) len=4
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00,  7.16it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:10<00:00,  1.87it/s]
modules\processing.py:1618 StableDiffusionProcessingImg2Img.init
    crop_region: (
        178,
        146,
        321,
        (
            178,
            146,
            321,
            269,
        ),
    ) (tuple) len=4
*** Error completing request
*** Arguments: ('task(emtu1onp49wets7)', <gradio.routes.Request object at 0x00000221BD1766E0>, 2, 'masterpiece, best quality, 1girl, __woman_clothes__, __places__, large breasts, <lora:add_detail:0.5>', '(worst quality, low quality:1.1), text, title, logo, signature, (EasyNegativeV2:0.7), (negative_hand-neg:0.7)', [], None, None, {'image': <PIL.Image.Image image mode=RGBA size=512x768 at 0x222A9D31A50>, 'mask': <PIL.Image.Image image mode=RGB size=512x768 at 0x222A9D311E0>}, None, None, None, None, 4, 0, 1, 1, 1, 7.5, 1.5, 0.75, 0.0, 768, 512, 1, 0, 1, 0, 0, '', '', '', [], False, [], '', 0, 20, 'DPM++ 2M', 'Automatic', False, 1, 0.5, 4, 0, 0.5, 2, -1, False, -1, 0, 0, 0, False, '', 0.8, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=True, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=True, hr_option='Both', 
save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=True, hr_option='Both', save_detected_map=True, advanced_weighting=None), '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, '<p style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type</p>', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\stable-diffusion-webui\modules\img2img.py", line 232, in img2img
        processed = process_images(p)
      File "D:\stable-diffusion-webui\modules\processing.py", line 845, in process_images
        res = process_images_inner(p)
      File "D:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "D:\stable-diffusion-webui\modules\processing.py", line 915, in process_images_inner
        p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
      File "D:\stable-diffusion-webui\modules\processing.py", line 1621, in init
        crop_region = masking.expand_crop_region(crop_region, self.width, self.height, mask.width, mask.height)
      File "D:\stable-diffusion-webui\modules\masking.py", line 45, in expand_crop_region
        ratio_crop_region = (x2 - x1) / (y2 - y1)
    TypeError: unsupported operand type(s) for -: 'tuple' and 'int'

---
modules\processing.py:1618 StableDiffusionProcessingImg2Img.init
    crop_region: (
        178,
        146,
        321,
        (
            178,
            146,
            321,
            269,
        ),
    ) (tuple) len=4
*** Error completing request
*** Arguments: ('task(76p88mz5fyym2hp)', <gradio.routes.Request object at 0x00000221AC63C2B0>, 2, 'masterpiece, best quality, 1girl, __woman_clothes__, __places__, large breasts, <lora:add_detail:0.5>', '(worst quality, low quality:1.1), text, title, logo, signature, (EasyNegativeV2:0.7), (negative_hand-neg:0.7)', [], None, None, {'image': <PIL.Image.Image image mode=RGBA size=512x768 at 0x221ACADF820>, 'mask': <PIL.Image.Image image mode=RGB size=512x768 at 0x221ACC862C0>}, None, None, None, None, 4, 0, 1, 1, 1, 7.5, 1.5, 0.75, 0.0, 768, 512, 1, 0, 1, 0, 0, '', '', '', [], False, [], '', 0, 20, 'DPM++ 2M', 'Automatic', False, 1, 0.5, 4, 0, 0.5, 2, -1, False, -1, 0, 0, 0, False, '', 0.8, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=True, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=True, hr_option='Both', 
save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=True, hr_option='Both', save_detected_map=True, advanced_weighting=None), '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, '<p style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type</p>', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\stable-diffusion-webui\modules\img2img.py", line 232, in img2img
        processed = process_images(p)
      File "D:\stable-diffusion-webui\modules\processing.py", line 845, in process_images
        res = process_images_inner(p)
      File "D:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "D:\stable-diffusion-webui\modules\processing.py", line 915, in process_images_inner
        p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
      File "D:\stable-diffusion-webui\modules\processing.py", line 1621, in init
        crop_region = masking.expand_crop_region(crop_region, self.width, self.height, mask.width, mask.height)
      File "D:\stable-diffusion-webui\modules\masking.py", line 45, in expand_crop_region
        ratio_crop_region = (x2 - x1) / (y2 - y1)
    TypeError: unsupported operand type(s) for -: 'tuple' and 'int'

---

Additional information

(attached screenshot: mspaint_z0IlOVQ2Rh)

Would this interpretation from Pylance be helpful?

Currently, Pylance infers the return type of the get_crop_region_v2 function as either

    tuple[int, int, int, int]

or

    tuple[int, int, int, tuple[...]]

This happens because the conditional (ternary) expression binds more tightly than the comma, so it applies only to the last element of the returned tuple:

    return max(x1 - pad, 0), max(y1 - pad, 0), min(x2 + pad, mask.size[0]), min(y2 + pad, mask.size[1]) if pad else box
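
A minimal standalone snippet (hypothetical values, not webui code) shows the same precedence behavior:

    >>> cond = False
    >>> 1, 2, 3 if cond else (7, 8, 9)
    (1, 2, (7, 8, 9))
    >>> cond = True
    >>> 1, 2, 3 if cond else (7, 8, 9)
    (1, 2, 3)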

In the buggy return statement, because the value is not enclosed in parentheses, the conditional expression attaches to the last element rather than to the tuple as a whole.

So if pad is truthy (pad != 0), the return value is

    max(x1 - pad, 0), max(y1 - pad, 0), min(x2 + pad, mask.size[0]), min(y2 + pad, mask.size[1])

but if pad is falsy (pad == 0), it becomes

    max(x1 - pad, 0), max(y1 - pad, 0), min(x2 + pad, mask.size[0]), box

The caller then unpacks this four-element tuple:

    x1, y1, x2, y2 = crop_region

    ratio_crop_region = (x2 - x1) / (y2 - y1)

so y2 receives the entire box tuple, and the subtraction y2 - y1 raises the TypeError.
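
To make this concrete, here is a minimal self-contained sketch that mirrors the buggy return statement; the box and mask-size values are made up for illustration (they match the shape of the crop_region dumps in the log above):

    # Standalone reproduction sketch; only the return expression mirrors the webui code.
    def get_crop_region_v2_buggy(box, mask_size, pad=0):
        x1, y1, x2, y2 = box
        # Missing parentheses: the conditional governs only the last element.
        return max(x1 - pad, 0), max(y1 - pad, 0), min(x2 + pad, mask_size[0]), min(y2 + pad, mask_size[1]) if pad else box

    crop_region = get_crop_region_v2_buggy((197, 78, 349, 234), (512, 768), pad=0)
    print(crop_region)  # (197, 78, 349, (197, 78, 349, 234))

    x1, y1, x2, y2 = crop_region
    ratio = (x2 - x1) / (y2 - y1)  # TypeError: unsupported operand type(s) for -: 'tuple' and 'int'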

Bing-su added the bug-report (Report of a bug, yet to be confirmed) label Apr 22, 2024
w-e-w self-assigned this Apr 22, 2024
w-e-w mentioned this issue Apr 22, 2024
w-e-w (Collaborator) commented Apr 22, 2024

Thanks, made a fix. I'm asking auto for a 1.9.3 release.

w-e-w (Collaborator) commented Apr 22, 2024

Thanks again. Fixed in master, 1.9.3.
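
For reference, a fix would presumably parenthesize the tuple so the conditional chooses between the whole padded region and the original box. A sketch of the idea, not necessarily the exact committed change:

    # Parentheses make the conditional apply to the entire four-element tuple.
    return (max(x1 - pad, 0), max(y1 - pad, 0), min(x2 + pad, mask.size[0]), min(y2 + pad, mask.size[1])) if pad else box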

w-e-w closed this as completed Apr 22, 2024