"KeyError: 0" on img2img with any sampler other than DDIM when region prompt control is enabled #61

Closed
GodEmperor785 opened this issue Mar 24, 2023 · 12 comments


@GodEmperor785

I have a problem when trying to use region prompt control on the img2img tab with any sampler other than DDIM. I checked almost all of the others (Euler a, some DPM++ variants, Heun, LMS) and they all have this problem; region prompt control only works fine with DDIM.
I tried changing other parameters: sampling steps, CFG scale, denoising strength, different models, different VAEs, the new background/foreground setting in region prompt control, the starting image size, switching MultiDiffusion to Mixture of Diffusers, etc.
The error usually happens at around 10% of processing (the exact point seems to depend on denoising strength).
I also checked after restarting the webui and my PC, and on the newest commit (fb5acd0); the same problem still happens.

Here is the output from the console:
[Tiled Diffusion] upscaling image with R-ESRGAN 4x+ Anime6B...
Tile 1/40
Tile 2/40
...
Tile 40/40
Mixture of Diffusers hooked into DPM++ 2M Karras sampler. Tile size: 96x96, Tile batches: 24, Batch size: 1
[Tiled VAE]: input_size: torch.Size([1, 3, 1440, 3440]), tile_size: 1440, padding: 32
[Tiled VAE]: split to 1x3 = 3 tiles. Optimal tile size 1152x1376, original tile size 1440x1440
[Tiled VAE]: Fast mode enabled, estimating group norm parameters on 1440 x 602 image
[Tiled VAE]: Executing Encoder Task Queue: 100%|████████████████████████████████████| 273/273 [00:00<00:00, 365.22it/s]
[Tiled VAE]: Done in 1.167s, max VRAM alloc 4678.474 MB
0%| | 0/10 [00:08<?, ?it/s]
Error completing request
(progress bar residue: 10%|████▊ | 24/250 [00:04<00:38, 5.94it/s])
Arguments: ('task(c4ol7ztjr3rs6qa)', 0, 'highres, masterpiece, best quality, ultra-detailed 8k wallpaper, extremely clear, very clear, ultra-clear', 'lowres, pixelated, deformed, blur, nude, horns, fat, blurry, poorly drawn, conjoined, poorly drawn hands, extra limb, extra finger, floating limbs, disconnected limbs, bad anatomy, ((((mutated hands and fingers)))), disjoined, worst quality, low quality', [], <PIL.Image.Image image mode=RGBA size=1720x720 at 0x26816067B50>, None, None, None, None, None, None, 38, 15, 4, 0, 1, False, False, 1, 1, 12, 1.5, 0.25, -1.0, -1.0, 0, 0, 0, False, 720, 1720, 0, 0, 32, 0, '', '', '', [], 0, 0, 4, 512, 512, True, 'None', 'None', 0, 0, 0, 0, True, 'Mixture of Diffusers', False, False, 1024, 1024, 96, 96, 48, 1, 'R-ESRGAN 4x+ Anime6B', 2, False, True, True, True, 0.5395348837209298, 0.07499999999999991, 0.07267441860465121, 0.14583333333333318, 'beautiful evil mad young woman face, evil smile, red evil eyes, masterpiece, best quality', 'lowres, pixelated, deformed, blur, blurry, poorly drawn, worst quality, low quality', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, True, True, True, True, 0, 1440, 192, False, '', 0, <scripts.external_code.ControlNetUnit object at 0x0000026815FECDF0>, False, 1, 0.15, False, 'OUT', ['OUT'], 5, 0, 'Bilinear', False, 'Pooling Max', False, 'Lerp', '', '', False, '

    \n
  • CFG Scale should be 2 or lower.
  • \n
\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 1, 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, 50, '

Will upscale the image depending on the selected target size type

', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
Traceback (most recent call last):
File "E:\auto1111_sd_webui\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "E:\auto1111_sd_webui\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "E:\auto1111_sd_webui\stable-diffusion-webui\modules\img2img.py", line 171, in img2img
processed = process_images(p)
File "E:\auto1111_sd_webui\stable-diffusion-webui\modules\processing.py", line 486, in process_images
res = process_images_inner(p)
File "E:\auto1111_sd_webui\stable-diffusion-webui\modules\processing.py", line 636, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "E:\auto1111_sd_webui\stable-diffusion-webui\modules\processing.py", line 1054, in sample
samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
File "E:\auto1111_sd_webui\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 324, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "E:\auto1111_sd_webui\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 227, in launch_sampling
return func()
File "E:\auto1111_sd_webui\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 324, in
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "E:\auto1111_sd_webui\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "E:\auto1111_sd_webui\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "E:\auto1111_sd_webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "E:\auto1111_sd_webui\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 125, in forward
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond={"c_crossattn": [cond_in[a:b]], "c_concat": [image_cond_in[a:b]]})
File "E:\auto1111_sd_webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "E:\auto1111_sd_webui\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "E:\auto1111_sd_webui\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "E:\auto1111_sd_webui\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "E:\auto1111_sd_webui\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\methods\mixtureofdiffusers.py", line 157, in apply_model
x_tile_out = self.custom_apply_model(x_tile, t_in, c_in, bbox_id, bbox, bbox.prompt, bbox.neg_prompt)
File "E:\auto1111_sd_webui\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\methods\mixtureofdiffusers.py", line 82, in custom_apply_model
return self.kdiff_custom_forward(x_in, c_in, cond, uncond, bbox_id, bbox,
File "E:\auto1111_sd_webui\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\methods\abstractdiffusion.py", line 315, in kdiff_custom_forward
tensor = self.tensor[bbox_id]
KeyError: 0
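
For context on the failure: the last frame indexes a per-region conditioning cache, self.tensor, by bbox_id. The sketch below reconstructs that pattern; only the names tensor, bbox_id, and kdiff_custom_forward come from the traceback, the rest is a hypothetical illustration, not the extension's actual code.

# Hypothetical reconstruction of the failing pattern; only the names
# from the traceback are real, everything else is illustrative.
class RegionCacheSketch:
    def __init__(self):
        # per-step cache mapping bbox_id -> conditioning tensor for a region
        self.tensor = {}

    def prepare_step(self, cond_tensors):
        # expected to run before the tiles are denoised on each step
        for bbox_id, cond in enumerate(cond_tensors):
            self.tensor[bbox_id] = cond

    def kdiff_custom_forward(self, bbox_id):
        # if prepare_step() never ran, self.tensor is still {} and the
        # very first region lookup raises KeyError: 0
        return self.tensor[bbox_id]

That the missing key is 0, the very first region, suggests the cache was never populated at all for this step, rather than a single region being missed.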

Here is an example of my settings:
[screenshot: settings]

Is it expected that only DDIM works with region prompt control, or am I doing something wrong?

Also, it seems that without "Draw tiles as background (SLOW but save VRAM)" checked in region prompt control, the process finishes very quickly but the resulting image is mostly noise apart from the marked region. Should this setting always be used?

@pkuliyi2015
Owner

Thanks for your feedback.

As for the option question: if you don't enable tile drawing, your own regions need to fill the whole canvas.

As for the kdiff problem, I will fix it immediately.

@GodEmperor785
Author

Great, thanks for the quick response!

@pkuliyi2015
Owner

I've tried to fix it. Please give it a test.

@GodEmperor785
Author

Unfortunately the same issue still happens. I checked on the newest commit (5f0c449); the test was made with the same settings as before (I checked DPM++ 2M Karras and Euler a):
Traceback (most recent call last):
File "E:\auto1111_sd_webui\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "E:\auto1111_sd_webui\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "E:\auto1111_sd_webui\stable-diffusion-webui\modules\img2img.py", line 171, in img2img
processed = process_images(p)
File "E:\auto1111_sd_webui\stable-diffusion-webui\modules\processing.py", line 486, in process_images
res = process_images_inner(p)
File "E:\auto1111_sd_webui\stable-diffusion-webui\modules\processing.py", line 636, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "E:\auto1111_sd_webui\stable-diffusion-webui\modules\processing.py", line 1054, in sample
samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
File "E:\auto1111_sd_webui\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 324, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "E:\auto1111_sd_webui\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 227, in launch_sampling
return func()
File "E:\auto1111_sd_webui\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 324, in
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "E:\auto1111_sd_webui\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "E:\auto1111_sd_webui\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "E:\auto1111_sd_webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "E:\auto1111_sd_webui\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 125, in forward
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond={"c_crossattn": [cond_in[a:b]], "c_concat": [image_cond_in[a:b]]})
File "E:\auto1111_sd_webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "E:\auto1111_sd_webui\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "E:\auto1111_sd_webui\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "E:\auto1111_sd_webui\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "E:\auto1111_sd_webui\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\methods\mixtureofdiffusers.py", line 157, in apply_model
x_tile_out = self.custom_apply_model(x_tile, t_in, c_in, bbox_id, bbox, bbox.prompt, bbox.neg_prompt)
File "E:\auto1111_sd_webui\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\methods\mixtureofdiffusers.py", line 82, in custom_apply_model
return self.kdiff_custom_forward(x_in, c_in, cond, uncond, bbox_id, bbox,
File "E:\auto1111_sd_webui\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\methods\abstractdiffusion.py", line 315, in kdiff_custom_forward
tensor = self.tensor[bbox_id]
KeyError: 0

@pkuliyi2015
Owner

pkuliyi2015 commented Mar 24, 2023

Have you tried deleting the extension and reinstalling? This seems to be a cache error. Please also refresh the page.

@GodEmperor785
Author

I just tried that: I deleted this extension's directory (in extensions/), reinstalled, and restarted the webui after the reinstall. I also tried restarting the browser, but there is still "KeyError: 0". Is there anything else I need to do to clear the cache or reinstall?

@pkuliyi2015
Owner

pkuliyi2015 commented Mar 24, 2023

I'm checking in detail with your settings. Can you provide a screenshot of your checkpoint, prompt, and negative prompt as well?

Update: Do you use --lowvram or --medvram?

@GodEmperor785
Author

Here is the screenshot:
[screenshot]
Link to model checkpoint: https://civitai.com/models/14734/store-bought-gyoza-mix
This is v1.2 of that model. Also, the last 2 things in the negative prompt are embeddings, but they shouldn't matter, because I just tried without them and got the same error.
The exact model also shouldn't matter, as I just tried base SD 1.5 (sd-v1-5-pruned-emaonly, 4 GB) and got the error there as well.

@pkuliyi2015
Owner

Thank you very much. I successfully reproduced your problem with --medvram and will fix it immediately.
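
For anyone wondering why --medvram matters here: that flag makes the webui keep most model submodules in system RAM and move each one onto the GPU only for its own forward pass (modules/lowvram.py does this with forward pre-hooks). A simplified sketch of that offloading pattern, assuming the hook-based design but not reproducing the webui's exact code:

import torch.nn as nn

def send_to_gpu(module, _inputs):
    # forward pre-hook: pull this submodule onto the GPU just before it
    # runs (the real lowvram.py also moves the previous one back to CPU)
    module.to("cuda")

def setup_for_medvram_sketch(model: nn.Module, part_names):
    # simplified --medvram setup: keep everything on the CPU and let the
    # hooks bring submodules in on demand
    model.to("cpu")
    for name in part_names:
        getattr(model, name).register_forward_pre_hook(send_to_gpu)

Because submodules get shuffled between devices like this, state an extension attached during setup can end up out of sync with the model that actually runs, which could explain the empty self.tensor cache in the traceback.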

@GodEmperor785
Author

Yes, I'm using --medvram to be able to generate bigger images.

@pkuliyi2015
Owner

I spent a lot of time fixing this; please try again. This time it should work perfectly, I think.
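
(The thread doesn't show the fix commit itself; a common defensive pattern for this class of bug is to fill the cache lazily instead of assuming a one-time setup survived the --medvram device shuffling. A hypothetical sketch only, with build_cond_tensor standing in for whatever actually builds the conditioning:)

def kdiff_custom_forward(self, bbox_id, bbox):
    # hypothetical guard: rebuild the cache entry on demand instead of
    # assuming prepare_step() already ran for this sampling step
    if bbox_id not in self.tensor:
        self.tensor[bbox_id] = self.build_cond_tensor(bbox)  # hypothetical helper
    return self.tensor[bbox_id]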

@GodEmperor785
Author

It works now with the other samplers, thanks for fixing this!
I'll close the issue.
