
too many values to unpack (expected 3) #1437

Closed
Adel1525 opened this issue May 26, 2023 · 7 comments

@Adel1525

When I try to use ControlNet in img2img after the latest update, it doesn't work at all and always gives me this error: "too many values to unpack (expected 3)".

Adel1525 reopened this May 26, 2023
@ljleb (Collaborator) commented May 26, 2023

Please share the entire stack trace of the error. Ideally, you should not bypass the bug report format; doing so makes it harder for maintainers to understand what the actual problem is.

@Semanual

@ljleb since I have the exact same error, I'm posting my logs here:

Arguments: ('task(9mdhnpg0j7atr0c)', 'an overjoyed girl in a black leotard, pink coat, pink skirt and cat ears is standing in front of a lightning background with her hands up, green eyes, black collar, Chizuko Yoshida, an anime drawing, shock art, lightning', 'easynegative', [], 20, 0, False, False, 1, 4, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, <controlnet.py.UiControlNetUnit object at 0x7fc7e7db1460>, <controlnet.py.UiControlNetUnit object at 0x7fc7dc8032e0>, <controlnet.py.UiControlNetUnit object at 0x7fc7e827b2e0>, <controlnet.py.UiControlNetUnit object at 0x7fc7e7da4040>, <controlnet.py.UiControlNetUnit object at 0x7fc7e7db1bb0>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, None, None, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
  File "/home/semanual/ssd/stable-diffusion-webui/modules/call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "/home/semanual/ssd/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/home/semanual/ssd/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "/home/semanual/ssd/stable-diffusion-webui/modules/processing.py", line 526, in process_images
    res = process_images_inner(p)
  File "/home/semanual/ssd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "/home/semanual/ssd/stable-diffusion-webui/modules/processing.py", line 680, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "/home/semanual/ssd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 269, in process_sample
    return process.sample_before_CN_hack(*args, **kwargs)
  File "/home/semanual/ssd/stable-diffusion-webui/modules/processing.py", line 907, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/home/semanual/ssd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 377, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/home/semanual/ssd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 251, in launch_sampling
    return func()
  File "/home/semanual/ssd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 377, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/home/semanual/ssd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/semanual/ssd/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/home/semanual/ssd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/semanual/ssd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 135, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict([cond_in], image_cond_in))
  File "/home/semanual/ssd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/semanual/ssd/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "/home/semanual/ssd/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "/home/semanual/ssd/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "/home/semanual/ssd/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "/home/semanual/ssd/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "/home/semanual/ssd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/semanual/ssd/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "/home/semanual/ssd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/semanual/ssd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 535, in forward_webui
    return forward(*args, **kwargs)
  File "/home/semanual/ssd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 374, in forward
    control = param.control_model(x=x_in, hint=hint, timesteps=timesteps, context=context)
  File "/home/semanual/ssd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/semanual/ssd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/cldm.py", line 99, in forward
    return self.control_model(*args, **kwargs)
  File "/home/semanual/ssd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/semanual/ssd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/cldm.py", line 358, in forward
  File "/home/semanual/ssd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/cldm.py", line 344, in align
    b, c, h1, w1 = hint.shape
ValueError: too many values to unpack (expected 3)
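
For reference, the failing unpack is of hint.shape. A minimal sketch of the mismatch (the batched hint tensor and the 3-value unpack are assumptions about what the running code expected; the 4-value line is what the traceback displays):

    import torch

    hint = torch.zeros(1, 3, 512, 512)  # batched control hint: (B, C, H, W)

    # An unpack expecting an unbatched (C, H, W) hint fails on a 4-D tensor:
    c, h, w = hint.shape  # ValueError: too many values to unpack (expected 3)

    # The line actually shown in the traceback unpacks four values and succeeds:
    b, c, h1, w1 = hint.shape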

@an0thertruth

Same problem here:

Pixel Perfect Mode Enabled.
resize_mode = ResizeMode.INNER_FIT
raw_H = 1200
raw_W = 800
target_H = 1200
target_W = 800
estimation = 800.0
preprocessor resolution = 800
Loading model from cache: control_v11p_sd15_openpose [cab727d4]
Loading preprocessor: openpose_full
Pixel Perfect Mode Enabled.
resize_mode = ResizeMode.INNER_FIT
raw_H = 1248
raw_W = 976
target_H = 1200
target_W = 800
estimation = 938.4615384615385
preprocessor resolution = 938
0%| | 0/30 [00:00<?, ?it/s]ControlNet used torch.float16 VAE to encode torch.Size([1, 4, 150, 100]).
0%| | 0/30 [00:00<?, ?it/s]
Error completing request
Arguments: ('task(pw0n1jxftrzv5y3)', '(((masterpiece,best quality))),cyberpunk clothes,(1girl),pink eyes,3D,1girl,long hair,small breasts, hoodie, mini skirt, pink hair,upper body,side-tie,tight,outdoors,midriff,(looking at viewer, smile),Potrait,anime skiny girl with headphones,digital cyberpunk anime art, cyberpunk anime girl, digital cyberpunk anime art,cyberpunk city background, nightcore, anime moe artstyle, anime girl of the future, tech shoes,ultra-detailed, absurdres, solo, volumetric lighting, best quality, intricate details, sharp focus, hyper detailed lora:3DMM_V10:1', '(EasyNegative), (badhandv4), blur, blurry, blurry image, extra legs, extra hands, (((bad hands))), (((bad fingers))), (((extra fingers))), conjoined bodies', [], 30, 16, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 1200, 800, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, <controlnet.py.UiControlNetUnit object at 0x0000024169DA87F0>, <controlnet.py.UiControlNetUnit object at 0x0000024169DA82B0>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
  File "C:\Stable Diffusion\STABLEDIFUSSION\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "C:\Stable Diffusion\STABLEDIFUSSION\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\Stable Diffusion\STABLEDIFUSSION\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "C:\Stable Diffusion\STABLEDIFUSSION\modules\processing.py", line 526, in process_images
    res = process_images_inner(p)
  File "C:\Stable Diffusion\STABLEDIFUSSION\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "C:\Stable Diffusion\STABLEDIFUSSION\modules\processing.py", line 680, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "C:\Stable Diffusion\STABLEDIFUSSION\extensions\sd-webui-controlnet\scripts\hook.py", line 269, in process_sample
    return process.sample_before_CN_hack(*args, **kwargs)
  File "C:\Stable Diffusion\STABLEDIFUSSION\modules\processing.py", line 907, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "C:\Stable Diffusion\STABLEDIFUSSION\modules\sd_samplers_kdiffusion.py", line 377, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "C:\Stable Diffusion\STABLEDIFUSSION\modules\sd_samplers_kdiffusion.py", line 251, in launch_sampling
    return func()
  File "C:\Stable Diffusion\STABLEDIFUSSION\modules\sd_samplers_kdiffusion.py", line 377, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "C:\Stable Diffusion\STABLEDIFUSSION\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Stable Diffusion\STABLEDIFUSSION\repositories\k-diffusion\k_diffusion\sampling.py", line 553, in sample_dpmpp_sde
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\Stable Diffusion\STABLEDIFUSSION\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Stable Diffusion\STABLEDIFUSSION\modules\sd_samplers_kdiffusion.py", line 154, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
  File "C:\Stable Diffusion\STABLEDIFUSSION\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Stable Diffusion\STABLEDIFUSSION\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "C:\Stable Diffusion\STABLEDIFUSSION\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "C:\Stable Diffusion\STABLEDIFUSSION\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "C:\Stable Diffusion\STABLEDIFUSSION\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "C:\Stable Diffusion\STABLEDIFUSSION\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\Stable Diffusion\STABLEDIFUSSION\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Stable Diffusion\STABLEDIFUSSION\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "C:\Stable Diffusion\STABLEDIFUSSION\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Stable Diffusion\STABLEDIFUSSION\extensions\sd-webui-controlnet\scripts\hook.py", line 535, in forward_webui
    return forward(*args, **kwargs)
  File "C:\Stable Diffusion\STABLEDIFUSSION\extensions\sd-webui-controlnet\scripts\hook.py", line 374, in forward
    control = param.control_model(x=x_in, hint=hint, timesteps=timesteps, context=context)
  File "C:\Stable Diffusion\STABLEDIFUSSION\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Stable Diffusion\STABLEDIFUSSION\extensions\sd-webui-controlnet\scripts\cldm.py", line 99, in forward
    return self.control_model(*args, **kwargs)
  File "C:\Stable Diffusion\STABLEDIFUSSION\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Stable Diffusion\STABLEDIFUSSION\extensions\sd-webui-controlnet\scripts\cldm.py", line 358, in forward
  File "C:\Stable Diffusion\STABLEDIFUSSION\extensions\sd-webui-controlnet\scripts\cldm.py", line 344, in align
    b, c, h1, w1 = hint.shape
ValueError: too many values to unpack (expected 3)
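
(Side note: the "Pixel Perfect" estimation values in the log above are reproducible with the following computation; this is a sketch reverse-engineered from the logged numbers, and the function name is hypothetical, not the extension's actual code.)

    def pixel_perfect_estimation(raw_H, raw_W, target_H, target_W, inner_fit=True):
        # Per-axis scale factors needed to map the raw image onto the target size.
        k_h = target_H / raw_H
        k_w = target_W / raw_W
        # INNER_FIT appears to use the larger factor, applied to the shorter raw side.
        k = max(k_h, k_w) if inner_fit else min(k_h, k_w)
        return k * min(raw_H, raw_W)

    print(pixel_perfect_estimation(1200, 800, 1200, 800))  # 800.0
    print(pixel_perfect_estimation(1248, 976, 1200, 800))  # 938.4615384615385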

@Semanual

Seems like I solved it by generating a random image with this option enabled and both the OpenPose preprocessor and model selected:
[image]
It downloaded something from lllyasviel's GitHub, and the error didn't happen again.

@an0thertruth

@Semanual I tried the same and my problem was also resolved, thank you very much!

@evoluder

I ran into this too; generating with openpose did indeed download some new packages, but the problem persists.

@lllyasviel (Collaborator)

When updating to the latest version (1.1.197), do not forget to restart the terminal.
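
This also explains the confusing detail in the tracebacks above: the displayed source line already unpacks four values (b, c, h1, w1 = hint.shape), yet the error complains about three. Python renders traceback source lines from the file currently on disk, so an updated extension running inside a stale process produces exactly this mismatch. A self-contained illustration (the stale_demo module name is hypothetical):

    import pathlib, sys, tempfile, traceback

    tmp = pathlib.Path(tempfile.mkdtemp())
    (tmp / "stale_demo.py").write_text("def f(shape):\n    c, h, w = shape\n")
    sys.path.insert(0, str(tmp))
    import stale_demo  # the old 3-value unpack is now loaded into the process

    # Simulate updating the file on disk without restarting the process:
    (tmp / "stale_demo.py").write_text("def f(shape):\n    b, c, h, w = shape\n")

    try:
        stale_demo.f((1, 3, 512, 512))  # still runs the old code, which expects 3 values
    except ValueError:
        # The printed source line shows the NEW 4-value unpack read from disk,
        # while the error message reflects the OLD code that actually ran.
        traceback.print_exc()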
