
[bug] Can't work with controlnet at the same time #73

Closed
JoeyLearnsToCode opened this issue May 28, 2023 · 12 comments

JoeyLearnsToCode commented May 28, 2023

While working with ControlNet, generation crashes when it reaches the Face Editor phase (which is the end of the whole job).
If I disable ControlNet, Face Editor works well, and vice versa: if I disable Face Editor, ControlNet works well.

I'll post the error message later.

@JoeyLearnsToCode
Author

Error running postprocess: D:\APPs\stable-diffusion\stable-diffusion-webui\extensions\sd-face-editor\scripts\face_editor_extension.py
Traceback (most recent call last):
File "D:\APPs\stable-diffusion\stable-diffusion-webui\modules\scripts.py", line 478, in postprocess
script.postprocess(p, processed, *script_args)
File "D:\APPs\stable-diffusion\stable-diffusion-webui\extensions\sd-face-editor\scripts\face_editor_extension.py", line 98, in postprocess
script.proc_images(mask_model, detection_model, o, res,
File "D:\APPs\stable-diffusion\stable-diffusion-webui\extensions\sd-face-editor\scripts\face_editor.py", line 321, in proc_images
proc = self.__proc_image(p, mask_model, detection_model,
File "D:\APPs\stable-diffusion\stable-diffusion-webui\extensions\sd-face-editor\scripts\face_editor.py", line 417, in __proc_image
proc = process_images(p)
File "D:\APPs\stable-diffusion\stable-diffusion-webui\modules\processing.py", line 611, in process_images
res = process_images_inner(p)
File "D:\APPs\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "D:\APPs\stable-diffusion\stable-diffusion-webui\modules\processing.py", line 729, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "D:\APPs\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 290, in process_sample
return process.sample_before_CN_hack(*args, **kwargs)
File "D:\APPs\stable-diffusion\stable-diffusion-webui\modules\processing.py", line 1262, in sample
samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
File "D:\APPs\stable-diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 356, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "D:\APPs\stable-diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 257, in launch_sampling
return func()
File "D:\APPs\stable-diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 356, in
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "D:\APPs\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\APPs\stable-diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "D:\APPs\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\APPs\stable-diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 169, in forward
devices.test_for_nans(x_out, "unet")
File "D:\APPs\stable-diffusion\stable-diffusion-webui\modules\devices.py", line 156, in test_for_nans
raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.

Total progress: 100%|██████████████████████████████████████████████████████████████████| 23/23 [00:24<00:00, 1.07s/it]
100%|██████████████████████████████████████████████████████████████████████████████████| 23/23 [00:05<00:00, 3.96it/s]
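
For reference, the error message itself names two workarounds: enable "Upcast cross attention layer to float32" under Settings > Stable Diffusion, or launch with --no-half. On a typical Windows install the flag goes into webui-user.bat (a sketch only; any COMMANDLINE_ARGS you already use may differ):

rem webui-user.bat -- hypothetical example, keep whatever other flags you already have
set COMMANDLINE_ARGS=--no-half

In this case the NaNs only show up when ControlNet and Face Editor are enabled together, so this would be a workaround rather than a root-cause fix.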

@JoeyLearnsToCode
Author

As you can see, the whole generation crashes at the Face Editor phase.

@ototadana
Owner

@JoeyLearnsToCode
Thanks for the detailed error information.

After updating ControlNet to the latest version, I was able to reproduce this error as well. I will investigate the cause and a solution.

@VenomSnake01

The problem I ran into when enabling CN and SD-Face-editor at the same time is not a crash, but that it simply does nothing: no optimized faces are output.

@ototadana
Owner

I have determined that it stopped working due to this commit, but I still do not understand why it no longer works properly 🤔

Mikubill/sd-webui-controlnet#1337

@ototadana
Owner

@VenomSnake01
Thanks!
It seems that, depending on the Web UI's startup options, it may fail silently without producing an error.

@ototadana
Owner

@JoeyLearnsToCode @VenomSnake01
The error has been fixed.
Please try it!

@JoeyLearnsToCode
Author

It doesn't crash anymore, but there is still a problem:
When I enable both CN and Face Editor and choose to generate only one image, several images are generated, and the saved image is wrong (not the one I expected).

(screenshot attached)

@ototadana
Owner

@JoeyLearnsToCode
Thanks for giving it a quick try!

This is the expected result.

  • The first image is before the Face Editor (not saved by default).
  • The third image is after the Face Editor has processed it (it is saved).
  • The other images are created by ControlNet; they are only displayed on the screen and are not saved, so you do not need to worry about them.

In some cases, it may be preferable to keep the image from before Face Editor modifies it. For such cases there is a "Save original image" option: if you enable it, the original (first) image is saved as well, so you can choose whichever you prefer.

(Since Face Editor works by the "zoom in on a face image and redraw it in detail" mechanism, it tends to generate more realistic faces, which can be counterproductive if you want an anime-like image. Note that when Face Editor processes the image, it uses the same checkpoint as the original image.)
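
To make that mechanism concrete, the overall flow is roughly crop, enlarge, redraw, and paste back. The sketch below is only an illustration of that idea, not the extension's actual code; redraw_face() is a hypothetical stand-in for the img2img pass with the current checkpoint:

# Rough illustration of the "zoom in on a face and redraw it in detail" idea.
# Not the Face Editor source; redraw_face() is a hypothetical stand-in for img2img.
from PIL import Image

def redraw_face(face: Image.Image) -> Image.Image:
    # Placeholder: in the extension this would be an img2img pass
    # using the checkpoint currently selected in the Web UI.
    return face

def edit_faces(image: Image.Image, face_boxes) -> Image.Image:
    result = image.copy()
    for left, top, right, bottom in face_boxes:
        face = result.crop((left, top, right, bottom))        # zoom in on the detected face
        enlarged = face.resize((512, 512))                     # enlarge so it can be redrawn in detail
        redrawn = redraw_face(enlarged)                        # redraw (img2img in the real extension)
        result.paste(redrawn.resize(face.size), (left, top))  # scale back and paste over the original
    return result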

@JoeyLearnsToCode
Author

JoeyLearnsToCode commented May 30, 2023

Oh, I see now; that sounds acceptable.
One more question though:

Note that when Face Editor processes the image, it uses the same checkpoint as the original image.

What if the original image is not an SD-generated image at all (like a real photo)? Which checkpoint will be used then?
I would expect it to be the currently chosen checkpoint; is that technically possible?

@ototadana
Owner

@JoeyLearnsToCode

Sorry, the wording was a little confusing.

A more accurate way to put it would be: "it uses the checkpoint currently selected in the Web UI as-is".

Note that when Face Editor processes the image, it uses the same checkpoint as the original image.

The intent of the above statement is as follows:

When using Face Editor on the txt2img tab, the image first created by txt2img + ControlNet and the image modified by Face Editor are both generated with the same checkpoint.

@JoeyLearnsToCode
Author

Now it's clear enough, thx :D
