This repository has been archived by the owner on May 21, 2023. It is now read-only.

The problem sometimes does not show photos #9

Closed
tranthai2k2 opened this issue Jan 25, 2023 · 17 comments

@tranthai2k2

tranthai2k2 commented Jan 25, 2023

Sorry if this is the wrong place to post.
I generate images with LoRA, and it works great; your notebook is great.
However, I don't know why the image sometimes does not show: Colab finishes generating it, but nothing appears in the web UI. I hope you can help me.
https://colab.research.google.com/drive/1iwLtfEeoUTTVFZ08iVkvJ5jBhwKcspty?usp=sharing
I only tweaked the notebook to make it easier for myself, but the picture doesn't show up.

@misobarisic
Owner

The issue seems to be a temporary regression in gradio or the webui itself. Could you try changing the last line of run_webui to this:

!COMMANDLINE_ARGS="{other_args} {vae_args} {vram} --gradio-queue --gradio-auth {gradio_username}:{gradio_password}" REQS_FILE="requirements.txt" python launch.py

@tranthai2k2
Author

(screenshot)

@misobarisic
Owner

This works for me

def run_webui():
  #@markdown Choose the vae you want
  vae = "Anime (Anything 4)" #@param ["Anime (Anything 3)", "Anime (Anything 4)", "Anime (Waifu Diffusion 1.4)", "Stable Diffusion", "None"]
  vae_args = ""  # default, so "None" leaves --vae-path unset

  if vae == "Anime (Anything 3)":
    !wget -c https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt -O {root_dir}/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0.vae.pt
    vae_args = "--vae-path " + root_dir + "/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0.vae.pt"
  elif vae == "Anime (Anything 4)":
    !wget -c https://huggingface.co/andite/anything-v4.0/resolve/main/anything-v4.0.vae.pt -O {root_dir}/stable-diffusion-webui/models/Stable-diffusion/anything-v4.0.vae.pt
    vae_args = "--vae-path " + root_dir + "/stable-diffusion-webui/models/Stable-diffusion/anything-v4.0.vae.pt"
  elif vae == "Anime (Waifu Diffusion 1.4)":
    !wget -c https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/vae/kl-f8-anime.ckpt -O {root_dir}/stable-diffusion-webui/models/Stable-diffusion/kl-f8-anime.vae.pt
    vae_args = "--vae-path " + root_dir + "/stable-diffusion-webui/models/Stable-diffusion/kl-f8-anime.vae.pt"
  elif vae == "Stable Diffusion":
    !wget -c https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt -O {root_dir}/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5.vae.pt
    vae_args = "--vae-path " + root_dir + "/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5.vae.pt"

  %cd {root_dir}/stable-diffusion-webui/
  !COMMANDLINE_ARGS="{other_args} {vae_args} {vram} --gradio-queue --gradio-auth {gradio_username}:{gradio_password}" REQS_FILE="requirements.txt" python launch.py
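As an aside, the repeated if-branches in the cell above could be collapsed into a lookup table. A minimal sketch, reusing the same URLs and target filenames; the `VAE_TABLE` and `vae_args_for` names are illustrative, not part of the notebook:

```python
# Sketch: map each VAE choice to (download URL, target filename).
# "None" is simply absent from the table and yields no --vae-path.
VAE_TABLE = {
    "Anime (Anything 3)": (
        "https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt",
        "Anything-V3.0.vae.pt",
    ),
    "Anime (Anything 4)": (
        "https://huggingface.co/andite/anything-v4.0/resolve/main/anything-v4.0.vae.pt",
        "anything-v4.0.vae.pt",
    ),
    "Anime (Waifu Diffusion 1.4)": (
        "https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/vae/kl-f8-anime.ckpt",
        "kl-f8-anime.vae.pt",
    ),
    "Stable Diffusion": (
        "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt",
        "sd-v1-5.vae.pt",
    ),
}

def vae_args_for(vae: str, root_dir: str) -> str:
    """Return the --vae-path fragment for a VAE choice, or "" for "None"."""
    if vae not in VAE_TABLE:
        return ""
    _url, filename = VAE_TABLE[vae]
    return f"--vae-path {root_dir}/stable-diffusion-webui/models/Stable-diffusion/{filename}"
```

The download step would then be a single `!wget -c {url} -O {path}` driven by the same table, instead of one branch per choice.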

@tranthai2k2
Author

tranthai2k2 commented Jan 25, 2023

I think the problem is with !git clone https://github.com/acheong08/stable-diffusion-webui
It's full-featured but not stable; camenduru's is stable but can't add autotag or LoRA; acheong08's is unstable but has both LoRA and autotag.
Sorry, the code you edited and added is awesome, and it's stable. If it keeps working I'll recommend your notebook to my friends.

@misobarisic
Owner

misobarisic commented Jan 25, 2023

Mine clones the latest commit from the A1111 webui repo 🤔

From what I can see, acheong08 has not pushed any commits of his own recently; he's been merging upstream changes.

@misobarisic
Owner

I have removed --gradio-queue since somebody else also reported an issue right after I pushed the fix.

(screenshot)

@tranthai2k2
Author

https://colab.research.google.com/drive/1iwLtfEeoUTTVFZ08iVkvJ5jBhwKcspty?usp=sharing#scrollTo=fAsaOpxoT-PC
This is the notebook I tweaked to my liking; it's quite stable. But if possible, I still hope your notebook gets updated to stabilize image export so it won't finish without showing an image.
Thank you for your notebook; I hope to see more.

@tranthai2k2
Author

tranthai2k2 commented Jan 25, 2023

(screenshots)
Even after an image finishes generating, it can't be retrieved, although the earlier images were created.

@misobarisic
Owner

I am not experiencing such an issue. Hmm

Can you see the images in the gallery tab?

@tranthai2k2
Author

tranthai2k2 commented Jan 26, 2023

The times I get errors:

  • When generating over and over, it eventually fails and can't create an image, forcing a web reset or a session restart, especially when height or width > 600.
  • When creating too many images: usually 4 or more causes an error. Sometimes 6 images are fine, but more will definitely have problems.
  • Many prompts finish generating but show no picture, like the one I sent.

If you can, please check my colab link to see whether it can be fixed:
https://colab.research.google.com/drive/1iwLtfEeoUTTVFZ08iVkvJ5jBhwKcspty?usp=sharing#scrollTo=fAsaOpxoT-PC
(screenshots)

@misobarisic
Owner

A111#6898

@misobarisic
Owner

misobarisic commented Jan 26, 2023

I could reproduce the issue using your notebook, and it was solved by adding --gradio-queue to the launch args.

@misobarisic
Owner

I've pushed a new commit that uses gradio queue, with the tag-complete extension checkbox and LoRA as well. Test it out.

@tranthai2k2
Author

Error completing request
Arguments: ('task(mwkjn3p8ul6fi5w)', 'masterpiece, best quality, twintails, wide sleeves, hands on hips, hand on hip, breasts, 1girl, dress, solo, clothing cutout, thighhighs, cleavage, chinese clothes, rating:safe, pelvic curtain, mole on breast, large breasts, mole on thigh , black hair, china dress, blush, smile, cleavage cutout, short hair, bare shoulders, blue sky, looking at viewer, covered navel, no panties, focused, upright, thigh-high, opposite, volumetric light, good light,, masterpiece, best quality, very detailed, wallpaper 8k cg unity extremely detailed, illustrations,((beautifully detailed) face) ), best quality, (((super detailed ))) , high quality, high resolution illustrations, high resolution , side light, ((best illustration)), high resolution, illustration, absurd, super detailed, intricate detail, perfect , highly detailed eyes ,yellow eyes, perfect light, (CG:1.2 color is extremely detailed),((bangs covering one eye))', 'nsfw, loli, small breasts, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, worst quality, low quality, (worst quality, low quality, extra digits, loli, loli face:1.3)', [], 32, 16, False, False, 2, 2, 7, 324660525.0, -1.0, 0, 0, 0, False, 648, 584, False, 0.7, 2, 'Latent', 0, 0, 0, 0, False, False, False, False, '', 1, '', 0, '', True, False, False) {}
Traceback (most recent call last):
File "/content/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/content/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/content/stable-diffusion-webui/modules/txt2img.py", line 52, in txt2img
processed = process_images(p)
File "/content/stable-diffusion-webui/modules/processing.py", line 476, in process_images
res = process_images_inner(p)
File "/content/stable-diffusion-webui/modules/processing.py", line 614, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "/content/stable-diffusion-webui/modules/processing.py", line 809, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "/content/stable-diffusion-webui/modules/sd_samplers.py", line 544, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/content/stable-diffusion-webui/modules/sd_samplers.py", line 447, in launch_sampling
return func()
File "/content/stable-diffusion-webui/modules/sd_samplers.py", line 544, in
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/content/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 553, in sample_dpmpp_sde
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/content/stable-diffusion-webui/modules/sd_samplers.py", line 350, in forward
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond={"c_crossattn": [tensor[a:b]], "c_concat": [image_cond_in[a:b]]})
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/content/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "/content/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1329, in forward
out = self.diffusion_model(x, t, context=cc)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 776, in forward
h = module(h, emb, context)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 84, in forward
x = layer(x, context)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 324, in forward
x = block(x, context=context[i])
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 259, in forward
return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 114, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 129, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 262, in _forward
x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/content/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 309, in xformers_attention_forward
out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=get_xformers_flash_attention_op(q, k, v))
File "/usr/local/lib/python3.8/dist-packages/xformers/ops/fmha/__init__.py", line 203, in memory_efficient_attention
return _memory_efficient_attention(
File "/usr/local/lib/python3.8/dist-packages/xformers/ops/fmha/__init__.py", line 299, in _memory_efficient_attention
return _memory_efficient_attention_forward(
File "/usr/local/lib/python3.8/dist-packages/xformers/ops/fmha/__init__.py", line 315, in _memory_efficient_attention_forward
op = _dispatch_fw(inp)
File "/usr/local/lib/python3.8/dist-packages/xformers/ops/fmha/dispatch.py", line 95, in _dispatch_fw
return _run_priority_list(
File "/usr/local/lib/python3.8/dist-packages/xformers/ops/fmha/dispatch.py", line 70, in _run_priority_list
raise NotImplementedError(msg)
(screenshot)
Hey, it's still no better.

@misobarisic
Owner

misobarisic commented Jan 26, 2023

The UI at least starts and doesn't throw an error, because --gradio-queue is present. This is an issue with xformers, then. I noticed the initial version of the notebook you sent used xformers 0.0.15, whereas mine was recently updated to 0.0.16. It might be worth trying the previous version:

Replace
!pip install https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.16/xformers-0.0.16+814314d.d20230118-cp38-cp38-linux_x86_64.whl
with
!pip install https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15+e163309.d20230103-cp38-cp38-linux_x86_64.whl

Or you could just disable xformers since I cannot guarantee it will work.
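Disabling xformers just means leaving the flag out of COMMANDLINE_ARGS. A minimal sketch of assembling the launch args either way; the `build_args` helper is hypothetical (in the notebook the flag would come in via `other_args`), and the flag names match the webui's --gradio-queue/--gradio-auth/--xformers options:

```python
def build_args(other_args: str, vae_args: str, vram: str,
               gradio_user: str, gradio_pass: str,
               use_xformers: bool = True) -> str:
    """Assemble COMMANDLINE_ARGS; drop --xformers when it misbehaves."""
    parts = [other_args, vae_args, vram, "--gradio-queue",
             f"--gradio-auth {gradio_user}:{gradio_pass}"]
    if use_xformers:
        parts.append("--xformers")
    # Filter out empty fragments (e.g. vae_args == "" for the "None" choice).
    return " ".join(p for p in parts if p)
```

The launch line then becomes `!COMMANDLINE_ARGS="{build_args(...)}" REQS_FILE="requirements.txt" python launch.py`, with `use_xformers=False` as the escape hatch.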

@tranthai2k2
Author

tranthai2k2 commented Jan 27, 2023

I don't know why, but after I enable "run in drive" it works very smoothly and the images are very good; before enabling it, the images don't show.
If possible, could you adjust it to save only the images? "Run in drive" saves all the files to Drive, so it's a bit heavy.
It stays broken unless I run in drive.
(screenshot)

@misobarisic
Owner

The issue is still out of my control, though I will add an option for saving just the images to gdrive nonetheless.
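For reference, an images-only Drive option could look something like this: copy just the image files out of the webui outputs folder instead of running everything inside the mounted Drive. A sketch under assumed paths, not the actual commit; `copy_images_only` and `IMAGE_EXTS` are illustrative names:

```python
import shutil
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def copy_images_only(outputs_dir: str, drive_dir: str) -> int:
    """Recursively copy image files from outputs_dir to drive_dir.

    Preserves the subdirectory layout (txt2img/img2img date folders)
    and skips logs, grids metadata, and other non-image files.
    Returns the number of files copied.
    """
    src, dst = Path(outputs_dir), Path(drive_dir)
    copied = 0
    for f in src.rglob("*"):
        if f.is_file() and f.suffix.lower() in IMAGE_EXTS:
            target = dst / f.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 keeps timestamps
            copied += 1
    return copied
```

Called after (or during) a session with something like `copy_images_only(f"{root_dir}/stable-diffusion-webui/outputs", "/content/drive/MyDrive/sd-outputs")`, assuming Drive is mounted at the usual /content/drive.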
