
Error Running on gdrive #8

Closed

tmatzxzone opened this issue Oct 21, 2022 · 8 comments

Comments

@tmatzxzone

Running with these settings; my Google Drive still has 5 GB of free storage (free user).
It also gave me two different links: Gradio and loca.lt.
I'm using your latest code, the commit you just pushed.

[screenshot of the Colab settings]

/content/drive/MyDrive/AI/stable-diffusion-webui
Python 3.7.15 (default, Oct 12 2022, 19:14:55) 
[GCC 7.5.0]
Commit hash: 72e86948e6d73278eacc9a01974064edada58f86
Installing gfpgan
Installing clip
Cloning Stable Diffusion into repositories/stable-diffusion...
Cloning Taming Transformers into repositories/taming-transformers...
Cloning K-diffusion into repositories/k-diffusion...
Cloning CodeFormer into repositories/CodeFormer...
Cloning BLIP into repositories/BLIP...
Installing requirements for CodeFormer
Installing requirements for Web UI
Exiting because of --exit argument
Python 3.7.15 (default, Oct 12 2022, 19:14:55) 
[GCC 7.5.0]
Commit hash: 72e86948e6d73278eacc9a01974064edada58f86
Installing xformers
your url is: https://twelve-falcons-hear-34-87-1-178.loca.lt/
OK
Installing requirements for Web UI
Launching Web UI with arguments: --xformers --share --medvram --gradio-auth ac:NovelAI
WARNING:root:Triton is not available, some optimizations will not be enabled.
Error No module named 'triton'
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Downloading: 100% 939k/939k [00:01<00:00, 693kB/s] 
Downloading: 100% 512k/512k [00:01<00:00, 465kB/s]
Downloading: 100% 389/389 [00:00<00:00, 264kB/s]
Downloading: 100% 905/905 [00:00<00:00, 571kB/s]
Downloading: 100% 4.41k/4.41k [00:00<00:00, 2.65MB/s]
Downloading: 100% 1.59G/1.59G [00:25<00:00, 67.9MB/s]
Loading weights [925997e9] from /content/drive/MyDrive/AI/stable-diffusion-webui/models/Stable-diffusion/novelAI.ckpt
Applying xformers cross attention optimization.
Model loaded.
Loaded a total of 0 textual inversion embeddings.
Embeddings: 
Running on local URL:  http://127.0.0.1:7860/
Running on public URL: https://c8cf68b96613492c.gradio.app/

This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces
  0% 0/20 [00:04<?, ?it/s]
Error completing request
Arguments: ('1girl, bangs, bare shoulders, bell, black gloves', '', 'None', 'None', 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 0, 0, '0.0001', 0.9, 5, 'None', False, '', 0.1, False, 0, False, False, None, '', 1, '', 0, '', True, False, False) {}
Traceback (most recent call last):
  File "/content/drive/MyDrive/AI/stable-diffusion-webui/modules/ui.py", line 217, in f
    res = list(func(*args, **kwargs))
  File "/content/drive/MyDrive/AI/stable-diffusion-webui/webui.py", line 63, in f
    res = func(*args, **kwargs)
  File "/content/drive/MyDrive/AI/stable-diffusion-webui/modules/txt2img.py", line 47, in txt2img
    processed = process_images(p)
  File "/content/drive/MyDrive/AI/stable-diffusion-webui/modules/processing.py", line 411, in process_images
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength)
  File "/content/drive/MyDrive/AI/stable-diffusion-webui/modules/processing.py", line 569, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.create_dummy_mask(x))
  File "/content/drive/MyDrive/AI/stable-diffusion-webui/modules/sd_samplers.py", line 454, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/content/drive/MyDrive/AI/stable-diffusion-webui/modules/sd_samplers.py", line 356, in launch_sampling
    return func()
  File "/content/drive/MyDrive/AI/stable-diffusion-webui/modules/sd_samplers.py", line 459, in <lambda>
    }, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/content/drive/MyDrive/AI/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 80, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/drive/MyDrive/AI/stable-diffusion-webui/modules/sd_samplers.py", line 282, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond={"c_crossattn": [cond_in[a:b]], "c_concat": [image_cond_in[a:b]]})
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/drive/MyDrive/AI/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "/content/drive/MyDrive/AI/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "/content/drive/MyDrive/AI/stable-diffusion-webui/repositories/stable-diffusion/ldm/models/diffusion/ddpm.py", line 987, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1148, in _call_impl
    result = forward_call(*input, **kwargs)
  File "/content/drive/MyDrive/AI/stable-diffusion-webui/repositories/stable-diffusion/ldm/models/diffusion/ddpm.py", line 1410, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/drive/MyDrive/AI/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/openaimodel.py", line 732, in forward
    h = module(h, emb, context)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/drive/MyDrive/AI/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/openaimodel.py", line 85, in forward
    x = layer(x, context)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/drive/MyDrive/AI/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/attention.py", line 258, in forward
    x = block(x, context=context)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/drive/MyDrive/AI/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/attention.py", line 209, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "/content/drive/MyDrive/AI/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/util.py", line 114, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "/content/drive/MyDrive/AI/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/util.py", line 127, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "/content/drive/MyDrive/AI/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/attention.py", line 212, in _forward
    x = self.attn1(self.norm1(x)) + x
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/drive/MyDrive/AI/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 227, in xformers_attention_forward
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)
TypeError: memory_efficient_attention() got an unexpected keyword argument 'attn_bias'
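
For anyone else hitting this: older xformers builds expose memory_efficient_attention without the attn_bias keyword, so one possible workaround is to only pass it when the installed signature accepts it. This is just a sketch, not this repo's fix, and the helper name is made up:

import inspect
import xformers.ops

# True if the installed xformers build accepts the attn_bias keyword
_ACCEPTS_ATTN_BIAS = "attn_bias" in inspect.signature(
    xformers.ops.memory_efficient_attention
).parameters

def memory_efficient_attention_compat(q, k, v, attn_bias=None):
    # Newer builds: forward attn_bias as given
    if _ACCEPTS_ATTN_BIAS:
        return xformers.ops.memory_efficient_attention(q, k, v, attn_bias=attn_bias)
    # Older builds: the keyword does not exist at all
    if attn_bias is not None:
        raise ValueError("this xformers build does not support attn_bias")
    return xformers.ops.memory_efficient_attention(q, k, v)
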
@tmatzxzone
Author

Ah, it's an xformers error on Colab. By the way, this repo managed to run xformers on Colab.

@acheong08
Owner

I need to update the Python version in Colab. Working on it

@acheong08
Owner

It also gave me two different links: Gradio and loca.lt.

You can use either. Gradio has a bug that prevents transmitting images over 5 MB; loca.lt is the alternative.

@acheong08
Owner

acheong08 commented Oct 22, 2022

Getting xformers requires a newer Python version, and the new runtime does not come with torch preinstalled.
Installing torch on Colab is extremely slow, so I might just remove xformers as an option.
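
Before launching, a quick cell like this shows what the Colab runtime actually provides (just a debugging sketch, not part of this repo's notebook):

import sys
import torch

print("python:", sys.version.split()[0])
print("torch:", torch.__version__, "cuda:", torch.version.cuda)
print("cuda available:", torch.cuda.is_available())

try:
    import xformers
    print("xformers:", getattr(xformers, "__version__", "unknown"))
except ImportError:
    print("xformers is not installed")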

@acheong08
Owner

Ah, it's an xformers error on Colab. By the way, this repo managed to run xformers on Colab.

I copied some of their code and xformers works now...

@acheong08
Owner

Never mind.

File "/content/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 227, in xformers_attention_forward
    out = xformers.ops.memory_efficient_attention(q, k, v,)
  File "/usr/local/lib/python3.7/dist-packages/xformers/ops.py", line 58, in memory_efficient_attention
    return torch.ops.xformers.efficient_attention(query, key, value, False)[0]
  File "/usr/local/lib/python3.7/dist-packages/torch/_ops.py", line 143, in __call__
    return self._op(*args, **kwargs or {})
RuntimeError: Expected query.dim() == 3 to be true, but got false.  (Could this error message be improved?  If so, please report an enhancement request to PyTorch.)
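
For reference, the old kernel in that traceback only accepts 3-D (batch*heads, seq, head_dim) tensors, while the web UI hands it a newer 4-D layout. A shim that folds the heads into the batch dimension might look like the sketch below; the 4-D input layout is an assumption based on the web UI's attention hook, and the old kernel has other constraints this ignores:

import xformers.ops
from einops import rearrange

def memory_efficient_attention_3d(q, k, v):
    # assumes q, k, v arrive as (batch, seq, heads, head_dim)
    b, n, h, d = q.shape
    # fold heads into the batch dimension for the old 3-D kernel
    q3, k3, v3 = (rearrange(t, "b n h d -> (b h) n d") for t in (q, k, v))
    out = xformers.ops.memory_efficient_attention(q3, k3, v3)
    # restore the layout the caller expects
    return rearrange(out, "(b h) n d -> b n h d", b=b, h=h)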

@acheong08
Owner

Ah, it's an xformers error on Colab. By the way, this repo managed to run xformers on Colab.

Their repo no longer works (for me) either. It must be an upstream issue with AUTOMATIC1111's repo.

@acheong08
Owner

I removed xformers as an option until it is fixed
AUTOMATIC1111/stable-diffusion-webui#2731
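
In the meantime, launching without the flag avoids the broken code path. Everything below comes from the log earlier in this issue; only --xformers is dropped (a sketch of a notebook cell, not the actual one):

import subprocess

# same launch arguments as in the log above, minus --xformers
args = [
    "python", "launch.py",
    "--share", "--medvram",
    "--gradio-auth", "ac:NovelAI",
]
subprocess.run(args, check=True, cwd="/content/drive/MyDrive/AI/stable-diffusion-webui")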
