
Version 2 - Development #26

Closed
cmdr2 opened this issue Aug 29, 2022 · 24 comments


cmdr2 commented Aug 29, 2022

A development version of v2 is available for Windows 10/11 and Linux. Experimental support for Mac will be added soon.

The instructions for installing are at: https://github.com/cmdr2/stable-diffusion-ui/blob/v2/README.md#installation

It is not a binary, and the source code used for building this is open at https://github.com/cmdr2/stable-diffusion-ui/tree/v2

What is this?

This version is a 1-click installer. You don't need WSL or Docker or Python or anything beyond a working NVIDIA GPU with an updated driver. You don't need to use the command-line at all.

It'll download the necessary files from the original Stable Diffusion git repository, and set it up. It'll then start the browser-based interface like before.

The interface includes an NSFW option, for users whose prompts incorrectly trigger the NSFW filter.

Is it stable?

It has run successfully for a number of users, but I would love to know if it works on more computers. Please let me know in this thread whether it works or fails; that will be really helpful. Thanks :)

PS: There's a new Discord server for support and development discussions: https://discord.com/invite/u9yhsFmEkB . Please join in for faster discussion and feedback on v2.


cmdr2 commented Aug 29, 2022

@onduboy @UrielCh @ChrisAcrobat @zaqxs123456 @Sparkenstein I've uploaded a Windows build for the new version (instructions above). It has an option for NSFW, to work around incorrect NSFW flagging (#23).

It is very much a beta version, since it is still under development. So if you get a chance to try it out, please let me know if it crashes or fails to install. Thanks! :)


UrielCh commented Aug 30, 2022

[image]


UrielCh commented Aug 30, 2022

Tried after a reboot with:

$Env:PYTORCH_CUDA_ALLOC_CONF = "max_split_size_mb:4096"
.\stable-diffusion-ui.cmd > logs2.txt 2>&1

no effect


UrielCh commented Aug 30, 2022

 nvidia-smi
Tue Aug 30 11:30:57 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 512.77       Driver Version: 512.77       CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ... WDDM  | 00000000:05:00.0  On |                  N/A |
|  0%   57C    P5    17W / 125W |   1109MiB /  6144MiB |     12%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+


UrielCh commented Aug 30, 2022

Full stack trace:

Traceback (most recent call last):
  File "C:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\..\ui\server.py", line 83, in image
    res: Response = runtime.mk_img(r)
  File "C:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\..\ui\sd_internal\runtime.py", line 121, in mk_img
    x_samples = _txt2img(opt_W, opt_H, opt_n_samples, opt_ddim_steps, opt_scale, None, opt_C, opt_f, opt_ddim_eta, c, uc)
  File "C:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\..\ui\sd_internal\runtime.py", line 147, in _txt2img
    samples_ddim, _ = sampler.sample(S=opt_ddim_steps,
  File "C:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "c:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\ldm\models\diffusion\plms.py", line 97, in sample
    samples, intermediates = self.plms_sampling(conditioning, size,
  File "C:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "c:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\ldm\models\diffusion\plms.py", line 152, in plms_sampling
    outs = self.p_sample_plms(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
  File "C:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "c:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\ldm\models\diffusion\plms.py", line 218, in p_sample_plms
    e_t = get_model_output(x, t)
  File "c:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\ldm\models\diffusion\plms.py", line 185, in get_model_output
    e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
  File "c:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\ldm\models\diffusion\ddpm.py", line 987, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "c:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\ldm\models\diffusion\ddpm.py", line 1410, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "C:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "c:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\ldm\modules\diffusionmodules\openaimodel.py", line 737, in forward
    h = module(h, emb, context)
  File "C:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "c:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\ldm\modules\diffusionmodules\openaimodel.py", line 83, in forward
    x = layer(x, emb)
  File "C:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "c:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\ldm\modules\diffusionmodules\openaimodel.py", line 250, in forward
    return checkpoint(
  File "c:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\ldm\modules\diffusionmodules\util.py", line 114, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "c:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\ldm\modules\diffusionmodules\util.py", line 127, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "c:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\ldm\modules\diffusionmodules\openaimodel.py", line 274, in _forward
    h = self.out_layers(h)
  File "C:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\torch\nn\modules\container.py", line 141, in forward
    input = module(input)
  File "C:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\torch\nn\modules\conv.py", line 447, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "C:\ml\cmdr2\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\torch\nn\modules\conv.py", line 443, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 6.00 GiB total capacity; 5.16 GiB already allocated; 0 bytes free; 5.30 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Related issue: CUDA out of memory
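For anyone triaging these reports, the figures in the allocator message can be pulled out programmatically, which makes it easy to see whether reserved memory greatly exceeds allocated memory (fragmentation, where `max_split_size_mb` can help) or the card is simply full. This is a small illustrative sketch; the helper name and field names are made up, not part of the project:

```python
import re

def parse_cuda_oom(message):
    """Extract the memory figures (normalized to MiB) from a PyTorch CUDA OOM message."""
    fields = {
        "tried": r"Tried to allocate ([\d.]+) (GiB|MiB)",
        "total": r"([\d.]+) (GiB|MiB) total capacity",
        "allocated": r"([\d.]+) (GiB|MiB) already allocated",
        "reserved": r"([\d.]+) (GiB|MiB) reserved",
    }
    result = {}
    for name, pattern in fields.items():
        m = re.search(pattern, message)
        if m:
            value = float(m.group(1))
            result[name] = value * 1024 if m.group(2) == "GiB" else value
    return result

msg = ("CUDA out of memory. Tried to allocate 30.00 MiB "
       "(GPU 0; 6.00 GiB total capacity; 5.16 GiB already allocated; "
       "0 bytes free; 5.30 GiB reserved in total by PyTorch)")
stats = parse_cuda_oom(msg)
# In this case reserved minus allocated is only ~143 MiB, so fragmentation is
# not the main problem: the model simply needs more than the 6 GiB available.
```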


cmdr2 commented Aug 30, 2022

@UrielCh Thanks for trying this out. Yeah it definitely looks like the error you linked to.

Like I mentioned in #23, I'm looking at basujindal's fork of Stable Diffusion, with the intention of using that fork in v2 instead of the stock Stable Diffusion (used currently). That might help reduce the VRAM usage.

@BleedingDev

Hi,
I tried to run this version, but even after some time it does not connect to the backend. I get this error:

Traceback (most recent call last):
  File "C:\Users\pegak\Downloads\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion\..\ui\server.py", line 54, in ping
    from sd_internal import runtime
  File "C:\Users\pegak\Downloads\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion\..\ui\sd_internal\runtime.py", line 2, in <module>
    import cv2
ModuleNotFoundError: No module named 'cv2'

But it correctly opens the webpage. I'm on the latest Windows 11, with PowerShell 7.3.0.


oneandonlyjason commented Aug 31, 2022

Hi,

I get the following error when I try to install it:

  File "scripts\txt2img.py", line 5, in <module>
    from omegaconf import OmegaConf
  File "C:\Users\Jason\Desktop\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\omegaconf\__init__.py", line 1, in <module>
    from .base import Container, DictKeyType, Node, SCMode
  File "C:\Users\Jason\Desktop\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\omegaconf\base.py", line 9, in <module>
    from antlr4 import ParserRuleContext
ModuleNotFoundError: No module named 'antlr4'

The website opens normally.

Edit:
After deleting the folder and running the installer again, everything works now. So maybe it needs a few more checks that everything was installed correctly.

Sadly, I don't seem to be able to run this on my GPU. I get the following error:

Error: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 8.00 GiB total capacity; 6.14 GiB already allocated; 0 bytes free; 7.23 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

@BleedingDev

How do you delete the folder and rerun the installer? I got the ZIP file and then unpacked its content. :)


cmdr2 commented Aug 31, 2022

@pegak You can delete the "stable-diffusion" folder (it'll be next to the "installer" folder and stable-diffusion-ui.cmd file), and re-run the stable-diffusion-ui.cmd file.

@pegak @oneandonlyjason Sorry about this issue. It seems the installer failed to download some dependencies, and I'm not sure why, so I'll modify the installer to re-check the dependencies at the end and fix any that are missing.

Thanks for testing this out, and letting me know! :)
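A post-install check like the one described could be sketched as follows. This is not the project's actual installer code; the module list and the pip package names are illustrative assumptions based on the errors reported above:

```python
import importlib.util
import subprocess
import sys

# Hypothetical mapping of import names to pip package names; the real
# installer's dependency list may differ.
REQUIRED = {
    "cv2": "opencv-python",
    "omegaconf": "omegaconf",
    "antlr4": "antlr4-python3-runtime",
}

def missing_packages(required=REQUIRED):
    """Return the pip names of packages whose import cannot be found."""
    return [pip_name for mod, pip_name in required.items()
            if importlib.util.find_spec(mod) is None]

def repair_install(required=REQUIRED):
    """Install whatever is missing, using the same interpreter's pip."""
    for pip_name in missing_packages(required):
        subprocess.check_call([sys.executable, "-m", "pip", "install", pip_name])
```

Running `repair_install()` at the end of setup would catch the `No module named 'cv2'` and `No module named 'antlr4'` failures without requiring a full reinstall.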

@oneandonlyjason

I don't know how much this is in the scope of this project (maybe once the fork is used instead of the original code?), but could you implement other sampling methods, selectable in the panel? https://github.com/crowsonkb/k-diffusion

Other projects like NightCafe Studio use KLMS instead of PLMS as the default sampler, and when I compare the outputs from both samplers I seem to get much better results with KLMS most of the time.

KLMS:

[image]

PLMS:

[image]

Both images use the same seed and the same prompt.


cmdr2 commented Aug 31, 2022

@oneandonlyjason Thanks, that looks interesting! Do you have a link handy for running it locally? The repo only shows an example for training, but I haven't looked closer. Thanks!

@oneandonlyjason

I haven't set it up locally yet, and haven't really worked out how, because I don't know enough about Python. But I found this Google Colab notebook; maybe the code in it helps? https://colab.research.google.com/github/pharmapsychotic/ai-notebooks/blob/main/pharmapsychotic_Stable_Diffusion.ipynb#scrollTo=bcHsbr3hblrk


cmdr2 commented Sep 1, 2022

@pegak @oneandonlyjason - I've updated the v2 version with a new build. This checks for missing dependencies at the end and tries to fix them. Hopefully that should resolve the problem. You can download it from https://drive.google.com/file/d/1MY5gzsQHV_KREbYs3gw33QL4gGIlQRqj/view?usp=sharing and then double-click Start Stable Diffusion UI.cmd to install and run.

Please let me know if there are any problems after running it. Thanks!

PS: There's a new Discord server for support and development discussions: https://discord.com/invite/u9yhsFmEkB . Please join in for faster discussion and feedback on v2.


mkuitune commented Sep 1, 2022

Hi! You asked for error reports; here is the one I get:

D:\App\StableDiffusion\stable-diffusion-ui>installer\Scripts\activate.bat
conda 4.14.0
git version 2.34.1.windows.1

(installer) D:\App\StableDiffusion\stable-diffusion-ui\installer\etc\conda\activate.d>cd D:\App\StableDiffusion\stable-diffusion-ui\installer\..\scripts

(installer) D:\App\StableDiffusion\stable-diffusion-ui\scripts>on_env_start.bat

"Stable Diffusion UI"

"Ready to rock!"

started in D:\App\StableDiffusion\stable-diffusion-ui\stable-diffusion
←[32mINFO←[0m: Started server process [←[36m17244←[0m]
←[32mINFO←[0m: Waiting for application startup.
←[32mINFO←[0m: Application startup complete.
←[32mINFO←[0m: Uvicorn running on ←[1mhttp://127.0.0.1:9000←[0m (Press CTRL+C to quit)
←[32mINFO←[0m: 127.0.0.1:53187 - "←[1mGET / HTTP/1.1←[0m" ←[32m200 OK←[0m
←[32mINFO←[0m: 127.0.0.1:53187 - "←[1mGET /modifiers.json HTTP/1.1←[0m" ←[32m200 OK←[0m
←[32mINFO←[0m: 127.0.0.1:53187 - "←[1mGET /output_dir HTTP/1.1←[0m" ←[32m200 OK←[0m
Loading model from sd-v1-4.ckpt
Global Step: 470000
Traceback (most recent call last):
  File "D:\App\StableDiffusion\stable-diffusion-ui\stable-diffusion\..\ui\server.py", line 64, in ping
    runtime.load_model(ckpt_to_use="sd-v1-4.ckpt")
  File "D:\App\StableDiffusion\stable-diffusion-ui\stable-diffusion\..\ui\sd_internal\runtime.py", line 70, in load_model
    model = instantiate_from_config(config.modelUNet)
  File "D:\App\StableDiffusion\stable-diffusion-ui\stable-diffusion\ldm\util.py", line 85, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "D:\App\StableDiffusion\stable-diffusion-ui\stable-diffusion\ldm\util.py", line 93, in get_obj_from_str
    return getattr(importlib.import_module(module, package=None), cls)
  File "D:\App\StableDiffusion\stable-diffusion-ui\stable-diffusion\env\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 783, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "D:\App\StableDiffusion\stable-diffusion-ui\stable-diffusion\optimizedSD\ddpm.py", line 15, in <module>
    from ldm.models.autoencoder import VQModelInterface
  File "D:\App\StableDiffusion\stable-diffusion-ui\stable-diffusion\ldm\models\autoencoder.py", line 6, in <module>
    from taming.modules.vqvae.quantize import VectorQuantizer2 as VectorQuantizer
ModuleNotFoundError: No module named 'taming'


cmdr2 commented Sep 2, 2022

Hi @mkuitune sorry about that, and thanks for reporting.

Can you please open the file D:\App\StableDiffusion\stable-diffusion-ui\scripts\on_env_start.bat in Notepad, and change line 5 to @set new_install=T (instead of F)?

And then save and double-click the Start Stable Diffusion UI.cmd file inside D:\App\StableDiffusion\stable-diffusion-ui.

Please paste the output here. It'll try to fix your installation, and show any errors while fixing. Thanks!


rc1 commented Sep 2, 2022

This is great. Once I have used the installer, what is the best way to get the latest changes?
Also, how can I change the host and port of the server?


cmdr2 commented Sep 2, 2022

Hi @rc1 - a new version is available at https://drive.google.com/file/d/1MY5gzsQHV_KREbYs3gw33QL4gGIlQRqj/view?usp=sharing

This version will receive automatic updates in the future. It'll auto-update each time you start it. The updates are very small, so it won't add more than a second or two to the start. Please let me know if there are any problems. Thanks

PS: The port and host aren't configurable right now. Are they conflicting with something you're running?
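For reference, making the host and port configurable could look roughly like this. The flag names and the `server:app` target are assumptions for illustration, not the project's actual code:

```python
import argparse

def build_arg_parser():
    """Command-line flags for the UI server (names are illustrative)."""
    parser = argparse.ArgumentParser(description="Stable Diffusion UI server")
    parser.add_argument("--host", default="127.0.0.1",
                        help="bind address; use 0.0.0.0 to allow access from other machines")
    parser.add_argument("--port", type=int, default=9000,
                        help="TCP port for the web interface")
    return parser

if __name__ == "__main__":
    args = build_arg_parser().parse_args()
    # A FastAPI app would then presumably be started roughly like:
    # import uvicorn
    # uvicorn.run("server:app", host=args.host, port=args.port)
    print(f"would listen on {args.host}:{args.port}")
```

Binding to 0.0.0.0 is exactly what rc1 asks about below: it makes the UI reachable from other computers on the LAN.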


rc1 commented Sep 2, 2022

Thanks @cmdr2. So I should delete and reinstall. That's no problem.

I'd like to change the host to 0.0.0.0 to make the UI accessible from my other computer. I wondered if there was somewhere I could quickly edit the script to change it, but I'm new to Windows scripting, so nothing jumped out as obvious (I didn't spend too long).


cmdr2 commented Sep 2, 2022

Hi @rc1 the new update already sets the host to 0.0.0.0, so I think you should be set. Yes, please download the version I linked previously, and it should auto-download any updates in the future.

Please let me know if there are any problems, thanks!

PS: There's a new Discord server for support and development discussions: https://discord.com/invite/u9yhsFmEkB . Please join in for faster discussion and feedback on v2.


cmdr2 commented Sep 3, 2022

Update: a Linux installer for v2 is now available in beta. Instructions for installing it are at: https://github.com/cmdr2/stable-diffusion-ui/blob/v2/README.md#installation


mflux commented Sep 3, 2022

I'm getting

Error: CUDA out of memory. Tried to allocate 7.03 GiB (GPU 0; 8.00 GiB total capacity; 5.22 GiB already allocated; 0 bytes free; 5.82 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

However, it ran fine on Docker. Is there a way I can modify the memory allocation?


cmdr2 commented Sep 5, 2022

Hi @mflux - Can you please check if you're using an initial image that's larger than your output image? For example, use a 512x512 initial image if that's your desired output size.

Larger initial images can cause out of memory errors. I'm working on catching this in the UI.

Another suggestion is to disable "Turbo Mode" in the "Advanced Settings", since it'll reduce VRAM usage by 1GB.

Please let me know if this works out (or doesn't). Thanks!
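The resize advice could also be enforced in code before the image is submitted, since the latent tensors grow with width times height. A hedged sketch using Pillow; the function name is hypothetical, and the real UI may handle this differently:

```python
from PIL import Image

def fit_init_image(image, target_w=512, target_h=512):
    """Downscale an init image so it is no larger than the output size.

    Oversized init images enlarge the intermediate tensors (they scale
    with width * height), a common cause of CUDA out-of-memory errors.
    """
    if image.width <= target_w and image.height <= target_h:
        return image  # already small enough; leave untouched
    scale = min(target_w / image.width, target_h / image.height)
    new_size = (int(image.width * scale), int(image.height * scale))
    return image.resize(new_size, Image.LANCZOS)
```

For example, a 1024x768 init image would come out as 512x384, preserving the aspect ratio while staying within the 512x512 budget.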


cmdr2 commented Sep 5, 2022

Update: the v2 branch has been merged into main, so v1 (the Docker approach) is officially dead, and v2 is now the main version of the software.

Thanks everyone for testing this (here and on Discord), I really appreciate it! :) Experimental support for Mac is still pending; I'm continuing to look into it on the side. Closing this issue for now.
