
[Bug]: Automatic1111 works extremely slow if Silly Tavern is also running at the same time #15795

Open
4 of 6 tasks
guispfilho opened this issue May 15, 2024 · 0 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments


guispfilho commented May 15, 2024

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

I just installed Automatic1111, and it's running smoothly, and it stays that way if I run the Oobabooga text UI at the same time.
However, if I run Silly Tavern at the same time, the time to generate a single image goes from 10 seconds to 10-15 minutes.
I had to alter the COMMANDLINE_ARGS argument in the 'webui-user.bat' file because Auto1111's API needs to be enabled for Silly Tavern to access it, and because Oobabooga also uses port 7860, I had to change Forge's port to a random one; I picked 7862 for no particular reason: `set COMMANDLINE_ARGS= --api --port 7862`

Edit: It seems it also gets extremely slow when Oobabooga is running, even though Silly Tavern is not running...
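
For reference, here is a minimal way to confirm the API is actually reachable on the custom port before connecting Silly Tavern. This is only a sketch, assuming the standard `/sdapi/v1/` routes that `--api` exposes and the 7862 port set above (adjust if yours differs):

```python
# Minimal reachability check for the webui API (sketch; assumes the standard
# /sdapi/v1/ endpoints enabled by --api and the custom port 7862 set above).
import requests  # pip install requests

BASE_URL = "http://localhost:7862"  # change to 7860 if you kept the default port

resp = requests.get(f"{BASE_URL}/sdapi/v1/progress", timeout=10)
resp.raise_for_status()
data = resp.json()
print("API reachable, progress:", data.get("progress"), "ETA:", data.get("eta_relative"))
```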

Steps to reproduce the problem

1- Run Auto1111
2- Generate an image directly through Auto1111; it takes seconds
3- Run Silly Tavern
4- DON'T connect Silly Tavern and Auto1111 via http://localhost:7860/
5- Generate a new image directly through Auto1111, without altering any settings; it still takes seconds
6- CONNECT Silly Tavern and Auto1111 via http://localhost:7860/
7- Generate a new image directly through Auto1111, without altering any settings; it now takes 10-15 minutes (see the timing sketch after this list)
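
To take the Gradio UI out of the equation, a direct API call can be timed at step 5 and again at step 7. This is only a sketch: it assumes the standard `/sdapi/v1/txt2img` endpoint, the 7862 port from webui-user.bat, and an arbitrary prompt, resolution, and the 15-step count visible in the log below.

```python
# Time a single txt2img generation directly against the webui API, so the
# slowdown can be measured independently of the browser UI. Sketch only:
# assumes --api is enabled and the port matches webui-user.bat (7862 here).
import time
import requests  # pip install requests

BASE_URL = "http://localhost:7862"
payload = {
    "prompt": "a photo of a cat",  # arbitrary test prompt
    "steps": 15,                   # matches the step count in the console log
    "width": 1024,                 # assumed SDXL-sized output
    "height": 1024,
}

start = time.time()
resp = requests.post(f"{BASE_URL}/sdapi/v1/txt2img", json=payload, timeout=30 * 60)
resp.raise_for_status()
elapsed = time.time() - start

n_images = len(resp.json().get("images", []))
print(f"Generated {n_images} image(s) in {elapsed:.1f} s")
```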

What should have happened?

I assume image generation should have taken about the same time, maybe a few seconds longer, but not 10-15 minutes for a single image. It seems that something is wrong with the local connection between ST and Forge.

What browsers do you use to access the UI ?

No response

Sysinfo

sysinfo-2024-05-15-04-20.json

Console logs

venv "D:\app\stable-diffusion-webui-forge\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: f0.0.17v1.8.0rc-latest-276-g29be1da7
Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
Launching Web UI with arguments: --skip-torch-cuda-test --no-half-vae --listen --port=7860 --api --cors-allow-origins null --cuda-stream --cuda-malloc --pin-shared-memory
Using cudaMallocAsync backend.
Total VRAM 12282 MB, total RAM 31898 MB
Set vram state to: NORMAL_VRAM
Always pin shared GPU memory
Device: cuda:0 NVIDIA GeForce RTX 4070 : cudaMallocAsync
VAE dtype: torch.bfloat16
CUDA Stream Activated:  True
Using pytorch cross attention
ControlNet preprocessor location: D:\app\stable-diffusion-webui-forge\models\ControlNetPreprocessor
[-] ADetailer initialized. version: 24.4.2, num models: 12
Loading weights [529c72f6c3] from D:\app\stable-diffusion-webui-forge\models\Stable-diffusion\mfcgPDXL_v10.safetensors
2024-05-15 02:24:41,233 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL:  http://0.0.0.0:7860
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Loading VAE weights specified in settings: D:\app\stable-diffusion-webui-forge\models\VAE\sdxl_vae.safetensors
To load target model SDXLClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  11081.996185302734
[Memory Management] Model Memory (MB) =  2144.3546981811523
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  7913.641487121582
Moving model(s) has taken 0.76 seconds
Model loaded in 4.0s (load weights from disk: 0.6s, forge load real models: 2.3s, calculate empty prompt: 0.9s).

To create a public link, set `share=True` in `launch()`.
IIB Database file has been successfully backed up to the backup folder.
Startup time: 13.7s (prepare environment: 1.3s, import torch: 2.7s, import gradio: 0.5s, setup paths: 0.6s, other imports: 0.4s, load scripts: 3.1s, create ui: 0.4s, gradio launch: 4.3s, add APIs: 0.3s).
To load target model SDXL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  9236.397184371948
[Memory Management] Model Memory (MB) =  4897.086494445801
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  3315.3106899261475
Moving model(s) has taken 2.47 seconds
 87%|███████████████████████████████████████████████████████████████████████           | 13/15 [03:51<00:27, 13.67s/it]
Total progress:  87%|█████████████████████████████████████████████████████████▏        | 13/15 [02:41<00:26, 13.35s/it]

Additional information

The same problem happens with both Automatic1111 and Forge UI.

@guispfilho guispfilho added the bug-report Report of a bug, yet to be confirmed label May 15, 2024
@guispfilho guispfilho changed the title [Bug]: Automatic1111 woks extremely slow if Silly Tavern is also running at the same time [Bug]: Automatic1111 works extremely slow if Silly Tavern is also running at the same time May 15, 2024