Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.17v1.8.0rc-latest-276-g29be1da7
Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
Launching Web UI with arguments:
Total VRAM 12288 MB, total RAM 32706 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 : native
Hint: your device supports --pin-shared-memory for potential speed improvements.
Hint: your device supports --cuda-malloc for potential speed improvements.
Hint: your device supports --cuda-stream for potential speed improvements.
VAE dtype: torch.bfloat16
CUDA Stream Activated: False
Using pytorch cross attention
ControlNet preprocessor location: D:\Program Files\stable diffusion\webui_forge\webui\models\ControlNetPreprocessor
找不到llama_cpp模块 (Can't find the llama_cpp module)
Loading weights [9dfab17adf] from D:\Program Files\stable diffusion\webui_forge\webui\models\Stable-diffusion\LEOSAM AIArt 兔狲插画 SDXL大模型_v2.safetensors
2024-05-16 18:14:20,302 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 11.3s (prepare environment: 2.7s, import torch: 3.2s, import gradio: 0.7s, setup paths: 0.7s, other imports: 0.6s, load scripts: 1.8s, create ui: 0.6s, gradio launch: 0.8s).
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
To load target model SDXLClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 11246.99609375
[Memory Management] Model Memory (MB) = 2144.3546981811523
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 8078.641395568848
Moving model(s) has taken 0.55 seconds
Model loaded in 6.9s (load weights from disk: 0.8s, forge instantiate config: 1.9s, forge load real models: 3.3s, forge finalize: 0.2s, calculate empty prompt: 0.8s).
You need to install a Python package called "llama-cpp-python" for sd-webui-oldsix-prompt. Please read the readme in that repo for details: https://github.com/thisjam/sd-webui-oldsix-prompt
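A minimal sketch of the install step, assuming `pip` is available for the Python environment that run.bat launches (WebUI Forge installs often ship an embedded Python, so the plain `python` command here is an assumption and may need to be replaced with the full interpreter path):

```shell
# Sketch: install llama-cpp-python, the package that provides the
# llama_cpp module the extension is looking for.
# NOTE: run this with the same Python interpreter that run.bat uses;
# "python" is a placeholder for that interpreter.
python -m pip install llama-cpp-python
```

After the install completes, restarting the WebUI should make the warning go away.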
Checklist
What happened?
When starting, the console prints: 找不到llama_cpp模块 (Can't find the llama_cpp module)
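To confirm the report locally, you can check whether `llama_cpp` is importable in the Python environment the WebUI uses. This is a hedged diagnostic sketch, not part of the extension's code; `python` stands in for whatever interpreter run.bat launches:

```shell
# Diagnostic sketch: does the current Python see the llama_cpp module?
if python -c "import llama_cpp" 2>/dev/null; then
    echo "llama_cpp is importable"
else
    # Mirrors the warning seen in the console log.
    echo "找不到llama_cpp模块 (Can't find the llama_cpp module)"
fi
```

If the second branch fires, the `llama_cpp` module is missing from that environment, which matches the warning in the log.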
Steps to reproduce the problem
run.bat
What should have happened?
No warning should appear, and the BLIP2 plugin can be installed without any problem.
What browsers do you use to access the UI?
Microsoft Edge
Sysinfo
sysinfo-2024-05-16-10-15.json
Console logs
Additional information
No response