
CPU only fails because of missing NVIDIA driver #78

Closed
VRciF opened this issue Sep 18, 2022 · 3 comments

Labels
bug Something isn't working

Comments


VRciF commented Sep 18, 2022

Describe the bug

Right after txt2img finishes, I receive the following error:

Total progress: 100%|██████████| 20/20 [07:53<00:00, 23.66s/it]
webui-docker-automatic1111-cpu-1  | Traceback (most recent call last):
webui-docker-automatic1111-cpu-1  |   File "/opt/conda/lib/python3.8/site-packages/gradio/routes.py", line 273, in run_predict
webui-docker-automatic1111-cpu-1  |     output = await app.blocks.process_api(
webui-docker-automatic1111-cpu-1  |   File "/opt/conda/lib/python3.8/site-packages/gradio/blocks.py", line 753, in process_api
webui-docker-automatic1111-cpu-1  |     result = await self.call_function(fn_index, inputs, iterator)
webui-docker-automatic1111-cpu-1  |   File "/opt/conda/lib/python3.8/site-packages/gradio/blocks.py", line 630, in call_function
webui-docker-automatic1111-cpu-1  |     prediction = await anyio.to_thread.run_sync(
webui-docker-automatic1111-cpu-1  |   File "/opt/conda/lib/python3.8/site-packages/anyio/to_thread.py", line 31, in run_sync
webui-docker-automatic1111-cpu-1  |     return await get_asynclib().run_sync_in_worker_thread(
webui-docker-automatic1111-cpu-1  |   File "/opt/conda/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
webui-docker-automatic1111-cpu-1  |     return await future
webui-docker-automatic1111-cpu-1  |   File "/opt/conda/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 867, in run
webui-docker-automatic1111-cpu-1  |     result = context.run(func, *args)
webui-docker-automatic1111-cpu-1  |   File "/stable-diffusion-webui/modules/ui.py", line 139, in f
webui-docker-automatic1111-cpu-1  |     mem_stats = {k: -(v//-(1024*1024)) for k,v in shared.mem_mon.stop().items()}
webui-docker-automatic1111-cpu-1  |   File "/stable-diffusion-webui/modules/memmon.py", line 77, in stop
webui-docker-automatic1111-cpu-1  |     return self.read()
webui-docker-automatic1111-cpu-1  |   File "/stable-diffusion-webui/modules/memmon.py", line 65, in read
webui-docker-automatic1111-cpu-1  |     free, total = torch.cuda.mem_get_info()
webui-docker-automatic1111-cpu-1  |   File "/opt/conda/lib/python3.8/site-packages/torch/cuda/memory.py", line 583, in mem_get_info
webui-docker-automatic1111-cpu-1  |     device = torch.cuda.current_device()
webui-docker-automatic1111-cpu-1  |   File "/opt/conda/lib/python3.8/site-packages/torch/cuda/__init__.py", line 481, in current_device
webui-docker-automatic1111-cpu-1  |     _lazy_init()
webui-docker-automatic1111-cpu-1  |   File "/opt/conda/lib/python3.8/site-packages/torch/cuda/__init__.py", line 216, in _lazy_init
webui-docker-automatic1111-cpu-1  |     torch._C._cuda_init()
webui-docker-automatic1111-cpu-1  | RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
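
From the traceback it looks like the WebUI's memory monitor (modules/memmon.py) calls torch.cuda.mem_get_info() unconditionally, which lazily initialises CUDA and raises once no NVIDIA driver is found. A minimal sketch of the kind of guard that would avoid this on CPU-only setups (the function name is illustrative, not the webui's actual code):

import torch

def vram_stats():
    # torch.cuda.mem_get_info() lazily initialises CUDA and raises a
    # RuntimeError on machines without an NVIDIA driver, so check first.
    if not torch.cuda.is_available():
        return None  # CPU-only: no VRAM to report
    free, total = torch.cuda.mem_get_info()  # bytes
    return {"free": free, "total": total}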

Which UI

auto-cpu

Steps to Reproduce

  1. From a fresh clone of master (a few hours old), run docker compose --profile download up --build
  2. Run docker compose --profile auto-cpu up --build
  3. After this finishes, open the web UI at http://localhost:7860/ and, without changing any settings, generate with the prompt cat cartoon handrawing
  4. A few preview images are generated during processing and shown in the WebUI, but at the very end (100%) I get the above error.

Hardware / Software:

  • OS: Ubuntu 22.04.1 LTS
  • RAM: 16 GB
  • GPU: NVIDIA Corporation GM107 [GeForce GTX 750 Ti]
  • VRAM: 2GB (thus I'm using auto-cpu instead of gpu)
  • Docker version 20.10.18, Docker Compose version 2.4.1
  • Release version: master (6a66ff6)

Additional context

The WebUI only shows the message Error, and the browser console logs Uncaught (in promise) API Error.
The final image is successfully saved to the output directory.

@AbdBarho Many thanks for your efforts on this docker setup! You're doing an awesome job

AbdBarho (Owner)

@VRciF Thanks for the detailed bug report! You have no idea how many people completely ignore the template.

I have reached out to the maintainer behind the VRAM monitor feature; they seem like a competent developer, so this regression should hopefully be dealt with soon.

In the meantime, you can try older commits from master.

AbdBarho added a commit that referenced this issue Sep 18, 2022
AbdBarho (Owner)

@VRciF this should be fixed now, can you try again from latest master?

AbdBarho added the awaiting-response (Waiting for the issuer to respond) label Sep 18, 2022
VRciF (Author) commented Sep 18, 2022

I did git pull followed by docker compose --profile auto-cpu up --build, resulting in a newly recreated container:

 ⠿ Container webui-docker-automatic1111-cpu-1  Recreated                                                                                                                                                      0.1s
Attaching to webui-docker-automatic1111-cpu-1

and it is working well. I get no errors and the web UI works fine. Many, many thanks for your fast response!

VRciF closed this as completed Sep 18, 2022
AbdBarho removed the awaiting-response label Sep 18, 2022
cloudaxes pushed a commit to cloudaxes/stable-diffusion-webui-docker that referenced this issue Sep 6, 2023