Errors on launching server #503

Closed

AAbdulvagab opened this issue Mar 17, 2024 · 5 comments

Comments

@AAbdulvagab

I get errors when launching ComfyUI. Could you please tell me how to solve this?

2024-03-17 11:18:42,192 INFO Total VRAM 3896 MB, total RAM 32051 MB
2024-03-17 11:18:42,192 INFO Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --normalvram
2024-03-17 11:18:42,192 INFO Set vram state to: LOW_VRAM
2024-03-17 11:18:42,192 INFO Device: cuda:0 NVIDIA GeForce GTX 1650 SUPER : cudaMallocAsync
2024-03-17 11:18:42,192 INFO VAE dtype: torch.float32
2024-03-17 11:18:42,212 INFO Using pytorch cross attention
2024-03-17 11:18:42,852 INFO
2024-03-17 11:18:42,853 ERROR Traceback (most recent call last):
  File "/home/av/.local/share/krita/ai_diffusion/server/ComfyUI/main.py", line 76, in <module>
    import execution
  File "/home/av/.local/share/krita/ai_diffusion/server/ComfyUI/execution.py", line 11, in <module>
    import nodes
  File "/home/av/.local/share/krita/ai_diffusion/server/ComfyUI/nodes.py", line 21, in <module>
    import comfy.samplers
  File "/home/av/.local/share/krita/ai_diffusion/server/ComfyUI/comfy/samplers.py", line 1, in <module>
    from .k_diffusion import sampling as k_diffusion_sampling
  File "/home/av/.local/share/krita/ai_diffusion/server/ComfyUI/comfy/k_diffusion/sampling.py", line 3, in <module>
    from scipy import integrate
  File "<frozen importlib._bootstrap>", line 1229, in _handle_fromlist
  File "/home/av/.local/share/krita/ai_diffusion/server/venv/lib/python3.11/site-packages/scipy/__init__.py", line 134, in __getattr__
    return _importlib.import_module(f'scipy.{name}')
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/av/.local/share/krita/ai_diffusion/server/venv/lib/python3.11/site-packages/scipy/integrate/__init__.py", line 94, in <module>
    from ._quadrature import *
  File "/home/av/.local/share/krita/ai_diffusion/server/venv/lib/python3.11/site-packages/scipy/integrate/_quadrature.py", line 9, in <module>
    from scipy.special import roots_legendre
  File "/home/av/.local/share/krita/ai_diffusion/server/venv/lib/python3.11/site-packages/scipy/special/__init__.py", line 781, in <module>
    from ._support_alternative_backends import (
  File "/home/av/.local/share/krita/ai_diffusion/server/venv/lib/python3.11/site-packages/scipy/special/_support_alternative_backends.py", line 6, in <module>
    from scipy._lib._array_api import array_namespace, is_cupy, is_torch, is_numpy
  File "/home/av/.local/share/krita/ai_diffusion/server/venv/lib/python3.11/site-packages/scipy/_lib/_array_api.py", line 15, in <module>
    from numpy.testing import assert
  File "/home/av/.local/share/krita/ai_diffusion/server/venv/lib/python3.11/site-packages/numpy/testing/__init__.py", line 11, in <module>
    from ._private.utils import *
  File "/home/av/.local/share/krita/ai_diffusion/server/venv/lib/python3.11/site-packages/numpy/testing/_private/utils.py", line 1253, in <module>
    _SUPPORTS_SVE = check_support_sve()
                    ^^^^^^^^^^^^^^^^^^^
  File "/home/av/.local/share/krita/ai_diffusion/server/venv/lib/python3.11/site-packages/numpy/testing/_private/utils.py", line 1247, in check_support_sve
    output = subprocess.run(cmd, capture_output=True, text=True)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/subprocess.py", line 550, in run
    stdout, stderr = process.communicate(input, timeout=timeout)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/subprocess.py", line 1209, in communicate
    stdout, stderr = self._communicate(input, endtime, timeout)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/subprocess.py", line 2151, in _communicate
    stdout = self._translate_newlines(stdout,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/subprocess.py", line 1086, in _translate_newlines
    data = data.decode(encoding, errors)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb0 in position 0: invalid start byte
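
For reference, the failure at the bottom of the trace is numpy's check_support_sve() decoding the output of an external command as UTF-8. Below is a minimal sketch of that failure mode, assuming only that the child process emits a byte such as 0xb0 that is not valid UTF-8; the child command here is a stand-in, not the one numpy actually runs:

import subprocess
import sys

# Stand-in child process: writes the single byte 0xb0 ('°' in Latin-1),
# which is not a valid UTF-8 start byte.
child = [sys.executable, "-c", "import sys; sys.stdout.buffer.write(b'\\xb0')"]

# Without text mode, the raw bytes come back untouched.
print(subprocess.run(child, capture_output=True).stdout)  # b'\xb0'

# Decoding the same output as UTF-8 (what text=True amounts to on a UTF-8
# locale) reproduces the error seen above.
try:
    subprocess.run(child, capture_output=True, encoding="utf-8")
except UnicodeDecodeError as err:
    print(err)  # 'utf-8' codec can't decode byte 0xb0 in position 0 ...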

@Acly (Owner) commented Mar 17, 2024

Hm, same issue as #404, but I don't know why it happens. The theory was that it is related to the Krita flatpak. But you are not using flatpak?

@AAbdulvagab (Author) commented Mar 17, 2024

No, I'm not using flatpak. I installed Krita 5.2.2-7 from the extra repository, on Manjaro Linux (6.6.19-1).

@anas1412

Same issue on Windows 10. I have a GTX 1650 as well.

@Acly (Owner) commented Apr 19, 2024

It's unlikely that it's exactly the same issue on Windows; the traceback in the initial log is a Linux-specific thing.

The only workaround I know is to run the server manually from the console. It's a very strange issue and I have not been able to reproduce it on Ubuntu or Fedora.
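
A rough sketch of that manual-launch workaround, assuming the install paths from the traceback above (adjust them to your setup); the UTF-8 locale override for child processes is only a guess at a mitigation for the decode error, not a confirmed fix:

import os
import subprocess

# Paths taken from the traceback above; adjust them to your install.
server = os.path.expanduser("~/.local/share/krita/ai_diffusion/server")
python = os.path.join(server, "venv", "bin", "python")
main_py = os.path.join(server, "ComfyUI", "main.py")

# Guess at a mitigation, not a confirmed fix: force a plain UTF-8 locale so
# that any tools the server spawns emit UTF-8/ASCII output.
env = dict(os.environ, LC_ALL="C.UTF-8", PYTHONUTF8="1")

# Run the server in the foreground from its own directory; stop it with Ctrl+C.
subprocess.run([python, main_py], cwd=os.path.join(server, "ComfyUI"), env=env)

The plugin can then be pointed at the already-running server instead of launching its own.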

@opako666

Hi

I have a problem launching the local managed server from the AI plugin.

INFO OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\opako\AppData\Roaming\krita\pykrita\ComfyUI\python\lib\site-packages\torch\lib\fbgemm.dll" or one of its dependencies.
2024-04-25 13:22:57,200 ERROR

Is there any solution?

Thanks
