
CUDA Setup failed despite GPU being available #39

Open
whmc76 opened this issue May 23, 2024 · 1 comment
Comments


whmc76 commented May 23, 2024

got prompt
[rgthree] Using rgthree's optimized recursive execution.
[rgthree] First run patching recursive_output_delete_if_changed and recursive_will_execute.
[rgthree] Note: If execution seems broken due to forward ComfyUI changes, you can disable the optimization from rgthree settings in ComfyUI.
The config attributes {'decay': 0.9999, 'inv_gamma': 1.0, 'min_decay': 0.0, 'optimization_step': 37000, 'power': 0.6666666666666666, 'update_after_step': 0, 'use_ema_warmup': False} were passed to UNet2DConditionModel, but are not expected and will be ignored. Please verify your config.json configuration file.
Some weights of the model checkpoint were not used when initializing UNet2DConditionModel:
['add_embedding.linear_1.bias', 'add_embedding.linear_1.weight', 'add_embedding.linear_2.bias', 'add_embedding.linear_2.weight']
Loading pipeline components...: 0%| | 0/8 [00:00<?, ?it/s]False

===================================BUG REPORT===================================
E:\IMAGE\ComfyUI_master\python_embeded\Lib\site-packages\bitsandbytes\cuda_setup\main.py:167: UserWarning: Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

warn(msg)

CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')}
DEBUG: Possible options found for libcudart.so: set()
CUDA SETUP: PyTorch settings found: CUDA_VERSION=121, Highest Compute Capability: 8.9.
CUDA SETUP: To manually override the PyTorch CUDA version please see: https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md
CUDA SETUP: Loading binary E:\IMAGE\ComfyUI_master\python_embeded\Lib\site-packages\bitsandbytes\libbitsandbytes_cuda121.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Problem: The main issue seems to be that the main CUDA runtime library was not detected.
CUDA SETUP: Solution 1: To solve the issue the libcudart.so location needs to be added to the LD_LIBRARY_PATH variable
CUDA SETUP: Solution 1a): Find the cuda runtime library via: find / -name libcudart.so 2>/dev/null
CUDA SETUP: Solution 1b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_1a
CUDA SETUP: Solution 1c): For a permanent solution add the export from 1b into your .bashrc file, located at ~/.bashrc
CUDA SETUP: Solution 2: If no library was found in step 1a) you need to install CUDA.
CUDA SETUP: Solution 2a): Download CUDA install script: wget https://raw.githubusercontent.com/TimDettmers/bitsandbytes/main/cuda_install.sh
CUDA SETUP: Solution 2b): Install desired CUDA version to desired location. The syntax is bash cuda_install.sh CUDA_VERSION PATH_TO_INSTALL_INTO.
CUDA SETUP: Solution 2b): For example, "bash cuda_install.sh 113 ~/local/" will download CUDA 11.3 and install into the folder ~/local
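(The `argument of type 'WindowsPath' is not iterable` line above appears to be the real failure here, not the Linux-oriented hints: this bitsandbytes release's `cuda_setup` runs substring membership tests over search-path candidates, and on Windows some of those candidates are `pathlib` objects rather than strings, which don't support `in`. A minimal sketch of that failure mode, with a hypothetical path:)

```python
from pathlib import PureWindowsPath

# Membership tests work on strings, but pathlib objects define neither
# __contains__ nor __iter__, so the `in` operator raises TypeError.
candidate = PureWindowsPath(r"E:\IMAGE\ComfyUI_master\python_embeded")
try:
    "cuda" in candidate
except TypeError as exc:
    print(exc)  # argument of type 'PureWindowsPath' is not iterable
```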
Loading pipeline components...: 0%| | 0/8 [00:00<?, ?it/s]
!!! Exception during processing!!!
CUDA Setup failed despite GPU being available. Please run the following command to get more information:

    python -m bitsandbytes

    Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
    to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
    and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues

Traceback (most recent call last):
File "E:\IMAGE\ComfyUI_master\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\IMAGE\ComfyUI_master\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\IMAGE\ComfyUI_master\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\IMAGE\ComfyUI_master\ComfyUI\custom_nodes\ComfyUI-IDM-VTON\src\nodes\pipeline_loader.py", line 96, in load_pipeline
pipe = TryonPipeline.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\IMAGE\ComfyUI_master\python_embeded\Lib\site-packages\huggingface_hub\utils_validators.py", line 118, in inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "E:\IMAGE\ComfyUI_master\python_embeded\Lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 812, in from_pretrained
maybe_raise_or_warn(
File "E:\IMAGE\ComfyUI_master\python_embeded\Lib\site-packages\diffusers\pipelines\pipeline_loading_utils.py", line 254, in maybe_raise_or_warn
unwrapped_sub_model = unwrap_model(sub_model)
^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\IMAGE\ComfyUI_master\python_embeded\Lib\site-packages\diffusers\pipelines\pipeline_loading_utils.py", line 229, in unwrap_model
from peft import PeftModel
File "E:\IMAGE\ComfyUI_master\python_embeded\Lib\site-packages\peft\__init__.py", line 22, in <module>
from .auto import (
File "E:\IMAGE\ComfyUI_master\python_embeded\Lib\site-packages\peft\auto.py", line 31, in <module>
from .config import PeftConfig
File "E:\IMAGE\ComfyUI_master\python_embeded\Lib\site-packages\peft\config.py", line 23, in <module>
from .utils import CONFIG_NAME, PeftType, TaskType
File "E:\IMAGE\ComfyUI_master\python_embeded\Lib\site-packages\peft\utils\__init__.py", line 21, in <module>
from .loftq_utils import replace_lora_weights_loftq
File "E:\IMAGE\ComfyUI_master\python_embeded\Lib\site-packages\peft\utils\loftq_utils.py", line 35, in <module>
import bitsandbytes as bnb
File "E:\IMAGE\ComfyUI_master\python_embeded\Lib\site-packages\bitsandbytes\__init__.py", line 6, in <module>
from . import cuda_setup, utils, research
File "E:\IMAGE\ComfyUI_master\python_embeded\Lib\site-packages\bitsandbytes\research\__init__.py", line 1, in <module>
from . import nn
File "E:\IMAGE\ComfyUI_master\python_embeded\Lib\site-packages\bitsandbytes\research\nn\__init__.py", line 1, in <module>
from .modules import LinearFP8Mixed, LinearFP8Global
File "E:\IMAGE\ComfyUI_master\python_embeded\Lib\site-packages\bitsandbytes\research\nn\modules.py", line 8, in <module>
from bitsandbytes.optim import GlobalOptimManager
File "E:\IMAGE\ComfyUI_master\python_embeded\Lib\site-packages\bitsandbytes\optim\__init__.py", line 6, in <module>
from bitsandbytes.cextension import COMPILED_WITH_CUDA
File "E:\IMAGE\ComfyUI_master\python_embeded\Lib\site-packages\bitsandbytes\cextension.py", line 20, in <module>
raise RuntimeError('''
RuntimeError:
CUDA Setup failed despite GPU being available. Please run the following command to get more information:

    python -m bitsandbytes

    Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
    to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
    and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues

[profiler] #11 PipelineLoader: 23.8465 seconds, total 23.8465 seconds
[profiler] #10 IDM-VTON: 0.001 seconds, total 23.8474 seconds(#11 #14 #13 #16 #17)
[profiler] #18 SaveImage: 0.0006 seconds, total 23.848 seconds(#10)
Prompt executed in 23.85 seconds
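(The `LD_LIBRARY_PATH` advice in the log is Linux-only; on this Windows embedded Python, the CUDA runtime DLL is resolved through the process `PATH` and, on Python 3.8+, directories registered with `os.add_dll_directory`. A sketch of a possible workaround, run before anything imports bitsandbytes; the CUDA toolkit directory below is hypothetical and must be adjusted to the real install:)

```python
import os

# Hypothetical CUDA toolkit bin directory; adjust to the actual install.
CUDA_BIN = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\bin"

def prepend_to_path(directory: str, path: str) -> str:
    """Return a PATH-style string with `directory` first and no duplicates."""
    parts = [p for p in path.split(os.pathsep) if p] if path else []
    if directory in parts:
        parts.remove(directory)
    return os.pathsep.join([directory] + parts)

# Apply before `import bitsandbytes` so its loader can locate cudart:
os.environ["PATH"] = prepend_to_path(CUDA_BIN, os.environ.get("PATH", ""))
if hasattr(os, "add_dll_directory") and os.path.isdir(CUDA_BIN):
    os.add_dll_directory(CUDA_BIN)  # Windows-only DLL search hook (3.8+)
```

This only helps once the `WindowsPath` bug itself is gone (e.g. after upgrading bitsandbytes or switching to a Windows-compatible build), since the setup code crashes before it ever searches the path.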

@Poukpalaova

I got the same error. It doesn't happen on BF16, but then I get an out-of-memory error trying to allocate 6 MB when I have 24 GB free, lol.
