
Error occurred when executing DownloadAndLoadChatGLM3: Torch not compiled with CUDA enabled. Looking for a fix #32

Open
BannyLon opened this issue Jul 24, 2024 · 2 comments


@BannyLon

My machine is a Mac with an M2 chip. When running the ComfyUI-KwaiKolorsWrapper plugin:
1. fp16 - 12 GB runs out of VRAM;
2. quant8 - 8-9 GB and quant4 - 4-5 GB fail with the following error:
!!! Exception during processing!!! Torch not compiled with CUDA enabled
Traceback (most recent call last):
File "/Users/habhy/Sites/ComfyUI/execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/habhy/Sites/ComfyUI/execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/habhy/Sites/ComfyUI/execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/habhy/Sites/ComfyUI/custom_nodes/ComfyUI-KwaiKolorsWrapper/nodes.py", line 188, in loadmodel
text_encoder.quantize(4)
File "/Users/habhy/Sites/ComfyUI/custom_nodes/ComfyUI-KwaiKolorsWrapper/kolors/models/modeling_chatglm.py", line 852, in quantize
quantize(self.encoder, weight_bit_width)
File "/Users/habhy/Sites/ComfyUI/custom_nodes/ComfyUI-KwaiKolorsWrapper/kolors/models/quantization.py", line 157, in quantize
weight=layer.self_attention.query_key_value.weight.to(torch.cuda.current_device()),
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/habhy/Sites/ComfyUI/myenv/lib/python3.11/site-packages/torch/cuda/init.py", line 778, in current_device
_lazy_init()
File "/Users/habhy/Sites/ComfyUI/myenv/lib/python3.11/site-packages/torch/cuda/init.py", line 284, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
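
For context, the traceback shows why this fails on an M2: the quant4/quant8 path in quantization.py hardcodes torch.cuda.current_device(), which raises AssertionError on PyTorch builds that ship without CUDA (Apple Silicon exposes its GPU through the MPS backend instead). A minimal sketch of the kind of device-agnostic lookup the code would need, where pick_device is a hypothetical helper, not part of the plugin:

import torch

def pick_device() -> torch.device:
    # Prefer CUDA when this PyTorch build supports it and a GPU is present.
    if torch.cuda.is_available():
        return torch.device("cuda", torch.cuda.current_device())
    # Apple Silicon (M1/M2) GPUs are exposed through the MPS backend.
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

# e.g. weight = layer.self_attention.query_key_value.weight.to(pick_device())

Note this only fixes the device lookup; the ChatGLM quantization kernels themselves appear to be CUDA-only, so quant4/quant8 likely cannot work on Apple Silicon regardless.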

@BannyLon
Author

fp16 works now, but a single 1024 image takes 6 minutes 20 seconds to generate. With quant8 - 8-9 GB and quant4 - 4-5 GB I still get the AssertionError: Torch not compiled with CUDA enabled.

Also, I'd like to ask: is this plugin unable to use the latest Kolors IPAdapter?

@foggyghost0

I also have the same error
