My computer is a Mac M2. When running the ComfyUI-KwaiKolorsWrapper plugin:
1. fp16 (~12 GB) runs out of VRAM;
2. quant8 (~8-9 GB) and quant4 (~4-5 GB) fail with the following error:
!!! Exception during processing!!! Torch not compiled with CUDA enabled
Traceback (most recent call last):
File "/Users/habhy/Sites/ComfyUI/execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/habhy/Sites/ComfyUI/execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/habhy/Sites/ComfyUI/execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/habhy/Sites/ComfyUI/custom_nodes/ComfyUI-KwaiKolorsWrapper/nodes.py", line 188, in loadmodel
text_encoder.quantize(4)
File "/Users/habhy/Sites/ComfyUI/custom_nodes/ComfyUI-KwaiKolorsWrapper/kolors/models/modeling_chatglm.py", line 852, in quantize
quantize(self.encoder, weight_bit_width)
File "/Users/habhy/Sites/ComfyUI/custom_nodes/ComfyUI-KwaiKolorsWrapper/kolors/models/quantization.py", line 157, in quantize
weight=layer.self_attention.query_key_value.weight.to(torch.cuda.current_device()),
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/habhy/Sites/ComfyUI/myenv/lib/python3.11/site-packages/torch/cuda/__init__.py", line 778, in current_device
_lazy_init()
File "/Users/habhy/Sites/ComfyUI/myenv/lib/python3.11/site-packages/torch/cuda/__init__.py", line 284, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
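The traceback shows that the plugin's quantization path calls `torch.cuda.current_device()` unconditionally, which raises on Apple Silicon because the macOS PyTorch build ships without CUDA (the GPU backend there is MPS). As a minimal sketch of device-agnostic selection, with `pick_device` being a hypothetical helper rather than anything in the plugin, the usual fallback order looks like:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the preferred torch device string.

    Mirrors the common PyTorch pattern:
        torch.cuda.is_available()         -> "cuda"
        torch.backends.mps.is_available() -> "mps"
        otherwise                         -> "cpu"
    The availability flags are passed in explicitly here so the
    logic can be shown without importing torch.
    """
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# On an M2 Mac, CUDA is unavailable but MPS is:
print(pick_device(cuda_available=False, mps_available=True))  # → mps
```

Patching `quantization.py` to move weights to such a device instead of `torch.cuda.current_device()` would avoid the assertion itself, though the quantization kernels used by the ChatGLM text encoder may still assume CUDA, in which case quant8/quant4 simply may not be supported on MPS.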