
I can run Kolors on a quant4 setup, but loading the chatglm3-4bit.safetensors model separately gives an error! #10

Open
klossm opened this issue Jul 7, 2024 · 0 comments

klossm commented Jul 7, 2024

Error when loading chatglm3-4bit.safetensors:

Traceback (most recent call last):
  File "L:\stable-diffusion\extensions\sd-webui-comfyui\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "L:\stable-diffusion\extensions\sd-webui-comfyui\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "L:\stable-diffusion\extensions\sd-webui-comfyui\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "L:\stable-diffusion\extensions\sd-webui-comfyui\ComfyUI\custom_nodes\ComfyUI-KwaiKolorsWrapper\nodes.py", line 122, in loadmodel
    text_encoder.quantize(4)
  File "L:\stable-diffusion\extensions\sd-webui-comfyui\ComfyUI\custom_nodes\ComfyUI-KwaiKolorsWrapper\kolors\models\modeling_chatglm.py", line 852, in quantize
    quantize(self.encoder, weight_bit_width)
  File "L:\stable-diffusion\extensions\sd-webui-comfyui\ComfyUI\custom_nodes\ComfyUI-KwaiKolorsWrapper\kolors\models\quantization.py", line 155, in quantize
    layer.self_attention.query_key_value = QuantizedLinear(
  File "L:\stable-diffusion\extensions\sd-webui-comfyui\ComfyUI\custom_nodes\ComfyUI-KwaiKolorsWrapper\kolors\models\quantization.py", line 141, in __init__
    self.weight = Parameter(self.weight.to(device), requires_grad=False)
  File "L:\stable-diffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1726, in __setattr__
    self.register_parameter(name, value)
  File "L:\stable-diffusion\venv\lib\site-packages\accelerate\big_modeling.py", line 105, in register_empty_parameter
    module._parameters[name] = param_cls(module._parameters[name].to(device), **kwargs)
  File "L:\stable-diffusion\venv\lib\site-packages\torch\nn\parameter.py", line 40, in __new__
    return torch.Tensor._make_subclass(cls, data, requires_grad)
RuntimeError: Only Tensors of floating point and complex dtype can require gradients
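Reading the traceback, the failure path appears to be: the node's loadmodel still calls text_encoder.quantize(4) on the already 4-bit checkpoint, QuantizedLinear.__init__ assigns an integer weight as Parameter(..., requires_grad=False), and accelerate's register_empty_parameter hook then re-creates that parameter, seemingly without carrying over requires_grad=False, so Parameter falls back to its default requires_grad=True, which PyTorch rejects for non-floating-point dtypes. A minimal sketch of that last step (the int8 tensor below is only a stand-in for a quantized ChatGLM weight, not the real checkpoint contents):

# Minimal sketch of the dtype restriction behind the RuntimeError above.
import torch
from torch.nn import Parameter

quantized_weight = torch.zeros(8, 8, dtype=torch.int8)  # stand-in for a quantized weight

# What QuantizedLinear asks for works fine:
ok = Parameter(quantized_weight, requires_grad=False)

# But if requires_grad=False is dropped along the way, Parameter defaults to
# requires_grad=True and PyTorch rejects the integer dtype:
try:
    bad = Parameter(quantized_weight)
except RuntimeError as err:
    print(err)  # Only Tensors of floating point and complex dtype can require gradients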

klossm changed the title from "RuntimeError: Only Tensors of floating point and complex dtype can require gradients" to "Error when loading chatglm3-4bit.safetensors" on Jul 8, 2024
klossm changed the title from "Error when loading chatglm3-4bit.safetensors" to "I can run Kolors on a quant4 setup, but loading the chatglm3-4bit.safetensors model separately gives an error!" on Jul 8, 2024