internlm-chat-7b can convert to gguf but can not run #3551

Closed
lizhiling12345 opened this issue Oct 9, 2023 · 1 comment
Comments

@lizhiling12345

internlm-chat-7b is similar to llama; the difference is the extra attention bias tensors self_attn.q_proj.bias, self_attn.k_proj.bias and self_attn.o_proj.bias.
It can be converted to gguf, but it cannot run; loading fails with the error:
error loading model: done_getting_tensors: wrong number of tensors; expected 419, got 291
Is there a way to solve this?
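
For reference, a quick way to verify what the conversion actually wrote is to list the tensor names in the .gguf file. Below is a minimal sketch, assuming the gguf package from llama.cpp's gguf-py is installed; the file path is a hypothetical placeholder.

```python
# Minimal diagnostic sketch: list the tensors in the converted file and
# count the per-layer bias tensors that a plain llama graph does not load.
# Assumes the `gguf` package (llama.cpp gguf-py) is installed; the path
# "internlm-chat-7b.gguf" is a hypothetical placeholder.
from gguf import GGUFReader

reader = GGUFReader("internlm-chat-7b.gguf")

bias_tensors = [t.name for t in reader.tensors if t.name.endswith(".bias")]
print(f"total tensors: {len(reader.tensors)}")
print(f"bias tensors:  {len(bias_tensors)}")
for name in bias_tensors[:8]:
    print("  ", name)
```

Such a listing would also explain the numbers in the error: a plain llama-7b GGUF has 291 tensors (9 per layer × 32 layers, plus 3), and assuming 32 layers here, the extra 128 tensors (419 − 291) work out to four per layer, matching the attention bias tensors listed above (plus, presumably, self_attn.v_proj.bias), which the llama architecture in llama.cpp does not create when loading.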


github-actions bot commented Apr 4, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions bot closed this as completed on Apr 4, 2024