
error loading model #2135

Closed
milobestcat opened this issue May 17, 2023 · 6 comments
Labels: bug (Something isn't working), stale

Comments

@milobestcat

Describe the bug

Hi, I tried to follow the manual installation steps but I couldn't get the server to run. After some online research, I thought the problem might be the PyTorch installation, so I installed it with: pip install -U --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
Now the server runs, and I verified it works with some simple LLM models.
Then I tested it with a ggml 13B model (eachadea_ggml-vicuna-13b-1.1/ggml-old-vic13b-q4_0.bin) and got "error loading model: this format is no longer supported"; the full output is in the Logs section below.

Is there an existing issue for this?

  • I have searched the existing issues

Reproduction

To reproduce, follow all of the manual installation steps in the instructions, but replace the PyTorch installation step with: pip install -U --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html

Screenshot

No response

Logs

llama.cpp: loading model from models/eachadea_ggml-vicuna-13b-1.1/ggml-old-vic13b-q4_0.bin
llama_model_load_internal: format     = ggjt v1 (pre #1405)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 2048
llama_model_load_internal: n_embd     = 5120
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 40
llama_model_load_internal: n_layer    = 40
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: ftype      = 4 (mostly Q4_1, some F16)
llama_model_load_internal: n_ff       = 13824
llama_model_load_internal: n_parts    = 1
llama_model_load_internal: model size = 13B
error loading model: this format is no longer supported (see https://github.com/ggerganov/llama.cpp/pull/1305)
llama_init_from_file: failed to load model
Exception ignored in: <function LlamaCppModel.__del__ at 0x13c546200>
Traceback (most recent call last):
  File "/Users/yingxiao.kong/text-generation-webui/modules/llamacpp_model.py", line 23, in __del__
    self.model.__del__()
AttributeError: 'LlamaCppModel' object has no attribute 'model'
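
As a side note, the trailing AttributeError is a secondary symptom: loading fails before self.model is ever assigned, so LlamaCppModel.__del__ then trips over the missing attribute. A minimal defensive sketch of the destructor (assuming only the class and attribute names visible in the traceback above, not the rest of modules/llamacpp_model.py):

```python
class LlamaCppModel:
    def __del__(self):
        # Loading can fail before self.model is assigned; guard so the
        # destructor doesn't raise a second, misleading AttributeError
        # on top of the real "error loading model" failure.
        model = getattr(self, "model", None)
        if model is not None:
            model.__del__()
```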

System Info

I'm using a Mac with an M1 Pro chip and 32 GB of RAM.
macOS: 13.3.1
milobestcat added the bug (Something isn't working) label May 17, 2023
@cnodon commented May 18, 2023

same issue here

@strnad (Contributor) commented May 18, 2023

You have an old version of the ggml model; download a new one.

error loading model: this format is no longer supported (see ggerganov/llama.cpp#1305)
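
If you're not sure whether a given .bin file predates the change, you can inspect its 8-byte header. A small sketch, assuming the magic/version constants llama.cpp used at the time (LLAMA_FILE_MAGIC_GGJT = 0x67676a74, with files older than the #1305 quantization change reporting ggjt v1, as in the log above); the path is the one from the original report:

```python
import struct

GGJT_MAGIC = 0x67676a74  # "ggjt" as a little-endian uint32 (from llama.cpp)

def ggml_format(path):
    """Print the ggml container magic/version of a model file."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
        if magic != GGJT_MAGIC:
            print(f"magic 0x{magic:08x}: not a ggjt file (an even older ggml/ggmf format)")
            return
        (version,) = struct.unpack("<I", f.read(4))
        print(f"ggjt v{version}")
        if version < 2:
            print("pre-#1305 quantization format: re-download or re-quantize the model")

ggml_format("models/eachadea_ggml-vicuna-13b-1.1/ggml-old-vic13b-q4_0.bin")
```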

@valdesguefa

@strnad Which version can we download? I keep downloading models and still get this error.

@gandolfi974

same problem with "WizardLM-13B-Uncensored-GGML"

@YingxiaoKong

@gandolfi974 @valdesguefa Try downloading stable-vicuna-13B.ggml.q5_1.bin.

github-actions bot added the stale label Jul 22, 2023
@github-actions

This issue has been closed due to inactivity for 30 days. If you believe it is still relevant, please leave a comment below.
