Python 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from llama_cpp import Llama
>>> llm = Llama(model_path="/home/micraow/chatglm.cpp/chatglm2-ggml.bin")
llama.cpp: loading model from /home/micraow/chatglm.cpp/chatglm2-ggml.bin
error loading model: unknown (magic, version) combination: 6c6d6767, 00000002; is this really a GGML file?
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/micraow/aiwork/llama-cpp-python/llama_cpp/llama.py", line 304, in __init__
assert self.model is not None
^^^^^^^^^^^^^^^^^^^^^^
AssertionError
I ran into the error above when trying to load the chatglm2 q8_0 model with llama-cpp-python.
The same model loads fine with this project (chatglm.cpp), so the file itself should be OK. I am using the latest version of llama-cpp-python and hit this while trying to build a knowledge base with privateGPT. How can I solve this?
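For anyone debugging a similar "unknown (magic, version) combination" error: the value llama.cpp prints (`6c6d6767`) is just the first four bytes of the file read as a little-endian uint32, which corresponds to the ASCII bytes "ggml". A minimal sketch to inspect a model file's header yourself (`read_magic` is a hypothetical helper, not part of either library):

```python
import struct

def read_magic(path):
    """Read the first 4 bytes of a model file and return them both as
    raw bytes and as the little-endian uint32 that llama.cpp prints
    in its 'unknown (magic, version) combination' error."""
    with open(path, "rb") as f:
        raw = f.read(4)
    (as_u32,) = struct.unpack("<I", raw)
    return raw, as_u32

# Reproduce the header from the error message above: a file starting
# with the ASCII bytes "ggml" is read by llama.cpp as 0x6c6d6767.
if __name__ == "__main__":
    import tempfile, os
    tmp = tempfile.NamedTemporaryFile(delete=False)
    tmp.write(b"ggml" + struct.pack("<i", 2))  # fake header: magic + version 2
    tmp.close()
    raw, as_u32 = read_magic(tmp.name)
    print(raw, hex(as_u32))  # b'ggml' 0x6c6d6767
    os.unlink(tmp.name)
```

In other words, the file carries chatglm.cpp's own header layout, not one of the container versions llama.cpp knows how to parse, so llama-cpp-python cannot load it directly even though chatglm.cpp can.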