On macOS (M2), I upgraded the Python package with:

```sh
CMAKE_ARGS="-DGGML_METAL=ON" pip install -U chatglm-cpp
```

Running GLM4:

```python
import chatglm_cpp

pipeline = chatglm_cpp.Pipeline("../models/chatglm4-ggml.bin")
pipeline.chat([chatglm_cpp.ChatMessage(role="user", content="你好")])
```

fails with:

```
GGML_ASSERT: /private/var/folders/hp/n4snp8jx0vs0dmq9165t74xr0000gn/T/pip-install-rrd53153/chatglm-cpp_777c14ecc59a4daba6aa92a07740ca19/third_party/ggml/src/ggml-metal.m:1453: false
```
First confirm that ChatGLM3-6B still runs. If it does, the failure is most likely due to insufficient GPU memory; try capping `max_length`:

```python
>>> import chatglm_cpp
>>> pipeline = chatglm_cpp.Pipeline("../models/chatglm4-ggml.bin", max_length=2048)
>>> pipeline.chat([chatglm_cpp.ChatMessage(role="user", content="你好")])
```
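To see why capping `max_length` helps: the KV cache grows linearly with the context length, so halving `max_length` halves that allocation. A back-of-envelope sketch of the scaling (the layer count, KV-head count, and head dimension below are illustrative assumptions, not confirmed GLM4 values):

```python
# Rough KV-cache size estimate: why lowering max_length frees GPU memory.
# num_layers, num_kv_heads, and head_dim are illustrative assumptions,
# NOT confirmed GLM4 model dimensions.
def kv_cache_bytes(max_length, num_layers=40, num_kv_heads=2,
                   head_dim=128, bytes_per_elem=2):
    """Bytes needed for the K and V caches at fp16 (2 bytes/element)."""
    # 2 tensors (K and V) per layer, each of shape
    # [max_length, num_kv_heads, head_dim]
    return 2 * num_layers * max_length * num_kv_heads * head_dim * bytes_per_elem

for n in (8192, 4096, 2048):
    print(f"max_length={n}: {kv_cache_bytes(n) / 2**20:.0f} MiB")
# max_length=8192: 320 MiB
# max_length=4096: 160 MiB
# max_length=2048: 80 MiB
```

Whatever the true model dimensions are, the linear relationship holds, which is why a smaller `max_length` can get the model under the Metal memory limit.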