Hi, thanks for this nice package. llama.cpp recently made a breaking change to its quantization methods (PR ref).
Would it be possible to update the llama-node package so it can use GGML v3 models? All of the new GGML models being released now use this format.
If I change line 137 of llama-cpp/index.d.ts from `static load(params: Partial<LoadModel>` to `static load(params: Partial<ModelLoad>`, it works. And I can confirm that the compiled code runs GGML v3 models: nice 👍
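For clarity, a minimal sketch of the declaration fix described above. Only the type name in `load()` changes; the `ModelLoad` fields and the surrounding class shape shown here are assumptions for illustration, not the package's actual full API:

```typescript
// Sketch of the fix in llama-cpp/index.d.ts.
// ModelLoad's fields below are placeholders (assumptions).
export interface ModelLoad {
  modelPath: string;
}

export declare class LLama {
  // was: static load(params: Partial<LoadModel>): Promise<LLama>;
  // LoadModel is not an exported type, so the compiler rejects it;
  // ModelLoad is the type that actually exists:
  static load(params: Partial<ModelLoad>): Promise<LLama>;
}
```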