
hardcoded path in dll? User:LLAMA_ASSERT: E:\s\repos\llama.cpp\llama.cpp:1343: !!kv_self.ctx #67

Closed
radimbrixi opened this issue Jul 27, 2023 · 4 comments
Labels: bug (Something isn't working)

@radimbrixi

Steps to reproduce:
Run the example project and choose 1.
Load the llama-2-7b-guanaco-qlora.ggmlv3.q8_0.bin file.
The code breaks with a non-zero exit code; please see the output below:

Note the hardcoded path to llama.cpp in the DLL's assert output:
Please choose the version you want to test:
0. old version (for v0.3.0 or earlier version)
1. new version (for versions after v0.4.0)

Your Choice: 1

================LLamaSharp Examples (New Version)==================

Please input a number to choose an example to run:
0: Run a chat session without stripping the role names.
1: Run a chat session with the role names stripped.
2: Interactive mode chat by using executor.
3: Instruct mode chat by using executor.
4: Stateless mode chat by using executor.
5: Load and save chat session.
6: Load and save state of model and executor.
7: Get embeddings from LLama model.
8: Quantize the model.

Your choice: 2
Please input your model path: G:\Temp4\llama-2-7B-Guanaco-QLoRA-GGML\llama-2-7b-guanaco-qlora.ggmlv3.q8_0.bin
llama.cpp: loading model from G:\Temp4\llama-2-7B-Guanaco-QLoRA-GGML\llama-2-7b-guanaco-qlora.ggmlv3.q8_0.bin
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 256
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: freq_base = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype = 7 (mostly Q8_0)
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: model size = 7B
The executor has been enabled. In this example, the prompt is printed, the maximum tokens is set to 128 and the context size is 256. (an example for small scale usage)
Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision.

User: Hello, Bob.
Bob: Hello. How may I help you today?
User: Please tell me the largest city in Europe.
Bob: Sure. The largest city in Europe is Moscow, the capital of Russia.
User:LLAMA_ASSERT: E:\s\repos\llama.cpp\llama.cpp:1343: !!kv_self.ctx

C:\Users\Radim\Documents\LLamaSharp\LLama.Examples\bin\Debug\net6.0\LLama.Examples.exe (process 20504) exited with code -1073740791.
To automatically close the console when debugging stops, enable Tools->Options->Debugging->Automatically close the console when debugging stops.
Press any key to close this window . . .

Please note that I'm new to this and I'm not sure whether a v2 model is supported, but the hardcoded path in the DLL doesn't look right... I haven't yet found out how to recompile the DLLs.
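For context, the interactive-executor example boils down to roughly the following. This is a minimal sketch based on the 0.4.x LLamaSharp README, using the context size (256) and token limit (128) printed above; exact type and parameter names may differ between versions:

```csharp
using System;
using System.Collections.Generic;
using LLama;
using LLama.Common;

// Model path and settings taken from the session above (illustrative).
string modelPath = @"G:\Temp4\llama-2-7B-Guanaco-QLoRA-GGML\llama-2-7b-guanaco-qlora.ggmlv3.q8_0.bin";

// Load the model with a 256-token context, then wrap it in an interactive executor.
var model = new LLamaModel(new ModelParams(modelPath, contextSize: 256));
var executor = new InteractiveExecutor(model);
var session = new ChatSession(executor);

var prompt = "Transcript of a dialog, where the User interacts with an Assistant named Bob. [...]";

// Stream the response, stopping at the anti-prompt or after 128 tokens.
foreach (var text in session.Chat(prompt, new InferenceParams
{
    MaxTokens = 128,
    AntiPrompts = new List<string> { "User:" }
}))
{
    Console.Write(text);
}
```

Since the example prints its initial prompt, the dialog shown above is the prompt itself, so it appears the crash happens during the very first inference call.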

@cedgestioneambiente

Same issue.

@martindevans
Member

I don't think the hardcoded path is a problem; it's just the path to the file on the build server.

The !!kv_self.ctx error is something I saw a few times while working on the new loading system; it usually seemed to happen when using an incompatible DLL.
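One thing that can cause an incompatible DLL: the managed LLamaSharp package and the native backend package need to be on matching versions, otherwise the managed code calls into a llama.cpp build with a different layout and trips asserts like this one. If your version uses the split backend packages, a matched pair in the .csproj looks roughly like this (versions here are illustrative):

```xml
<ItemGroup>
  <!-- Keep the managed library and the native backend on the same version. -->
  <PackageReference Include="LLamaSharp" Version="0.4.2" />
  <PackageReference Include="LLamaSharp.Backend.Cpu" Version="0.4.2" />
</ItemGroup>
```

Upgrading both in lockstep (e.g. `dotnet add package LLamaSharp --version 0.4.2` plus the matching backend package) is usually enough.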

@martindevans
Member

Is this fixed with 0.4.2?

@martindevans
Member

I think this has been fixed for a while, so I'll close it. Please don't hesitate to re-open it if it's still a problem!

@martindevans added the bug label on Nov 6, 2023