I get exit code -1073741819 (0xC0000005) while trying to run the basic gpt4all script #2128
I'm going to need more information to debug this. If you download this file and install it with You should also install procdump. You'll need to Then you should be able to run If your script crashes, procdump should generate a minidump with a .DMP extension in your current directory. If you ZIP that up and send it to me via e-mail (jared@nomic.ai) or Discord.
I tried to run
P.S. I'm running procdump as .\procdump.exe
The problem is that this is the OP's copy of orca-mini-3b-gguf2-q4_0.gguf. It's way too small (22 MiB instead of 1.8 GiB) and is crashing llama.cpp. I can reproduce the crash on Linux as well. The bug is twofold:
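The truncated-file diagnosis above can be checked by hand: a valid GGUF model begins with the 4-byte ASCII magic `GGUF`, and a real orca-mini-3b-gguf2-q4_0.gguf should be on the order of 1.8 GiB, not 22 MiB. A minimal sanity-check sketch (the function name and the size threshold are illustrative assumptions, not part of gpt4all's API):

```python
import os

GGUF_MAGIC = b"GGUF"  # GGUF files begin with this 4-byte ASCII magic


def looks_like_valid_gguf(path, min_size=100 * 1024 * 1024):
    """Cheap sanity check: correct magic bytes and a plausible file size.

    min_size is an illustrative threshold (a 3B q4_0 model is ~1.8 GiB);
    it is not an official limit from the GGUF spec.
    """
    if os.path.getsize(path) < min_size:
        return False
    with open(path, "rb") as f:
        return f.read(4) == GGUF_MAGIC
```

A check like this would have flagged the OP's 22 MiB file before it ever reached llama.cpp.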
As I discovered in #2154, this class of problem is stupidly easy to reproduce with an empty cache and gpt4all 2.2.1.post1:
The segfault will still be possible if you pass in a bad model, but writing an incomplete model under the final name should be robustly avoided now.
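The standard way to avoid leaving an incomplete model under the final name is to download to a temporary name, verify completeness, and only then atomically rename. A sketch of that pattern (function and variable names are hypothetical, not gpt4all's actual code):

```python
import os


def finalize_download(part_path, final_path, expected_size):
    """Promote a finished .part download to its final name.

    Only rename once the size matches what the server advertised
    (e.g. the Content-Length header), so a partially downloaded
    file can never be picked up later as a complete model.
    """
    actual = os.path.getsize(part_path)
    if actual != expected_size:
        raise IOError(f"incomplete download: {actual} of {expected_size} bytes")
    os.replace(part_path, final_path)  # atomic on the same filesystem
```

With this scheme, a crash or interrupted download leaves only a `.part` file behind, which the loader never touches.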
The underlying segfault is most likely fixed by ggerganov/llama.cpp#6885. We will update our llama.cpp dependency at some point, and this issue should be fixed at all levels.
Example Code
And this is what I get:
Process finished with exit code -1073741819 (0xC0000005)
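That exit code is a Windows NTSTATUS value printed as a signed 32-bit integer: -1073741819 reinterpreted as unsigned is 0xC0000005, STATUS_ACCESS_VIOLATION, i.e. a segfault. The conversion can be verified directly:

```python
exit_code = -1073741819

# Reinterpret the signed 32-bit exit code as an unsigned NTSTATUS value.
status = exit_code & 0xFFFFFFFF
print(hex(status))  # 0xc0000005 (STATUS_ACCESS_VIOLATION, a segfault)
```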
Your Environment