Illegal instruction (core dumped) #203
Comments
How did you end up with this? Provide more information. |
I just followed the instructions in the README and all the steps went well. I ran the code on an old Xeon CPU (2012); could something be missing on the CPU? I'm running with 8 cores + 16 GB RAM, in Docker with the python:3 image. |
Yeah, same here. Followed the instructions with the sample. EDIT: After waiting for a bit, it worked, but it was very slow and I did get some kind of error. Here are the results:
```
llama.cpp: loading model from D:\PrivateGPT\models\ggml-model-q4_0.bin
llama.cpp: can't use mmap because tensors are not aligned; convert to new format to avoid this
llama_model_load_internal: format = 'ggml' (old version with low tokenizer quality and no mmap support)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 1000
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size = 4113748.20 KB
llama_model_load_internal: mem required = 5809.33 MB (+ 2052.00 MB per state)
...................................................................................................
.
llama_init_from_file: kv self size = 1000.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
Using embedded DuckDB with persistence: data will be stored in: db
gptj_model_load: loading model from 'D:\PrivateGPT\models\ggml-gpt4all-j-v1.3-groovy.bin' - please wait ...
gptj_model_load: n_vocab = 50400
gptj_model_load: n_ctx = 2048
gptj_model_load: n_embd = 4096
gptj_model_load: n_head = 16
gptj_model_load: n_layer = 28
gptj_model_load: n_rot = 64
gptj_model_load: f16 = 2
gptj_model_load: ggml ctx size = 4505.45 MB
gptj_model_load: memory_size = 896.00 MB, n_mem = 57344
gptj_model_load: ................................... done
gptj_model_load: model size = 3609.38 MB / num tensors = 285
Enter a query: What did the president say about russia?
llama_print_timings: load time = 504.66 ms
llama_print_timings: sample time = 0.00 ms / 1 runs ( 0.00 ms per run)
llama_print_timings: prompt eval time = 646.91 ms / 10 tokens ( 64.69 ms per token)
llama_print_timings: eval time = 0.00 ms / 1 runs ( 0.00 ms per run)
llama_print_timings: total time = 650.76 ms
gpt_tokenize: unknown token 'Γ'
gpt_tokenize: unknown token 'Ç'
gpt_tokenize: unknown token 'Ö'
gpt_tokenize: unknown token 'Γ'
gpt_tokenize: unknown token 'Ç'
gpt_tokenize: unknown token 'Ö'
gpt_tokenize: unknown token 'Γ'
gpt_tokenize: unknown token 'Ç'
gpt_tokenize: unknown token 'Ö'
gpt_tokenize: unknown token 'Γ'
gpt_tokenize: unknown token 'Ç'
gpt_tokenize: unknown token '£'
gpt_tokenize: unknown token 'Γ'
gpt_tokenize: unknown token 'Ç'
gpt_tokenize: unknown token '¥'
gpt_tokenize: unknown token 'Γ'
gpt_tokenize: unknown token 'Ç'
gpt_tokenize: unknown token 'Ö'
gpt_tokenize: unknown token 'Γ'
gpt_tokenize: unknown token 'Ç'
gpt_tokenize: unknown token 'Ö'
gpt_tokenize: unknown token 'Γ'
gpt_tokenize: unknown token 'Ç'
gpt_tokenize: unknown token 'Ö'
gpt_tokenize: unknown token 'Γ'
gpt_tokenize: unknown token 'Ç'
We are choking off Russia’s access to technology that will sap its economic strength and weaken its military for years to come.
Tonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more.
> source_documents\state_of_the_union.txt:
While it shouldn’t have taken something so terrible for people around the world to see what’s at stake now everyone sees it clearly.
We see the unity among leaders of nations and a more unified Europe a more unified West. And we see unity among the people who are gathering in cities in large crowds around the world even in Russia to demonstrate their support for Ukraine.
> source_documents\state_of_the_union.txt:
Please rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people.
Throughout our history we’ve learned this lesson when dictators do not pay a price for their aggression they cause more chaos.
They keep moving.
And the costs and the threats to America and the world keep rising.
That’s why the NATO Alliance was created to secure peace and stability in Europe after World War 2.
> source_documents\state_of_the_union.txt:
We prepared extensively and carefully.
We spent months building a coalition of other freedom-loving nations from Europe and the Americas to Asia and Africa to confront Putin.
I spent countless hours unifying our European allies. We shared with the world in advance what we knew Putin was planning and precisely how he would try to falsely justify his aggression.
We countered Russia’s lies with truth.
And now that he has acted the free world is holding him accountable.
Enter a query:
```
|
I am getting the same error.
Then, looking at dmesg, I saw the following: |
The issue is not about GGML, since the most recent release of llama-cpp-python on pip is not the most upstream one. If you can run it with valgrind (which will significantly reduce the speed), we can help better. |
I did try running valgrind with the latest code; any pointers will help. I'm trying to run on an Ubuntu VM with Python 3.10: valgrind python3.10 privateGPT.py |
Oh well, it feels like it's about memory, though I'm not sure about that. Maybe you should try to compile llama-cpp-python and langchain on your local machine (see the sketch below). Check my repos for the Jupyter Notebook that contains the code that can guide you...
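A minimal sketch of what such a local rebuild might look like, assuming llama-cpp-python is rebuilt from source through pip; the FORCE_CMAKE/CMAKE_ARGS variables and the LLAMA_* CMake flags are assumptions that may differ between versions:
```python
# Hypothetical sketch: rebuild llama-cpp-python from source so it is compiled
# for the local CPU instead of installed from a prebuilt wheel.
import os
import subprocess
import sys

env = dict(os.environ)
env["FORCE_CMAKE"] = "1"                  # assumed: force a source build via CMake
env["CMAKE_ARGS"] = "-DLLAMA_AVX2=OFF"    # assumed flag: drop AVX2 on CPUs that lack it

subprocess.check_call(
    [sys.executable, "-m", "pip", "install", "--force-reinstall",
     "--no-cache-dir", "llama-cpp-python"],
    env=env,
)
```
|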
Indeed, gentlemen, after having unsuccessfully tried to compile this project on Windows, I have switched to WSL and am now getting an "Illegal instruction" error - a HW error for sure, although it's quite a new processor: an Intel Celeron J3455 @ 1.5 GHz. |
I also get `Illegal instruction` on several machines. I don't think it's a hardware resource issue, as it fails the same way on a 24-core Intel Xeon E5-2620 with 32 GB RAM. I'm using Python 3.11.3 in pyenv.
|
@jonarmani I have a similar CPU to yours. |
I ran into this on an older machine, but when I tried it on a newer CPU, it worked successfully. I think the difference is whether the CPU supports the AVX2 instruction set. To quickly check for this:
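A minimal sketch of such a check, assuming a Linux system where the CPU flags are exposed through /proc/cpuinfo:
```python
# Quick AVX2 check on Linux: scan the CPU flag list for the "avx2" flag.
# Assumes /proc/cpuinfo exists, so this will not work on Windows or macOS.
with open("/proc/cpuinfo") as f:
    flags = f.read().lower()

print("AVX2 supported" if "avx2" in flags else "AVX2 NOT supported")
```
On Windows, a tool such as CPU-Z or the CPU's spec sheet can confirm the same.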
|
Your CPU needs to support AVX2 instructions, otherwise executing the privateGPT script will give you the error "Illegal instruction (core dumped)". Tested on two of my computers: one has an AMD CPU with only the AVX instruction set (Illegal instruction (core dumped)), and the other an Intel CPU with both AVX and AVX2 (it worked). Solutions: https://tech.amikelive.com/node-887/how-to-resolve-error-illegal-instruction-core-dumped-when-running-import-tensorflow-in-a-python-program/ |
Brilliant, many thanks. However, this project does not rely on TensorFlow, so it would be great to know what other libraries could possibly be causing this error and which ones we should downgrade.
|
Following this guide nomic-ai/pygpt4all#71 |
I've tried with all compiler options set to |
Did you try updating pygptj as well? |
Got the same error on an M1 Mac: ~/git/privateGPT main ✗ 4h41m ✖ ◒ ⍉ PyGPT-J
|
But again, I've modified its […]. I'm using pyenv, which is new to me, so maybe I'm doing something weird there. But I'm in the activated environment the whole time, not issuing any […].
|
Oh, it's definitely that. I ran this through gdb: Thread 1 "python" received signal SIGILL, Illegal instruction. |
Tried both options but am still getting the error. I am running privateGPT in a Manjaro VirtualBox VM with 4 CPU cores assigned and 10 GB of RAM. The host CPU is an Intel Core i9 with vPro. |
I am also observing: Illegal instruction (core dumped). Note: this is in a RHEL container. +1 |
Hi all, I think I have found a solution - more of a workaround. The root cause seems to be that Hyper-V is somehow enabled (albeit surreptitiously) on my Windows 10 host, due to which the VirtualBox guest (Manjaro Linux) doesn't seem to have access to CPU instructions like AVX2. So I migrated to WSL2 Ubuntu 22.04, and from there I was able to build this app out of the box. Suits my needs for the time being. PS: Not sure why Hyper-V is still "active" despite turning it off explicitly. Also, I was not aware that VirtualBox 7.0.x indeed works with Hyper-V on; with 6.x versions, I vividly remember I was not able to run guests with Hyper-V on. |
Out of curiosity: Hyper-V VM users don't seem to be having this issue, so why are you using VBox? WSL2 also utilises Hyper-V AFAIK, so I assume you'd have no issues on it. |
I need to learn the nitty-gritty myself, but from what I have understood, Microsoft's Hyper-V is a "Type 1" hypervisor that has full control of the underlying hardware. Guest OSes of other hypervisors like VBox don't get as much access to the underlying hardware, and this could be the reason advanced CPU instruction sets like AVX2 are unavailable to them. That doesn't seem to be the case for direct guest OSes of Hyper-V itself. Before installing VBox, I had ensured that I disabled Hyper-V completely through "Turn Windows features on or off", yet it looks like Hyper-V is/was still running "surreptitiously" - the turtle icon at the bottom-right of the VBox window indicated that. I abandoned my fight to get rid of that turtle icon and migrated to WSL2 instead. Btw, I had to use VBox due to my professional work. |
Alright, then you can try this: https://stackoverflow.com/questions/65780506/how-to-enable-avx-avx2-in-virtualbox-6-1-16-with-ubuntu-20-04-64bit or disable the Windows antivirus completely; as far as I know, that's what causes the instruction blockage. |