
exception: integer divide by zero while using gui #27

Closed
hippalectryon-0 opened this issue May 14, 2023 · 20 comments
Comments

@hippalectryon-0
Contributor

hippalectryon-0 commented May 14, 2023

Full log here

Context: used the gui, first prompt went through fine, second prompt gave this error:

\CASALIOY\venv\Lib\site-packages\llama_cpp\llama_cpp.py", line 335, in llama_eval
    return _lib.llama_eval(ctx, tokens, n_tokens, n_past, n_threads)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: exception: integer divide by zero
@alxspiker
Contributor

Checking this out right now

@alxspiker
Contributor

alxspiker commented May 14, 2023

> Full log here
>
> Context: used the gui, first prompt went through fine, second prompt gave this error:
>
> \CASALIOY\venv\Lib\site-packages\llama_cpp\llama_cpp.py", line 335, in llama_eval
>     return _lib.llama_eval(ctx, tokens, n_tokens, n_past, n_threads)
>            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> OSError: exception: integer divide by zero

Was this the prompt that broke it? What documents are in the vector store?

@alxspiker
Contributor

I was able to ask 3 questions, then got GGML_ASSERT: C:\Users\Haley The Retard\AppData\Local\Temp\pip-install-m9k6bx9s\llama-cpp-python_e02ecdc8e7e1464e99540ce48153ff94\vendor\llama.cpp\ggml.c:5758: ggml_can_mul_mat(a, b) followed by an exit.

@alxspiker
Contributor

Ah, got your issue:

2023-05-14 04:52:09.415 Uncaught app exception
Traceback (most recent call last):
  File "C:\Users\Haley The Retard\AppData\Local\Programs\Python\Python310\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\Haley The Retard\Documents\GitHub\CASALIOY\gui.py", line 90, in <module>
    st.form_submit_button('SUBMIT', on_click=generate_response(st.session_state.input), disabled=st.session_state.running)
  File "C:\Users\Haley The Retard\Documents\GitHub\CASALIOY\gui.py", line 82, in generate_response
  File "C:\Users\Haley The Retard\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\llama.py", line 564, in embed
    return list(map(float, self.create_embedding(input)["data"][0]["embedding"]))
  File "C:\Users\Haley The Retard\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\llama.py", line 530, in create_embedding
    self.eval(tokens)
  File "C:\Users\Haley The Retard\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\llama.py", line 243, in eval
    return_code = llama_cpp.llama_eval(
  File "C:\Users\Haley The Retard\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\llama_cpp.py", line 335, in llama_eval
    return _lib.llama_eval(ctx, tokens, n_tokens, n_past, n_threads)
OSError: exception: integer divide by zero

@su77ungr
Owner

Try setting n_batch to 0.5*n_ctx.

Also, we should include the mlock force-into-RAM feature. It runs stable on my side, which is very weird.
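
Something like this, for concreteness (a sketch with llama-cpp-python; the model path and thread count here are placeholders, not a confirmed config):

    from llama_cpp import Llama

    n_ctx = 1024
    llm = Llama(
        model_path="models/ggml-model-q4_0.bin",  # placeholder path
        n_ctx=n_ctx,
        n_batch=n_ctx // 2,  # the suggested 0.5*n_ctx
        use_mlock=True,      # force the model weights to stay in RAM
        n_threads=6,
    )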

@hippalectryon-0
Contributor Author

> Was this the prompt that broke it? What documents are in the vector store?

It was simply "Tell me about shor's algorithm"

@alxspiker
Contributor

Might have a fix for the GUI; I should have initialized the qa_system the same way we do in startLLM.
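
Roughly this (a sketch; load_qa_system is a hypothetical stand-in for whatever startLLM actually builds):

    import streamlit as st

    # Build the qa_system once and keep it in session state,
    # instead of re-creating it on every Streamlit rerun.
    if "qa_system" not in st.session_state:
        st.session_state.qa_system = load_qa_system()  # hypothetical helper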

@su77ungr
Owner

Did the same test with Shor. It works fine when using n_batch, for some reason.

@alxspiker
Contributor

> Did the same test with Shor. It works fine when using n_batch, for some reason.

Curious, what's your method behind 0.5*n_ctx?

@alxspiker
Contributor

alxspiker commented May 14, 2023

The Streamlit session states are weird to me... But I think I got it figured out.

@su77ungr
Owner

su77ungr commented May 14, 2023

I was running into error blocks every single time without the additional arguments. We should expose this as an on-the-fly setting within the GUI. Here's my benchmark:

running n_threads=6, use_mlock=True, n_batch=512, n_ctx=1024

llama_print_timings: load time = 46122.88 ms
llama_print_timings: sample time = 45.41 ms / 104 runs ( 0.44 ms per run)
llama_print_timings: prompt eval time = 62383.76 ms / 838 tokens ( 74.44 ms per token)
llama_print_timings: eval time = 24447.54 ms / 103 runs ( 237.35 ms per run)
llama_print_timings: total time = 92914.03 ms

Question:
summarize this paper for me

Answer:
The author explains the concept of Shor's algorithm and its significance in cryptography. Shor's algorithm is a quantum computer algorithm that can factor large numbers much faster than classical algorithms. It has powerful motivator for quantum computers, but it is not practical yet as it is not possible to design quantum computers that are large enough to factor big numbers.
Paper: https://arxiv.org/abs/1806.03795
Answer in 中文

source_documents/shor.pdf:
mathematics (2004): 781-793.

@alxspiker
Contributor

I'll add that change into this version of the GUI.

@su77ungr
Owner

Also, are you running vic7 as the embeddings model?

@alxspiker
Contributor

No, I am using a single alpaca7b model to do both the embeddings and the LLM. I've had zero issues.

@alxspiker
Contributor

#29 should fix this!

@alxspiker
Contributor

@hippalectryon-0 Could you humour me and try it with the latest update? I haven't gotten this error in a while.

@alxspiker
Contributor

alxspiker commented May 14, 2023

Problem Identified

The issue is caused by the GUI calling the qa_system while it is already running.

Solution In Progress

Block every path by which the GUI can call the qa_system again while it is already running.
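
A sketch of the kind of guard I mean, assuming a running flag and the qa_system are kept in st.session_state (note that on_click expects the callback itself, not the result of calling it):

    import streamlit as st

    if "running" not in st.session_state:
        st.session_state.running = False

    def generate_response() -> None:
        if st.session_state.running:  # re-entry guard: bail out if already running
            return
        st.session_state.running = True
        try:
            # hypothetical call into the QA chain stored in session state
            st.session_state.answer = st.session_state.qa_system(st.session_state.input)
        finally:
            st.session_state.running = False

    with st.form("qa"):
        st.text_input("Query", key="input")
        # pass the function itself, not generate_response(...)
        st.form_submit_button("SUBMIT", on_click=generate_response,
                              disabled=st.session_state.running)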

@hippalectryon-0
Contributor Author

> @hippalectryon-0 Could you humour me and try it with the latest update? I haven't gotten this error in a while.

As soon as I get around #30 ^^

@hippalectryon-0
Contributor Author

hippalectryon-0 commented May 14, 2023

Seems fine so far :)

I opened #40 in the meantime to fix the install

@hippalectryon-0
Contributor Author

Never got this again while refactoring the GUI in #58, closing.
