Describe the bug
I am hitting a strange bug when using the LLaVA 1.5 model from llamafile.
Everything runs smoothly until near the end of the response, when generation slows down dramatically and the model emits the word "nobody" over and over for no apparent reason.
Reproduce
- download the LLaVA 1.5 llamafile
- launch it to host the local server (you can follow the instructions here)
- run this script:
from interpreter import interpreter

interpreter.offline = True  # run fully locally
interpreter.llm.model = "openai/LLaMA_CPP"
interpreter.llm.api_key = "fake_key"  # llamafile ignores the key, but one must be set
interpreter.llm.api_base = "http://localhost:8080/v1"
interpreter.auto_run = True

interpreter.chat("use your python coding skills to know what my current operating system, write it in code block")
Expected behavior
It should either continue running normally or terminate, rather than repeating the same output many times.
Screenshots
No response
Open Interpreter version
0.4.3
Python version
3.11.2
Operating System name and version
Debian GNU/Linux 12 (bookworm)
Additional context
I also tried a workaround with the following script:
from interpreter import interpreter

interpreter.offline = False
interpreter.llm.model = "openai/LLaMA_CPP"
interpreter.llm.api_key = "fake_key"
interpreter.llm.api_base = "http://localhost:8080/v1"
interpreter.auto_run = True
interpreter.loop = True  # note: must be set before chat() to have any effect
interpreter.code_output_template = "Code output: {content}\nWhat does this output mean / what's next (if anything, or are we done)?"

interpreter.chat("use your python coding skills to know what my current operating system, write it in code block")
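To narrow down whether the repetition comes from the model itself or from Open Interpreter's prompting loop, one could POST a request directly to the llamafile server's OpenAI-compatible endpoint (http://localhost:8080/v1/chat/completions) and inspect the raw completion. A minimal sketch that builds such a request body; the `frequency_penalty` and `max_tokens` values here are assumptions meant to discourage and cap the repetition, not confirmed fixes:

```python
import json

def build_chat_payload(prompt: str,
                       model: str = "LLaMA_CPP",
                       frequency_penalty: float = 1.1,
                       max_tokens: int = 512) -> str:
    """Build an OpenAI-style chat completion request body.

    frequency_penalty (assumption): penalize tokens the model keeps
    re-emitting, which may suppress the "nobody" loop.
    max_tokens (assumption): hard cap so a runaway loop terminates.
    """
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "frequency_penalty": frequency_penalty,
        "max_tokens": max_tokens,
    })
```

The resulting string can then be sent with any HTTP client (e.g. `curl -d @- http://localhost:8080/v1/chat/completions`) to see whether the bare server reproduces the loop without Open Interpreter in the middle.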