Failed to fulfill prompt error #25
Comments
Thanks for the report. Do you see anything in the logs?
That would make a lot of sense. I tried it again, and after the error, Ollama starts and is available at localhost:11434.
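For anyone debugging this, a quick way to confirm the server came up is to probe the default address. This is a hedged Node sketch, not chatd's actual code; the helper name and host constant are illustrative:

```javascript
// Illustrative helper (not from chatd): probe Ollama's default address.
// The root path of a running Ollama server replies with "Ollama is running".
const OLLAMA_HOST = "http://localhost:11434";

async function ollamaIsRunning(host = OLLAMA_HOST) {
  try {
    const res = await fetch(host); // Node 18+ global fetch
    return res.ok;
  } catch {
    return false; // connection refused: the server isn't up (yet)
  }
}
```

If this returns false immediately after launch but true a moment later, the app may simply be racing the server startup, which would match the behaviour described above.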
Same here, I'm getting the same error. Ollama is also running, but I can't find the service.log file. Running on Windows.
@BruceMacD Don't think …
Same for me.
Hey everyone, if you're experiencing this, please update to the latest chatd version. I believe this issue should be fixed; please let me know if that is not the case.
@BruceMacD Hey, I downloaded the latest release and that works. However, I'm facing other issues: I had to put the app dir in the root of the package altogether for it to run. After this, even generic questions/prompts fail; before loading a file, generic questions/prompts do get executed.
Thanks for the report @scsmash3r; this one has been elusive for me. Are you using Ollama or just chatd?
I am running this package: https://github.com/BruceMacD/chatd/releases/download/v1.1.0/chatd-windows-x64.zip. Last time I was trying to debug what was going on, I spotted a running process in my Task Manager called …
Thanks, I was finally able to reproduce this. It seems like a problem with how I was building the Windows version. I just uploaded a new pre-release that may fix the issue; if anyone gets a chance to try it, let me know.
Now it seems to be working without that issue, but when I exit …
AFAIK, you're describing Ollama's designed behaviour. The Ollama service runs constantly in the background, and the loaded model persists in RAM for 5 minutes. Both processes are spawned from the same ollama.exe file.
This is weird then, because the process stays in memory for longer than 5 minutes. @BruceMacD In any case, I was unable to reproduce the error mentioned in the first message of this ticket, so it can probably be closed.
The service stays running but it unloads the model from RAM after 5 minutes. You should be able to monitor the memory usage drop as it unloads the model. |
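The five-minute window described above corresponds to Ollama's `keep_alive` setting, which individual API requests can override. A hedged sketch of building such a request body (the function name is illustrative, not part of chatd):

```javascript
// Illustrative (not chatd code): Ollama's /api/generate accepts a
// keep_alive field controlling how long the model stays loaded after the
// call. "5m" is the documented default; "0" unloads immediately, and a
// negative value keeps the model loaded indefinitely.
function buildGenerateRequest(model, prompt, keepAlive = "5m") {
  return { model, prompt, keep_alive: keepAlive };
}

// Example: ask Ollama to unload the model as soon as the response is done.
const body = buildGenerateRequest("mistral", "Hello!", "0");
```

Monitoring memory usage while varying `keep_alive` is a simple way to confirm the unload behaviour the comment above describes.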
Ah, good catch. You're both correct: Ollama normally stays running in the background, but in the case of chatd I try to clean up the process if chatd started it, so this is a Windows bug in chatd's case. Opening a new issue for that one.
I am getting an Error: Failed to fulfill prompt message when opening the app. It's an immediate error, which makes it seem that the app isn't initializing.
What causes this error, so that I may provide more context?