
Failed to fulfill prompt error #25

Closed
betterbrand opened this issue Jan 10, 2024 · 16 comments
Labels: bug (Something isn't working)

@betterbrand

I am getting an "Error: Failed to fulfill prompt" message when opening the app. The error is immediate, which makes it seem that the app isn't initializing.

What causes this error, so that I can provide more context?

@BruceMacD BruceMacD added the bug Something isn't working label Jan 10, 2024
@BruceMacD (Owner)

Thanks for the report. Do you see anything in the logs at ~/.chatd/service.log? Also, do you run Ollama separately, or the one bundled with the app? I'm wondering if it's the new Ollama version.

@betterbrand (Author)

That would make a lot of sense. I tried it again, and after the error, Ollama starts and is available at localhost:11434.

service.log
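
(A quick way to confirm that state, for anyone else debugging this: Ollama's root endpoint replies with a plain status string. This is just a sanity check, not chatd code.)

// run as an .mjs script with Node 18+; prints "200 Ollama is running" when Ollama is up
const res = await fetch('http://localhost:11434');
console.log(res.status, await res.text());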

@arjunmenon

Same for me, getting the same error. Ollama is also running, but I can't find the service.log file. Running on Windows.

@arjunmenon

@BruceMacD I don't think Xenova/all-MiniLM-L6-v2 is available in Ollama. Is this the issue?
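
(For context, that model name belongs to transformers.js, which runs the embeddings locally in Node rather than through Ollama. A minimal sketch of how such a model is normally loaded, assuming the @xenova/transformers package; this is not necessarily what chatd does internally:)

import { pipeline } from '@xenova/transformers';

// downloads and runs the embedding model locally; no Ollama involved
const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');
const output = await extractor('some document text', { pooling: 'mean', normalize: true });
console.log(output.data.length); // 384 dimensions for this model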

@BruceMacD BruceMacD self-assigned this Feb 16, 2024
@leocalle-swag

Same for me.

@BruceMacD (Owner)

Hey everyone, if you're experiencing this, please update to the latest chatd version. I believe this issue should be fixed; please let me know if that is not the case.

@arjunmenon

@BruceMacD Hey, I downloaded the latest release, and that works. However, I'm facing other issues:

1. The file path is not correct (see the first sketch after this comment):

function processDocument(filePath, event) {
  // relative Worker paths are resolved against the process working
  // directory, not against the app bundle
  const worker = new Worker('./src/service/worker.js');

I had to put the app dir in the root of the package altogether for it to run.

2. The model does not run properly on custom files (see the second sketch after this comment):

worker received: C:\Users\iamjk\Downloads\exam.pdf
Warning: TT: undefined function: 32
TypeError: fetch failed
    at Object.fetch (node:internal/deps/undici/undici:14152:11) {
  cause: HeadersTimeoutError: Headers Timeout Error
      at Timeout.onParserTimeout [as _onTimeout] (node:internal/deps/undici/undici:9185:32)
      at listOnTimeout (node:internal/timers:571:11)
      at process.processTimers (node:internal/timers:512:7) {
    code: 'UND_ERR_HEADERS_TIMEOUT'
  }
}

After this, even generic questions/prompts fail. Before loading a file, generic questions/prompts do get executed.
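
(First sketch. One plausible fix for the worker path issue above, resolving the script relative to the source file rather than the working directory; this is a sketch, not chatd's actual code:)

const path = require('node:path');
const { Worker } = require('node:worker_threads');

// __dirname is the directory of this source file, so the resolved path
// no longer depends on where chatd was launched from
const worker = new Worker(path.join(__dirname, 'worker.js'));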
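
(Second sketch. The UND_ERR_HEADERS_TIMEOUT above is undici, the engine behind Node's built-in fetch, giving up before Ollama sends response headers, which can happen while a model is still loading. One hedged workaround is to raise the headers timeout with the undici package's Agent; the endpoint, model name, and 10-minute value here are illustrative:)

import { fetch, Agent } from 'undici';

// Ollama can take a while to answer while it loads a model into memory,
// so give the response headers more time to arrive
const res = await fetch('http://localhost:11434/api/generate', {
  method: 'POST',
  body: JSON.stringify({ model: 'mistral', prompt: 'hello', stream: false }),
  dispatcher: new Agent({ headersTimeout: 10 * 60 * 1000 }),
});
console.log((await res.json()).response);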

@BruceMacD BruceMacD reopened this Feb 26, 2024
@scsmash3r

Same issue on Win 11 23H2, unable to start the program.
There is only an app.log of 0 bytes in the .chatd folder.

@BruceMacD (Owner)

Thanks for the report @scsmash3r, this one has been elusive for me. Are you using Ollama or just chatd?

@scsmash3r

> Thanks for the report @scsmash3r, this one has been elusive for me. Are you using Ollama or just chatd?

I am running this package: https://github.com/BruceMacD/chatd/releases/download/v1.1.0/chatd-windows-x64.zip
There is an ollama.exe file in the ./resources/app/src/service/ollama/runners/ dir, and I think the "Failed to fulfill prompt" error is encountered when ollama.exe wasn't properly stopped as a process.

Last time I was trying to debug what was going on, I spotted a running process in my Task Manager called ollama.exe, so I manually killed it, and then chatd.exe was able to run without this issue. I think there are cases where you stop your chatd.exe process but background processes like ollama.exe are still up and running, so running chatd.exe again raises this error.
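
(If that diagnosis is right, chatd would need cleanup along these lines when it spawned the bundled runner itself. A hypothetical sketch; the spawn path and ollamaProcess are assumptions, not chatd's actual code:)

const { spawn } = require('node:child_process');
const { app } = require('electron');

// hypothetical: chatd starts its bundled Ollama runner at launch
const ollamaProcess = spawn('./resources/app/src/service/ollama/runners/ollama.exe', ['serve']);

app.on('before-quit', () => {
  if (process.platform === 'win32') {
    // plain .kill() can leave the process tree alive on Windows;
    // taskkill /t terminates children as well
    spawn('taskkill', ['/pid', String(ollamaProcess.pid), '/f', '/t']);
  } else {
    ollamaProcess.kill();
  }
});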

@BruceMacD (Owner)

Thanks, I was finally able to reproduce this. It seems like a problem with how I was building the Windows version. I just uploaded a new pre-release that may fix the issue:
https://github.com/BruceMacD/chatd/releases/tag/v1.1.1

If anyone gets a chance to try it, let me know.

@scsmash3r

> Thanks, I was finally able to reproduce this. It seems like a problem with how I was building the Windows version. I just uploaded a new pre-release that may fix the issue: https://github.com/BruceMacD/chatd/releases/tag/v1.1.1
>
> If anyone gets a chance to try it, let me know.

Now it seems to be working without that issue, but when I exit chatd.exe, the ollama.exe process stays loaded in memory.
[screenshot: Task Manager showing ollama.exe still running]

@fezzzza commented Mar 6, 2024

> Now it seems to be working without that issue, but when I exit chatd.exe, the ollama.exe process stays loaded in memory.

AFAIK, you're describing Ollama's designed behaviour. The Ollama service runs constantly in the background, and the ollama client persists in RAM for 5 minutes. Both processes are spawned from the same ollama.exe file.

@scsmash3r

> > Now it seems to be working without that issue, but when I exit chatd.exe, the ollama.exe process stays loaded in memory.
>
> AFAIK, you're describing Ollama's designed behaviour. The Ollama service runs constantly in the background, and the ollama client persists in RAM for 5 minutes. Both processes are spawned from the same ollama.exe file.

This is weird then, because the process stays in memory for longer than 5 minutes.
I'd expect the process to unload itself 5 minutes after the chatd.exe process was terminated, but that is not happening.
[screenshot: Task Manager showing ollama.exe still running after more than 5 minutes]

@BruceMacD In any case, I was unable to reproduce the error mentioned in the first message of this ticket, so it can probably be closed.

@fezzzza commented Mar 6, 2024

> This is weird then, because the process stays in memory for longer than 5 minutes.
> I'd expect the process to unload itself 5 minutes after the chatd.exe process was terminated, but that is not happening.

The service stays running but it unloads the model from RAM after 5 minutes. You should be able to monitor the memory usage drop as it unloads the model.
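
(For what it's worth, that five-minute window is controllable: Ollama's generate API accepts a keep_alive parameter, and a value of 0 unloads the model as soon as the request completes. A sketch against the local API; the model name is just an example:)

// ask Ollama to unload the model immediately after this response
const res = await fetch('http://localhost:11434/api/generate', {
  method: 'POST',
  body: JSON.stringify({ model: 'mistral', prompt: 'hello', stream: false, keep_alive: 0 }),
});
console.log((await res.json()).response);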

@BruceMacD (Owner)

Ah, good catch. You're both correct. Ollama normally stays running in the background, but in chatd's case I try to clean up the process if chatd started it, so this is a Windows bug in chatd. I'm opening a new issue for that one.
