Confusing error on linux with noexec on /tmp - Error: llama runner process no longer running: 1 #4105
Comments
Can you share your server log? https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md
Thanks for your response, @dhiltgen! Here are the logs from journalctl -u ollama:
Hmm... if that turns out to be the problem, a workaround is documented here: https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#linux-tmp-noexec
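(Not part of the original thread.) A quick way to confirm the noexec hypothesis before applying the workaround; findmnt and the option name are standard, but the mount point and output are illustrative:

```shell
# Print /tmp's mount options, one per line, and look for the noexec flag.
# If "noexec" is printed, binaries Ollama extracts under /tmp cannot be
# executed, which matches "llama runner process no longer running".
findmnt -no OPTIONS /tmp | tr ',' '\n' | grep -x noexec
```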
Thanks @dhiltgen! That does seem to be the problem. This is what I see: /tmp/ollama2753723410/runners/cuda_v11/ollama_llama_server --help
How do I find out which locations the ollama user is allowed to write to? I tried a couple of random locations but got the same error. I'm sorry, but I don't think I understand what the workaround is doing.
Did you try the suggested location in the troubleshooting doc? What this setting does is change where we write out temporary files, including the subprocess executables we run.
Thanks @dhiltgen! I did. Just to clarify, I need to set that as an environment variable using the command export OLLAMA_TMPDIR=/usr/share/ollama/, right?
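(Not part of the original thread.) Note that on a systemd-managed install, an `export` in an interactive shell does not reach the service process. A sketch of the systemd-override approach the troubleshooting doc describes, assuming the directory tried in this thread; any directory writable by the ollama user on a filesystem mounted without noexec should work:

```shell
# Sketch: set OLLAMA_TMPDIR for the ollama service via a systemd drop-in
# (the directory below is the one tried in this thread; adjust to taste).
sudo mkdir -p /etc/systemd/system/ollama.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/ollama.service.d/tmpdir.conf
[Service]
Environment="OLLAMA_TMPDIR=/usr/share/ollama/"
EOF
# Reload unit files and restart so the service picks up the new environment.
sudo systemctl daemon-reload
sudo systemctl restart ollama
```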
@dhiltgen , do you have any other suggestions? Is there a way to uninstall everything and try again from scratch? |
@dhiltgen, I just noticed this in the server logs:
May 03 14:53:47 anurag-Legion-T5-26IRB8 ollama[771241]: /tmp/ollama1518381580/runners/cuda_v11/ollama_llama_server: /usr/local/cuda/lib64/libcublas.so.11: version `libcublas.so.11' not found (required by /tmp/ollama1518381580/runners/cuda_v11/ollama_llama_server)
Does this help? I checked, and this file exists:
You should set
If you want to uninstall/re-install, see https://github.com/ollama/ollama/blob/main/docs/linux.md#uninstall
For the missing CUDA library, the system should handle this automatically; however, you might want to try updating the LD_LIBRARY_PATH for the server to include your CUDA library directory from the screenshot, to see if that helps.
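(Not part of the original thread.) One way to apply that LD_LIBRARY_PATH suggestion on a systemd install is a drop-in override; the CUDA path below is an assumption taken from the log line above, not something the maintainer specified:

```shell
# Sketch: expose the CUDA libraries to the ollama service via a drop-in.
# /usr/local/cuda/lib64 is assumed from the libcublas path in the log above.
sudo mkdir -p /etc/systemd/system/ollama.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/ollama.service.d/cuda.conf
[Service]
Environment="LD_LIBRARY_PATH=/usr/local/cuda/lib64"
EOF
# Reload unit files and restart so the service picks up the new environment.
sudo systemctl daemon-reload
sudo systemctl restart ollama
```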
Thanks @dhiltgen !
This is how the file looks after the changes:
Now I am getting a time-out error. Here are the server logs:
May 06 11:33:10 anurag-Legion-T5-26IRB8 systemd[1]: Started Ollama Service.
Can you please advise what might be wrong? Thanks for your assistance!
It looks like we may have a bug in wiring up the LD_LIBRARY_PATH properly when OLLAMA_TMPDIR is set. Investigating... |
@utility-aagrawal can you try 0.1.34? |
Thanks for your response, @dhiltgen! I was able to make it work. For me, the issue wasn't ollama-related. I have CUDA 12 on my machine, but I had both libcublas.so.11 and libcublas.so.12. I remember creating a symbolic link from 12 to 11 for some other program to run. As soon as I removed libcublas.so.11, ollama worked. I can confirm that it still works with the latest version, 0.1.34. Thanks again for your help!
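(Not part of the original thread.) If you suspect a similar hand-made symlink, you can inspect what each libcublas soname actually resolves to; the paths below are examples from this thread's setup:

```shell
# List the libcublas files and show what the .11 soname points at
# (paths are examples from this thread's setup).
ls -l /usr/local/cuda/lib64/libcublas.so.*
# readlink prints the symlink's target. A hand-made
# libcublas.so.11 -> libcublas.so.12 link satisfies the dynamic linker's
# filename lookup but fails the version-symbol check, producing
# "version `libcublas.so.11' not found".
readlink /usr/local/cuda/lib64/libcublas.so.11
```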
Closing this issue. Thanks! |
What is the issue?
I installed ollama on my Ubuntu 22.04 machine using the command: curl -fsSL https://ollama.com/install.sh | sh
I ran: ollama run llama3 and got this error:
Error: llama runner process no longer running: 1
Can someone help me resolve it?
OS
Linux
GPU
Nvidia
CPU
Intel
Ollama version
0.1.32