Termux? #721
Comments
I tried to install it manually, but you need a rooted phone. Without root it's not possible with the normal installation. I'll keep trying and report back. |
I do not have a rooted phone and proot doesn't work, so I tried the following. I downloaded the Debian fs from the proot-distro GitHub
and unzipped it in
The problem is that the dynamic linker and the shared lib indicated by ollama are actually in a subdir (where the Debian fs is), so I used patchelf to fix their location.
With this I get: |
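For reference, a rough sketch of the patchelf step described above. The paths are illustrative placeholders only; the real interpreter and library directory depend on where the Debian fs was unpacked:
# <debian-fs> stands for the directory where the Debian fs was unzipped (placeholder)
patchelf --set-interpreter <debian-fs>/lib/ld-linux-aarch64.so.1 ./ollama
patchelf --set-rpath <debian-fs>/lib/aarch64-linux-gnu ./ollama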
Thanks for the info, interesting. I tried a lot to get it to work, but even building from scratch with Go seemed to go wrong because of a /bin/*/usr/bin/something error. The easiest way to run a model on a smartphone is to build llama.cpp. It works fine for me and gives me a lot of freedom with extensions etc. The downside is the ease of deploying a model, which ollama does very well. I might return to this and try to get it to work... I even tried to change the install script, without success. |
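A minimal sketch of that llama.cpp route under Termux, assuming the make-based build the project offered at the time (package names are the stock Termux ones):
pkg install clang make git    # build tools, assuming a default Termux setup
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make -j    # CPU-only build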
There are currently no official plans to build an Android release, so I'm going to close this for now. Anyone interested can follow the steps described above to build from source. |
I get an error here:
|
@codrutpopescu I was actually just about to share this patch:
This allows it to compile under Termux but may break the GPU-accelerated modules (perhaps only ROCm?). In my case this is sufficient, since ollama doesn't presently support Vulkan or OpenCL directly anyway. CPU-only performance is pretty good on my Pixel 7 Pro with small models like tinyllama. |
I have managed to compile it using
git clone -b v0.1.16 --depth 1 https://github.com/jmorganca/ollama
I received some warnings, but at least it finished building. |
Since there's no official support for Android (or Termux) planned, I didn't bother to submit a pull request but I can if the maintainers are open to it. Until then you'll have to apply the patch yourself or build the outdated version(s). |
Sorry, how do I apply this patch you created? |
|
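A common way to apply a unified diff like the one above, assuming it is saved as termux.patch (an illustrative filename) in the root of the ollama source tree:
# run from the top of the ollama checkout; the filename is just an example
patch -p1 < termux.patch
# or, equivalently, with git:
git apply termux.patch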
Worked like a charm. Thank you very much!!!
|
Hi there. I think this is the diff I applied so I could run it:

diff --git a/llm/dyn_ext_server.c b/llm/dyn_ext_server.c
index 111e4ab..10487e1 100644
--- a/llm/dyn_ext_server.c
+++ b/llm/dyn_ext_server.c
@@ -5,7 +5,11 @@
 #ifdef __linux__
 #include <dlfcn.h>
+#ifdef __TERMUX__
+#define LOAD_LIBRARY(lib, flags) dlopen(lib, flags | RTLD_LAZY)
+#else
 #define LOAD_LIBRARY(lib, flags) dlopen(lib, flags | RTLD_DEEPBIND)
+#endif
 #define LOAD_SYMBOL(handle, sym) dlsym(handle, sym)
 #define LOAD_ERR() strdup(dlerror())
 #define UNLOAD_LIBRARY(handle) dlclose(handle)

But I'm getting an error when running a model. Here is the error I got trying orca-mini:
I will try to update with the latest version and test again. Couldn't we open a pull request for this change to the ollama project? |
Actually, I've already created a pull request. Don't use the aforementioned patch; give #1999 a try. |
@lainedfles thanks. I moved that line up and it's working fine. I was able to run mixtral on my phone, but it's so slow. |
How can I create a diff/patch file from https://github.com/jmorganca/ollama/pull/1999/files? |
@codrutpopescu GitHub offers a very nice feature. You can append .patch to the end of pull request URLs. Try: |
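For example, this matches the .patch URL used later in this thread:
wget https://github.com/jmorganca/ollama/pull/1999.patch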
Amazing! Thanks |
@lainedfles Hi, do you happen to know what could cause this error? I have tried running orca-mini and vicuna, same error. Was libext_server.so built incorrectly? What I did was:
pkg install golang
git clone --depth 1 https://github.com/jmorganca/ollama
cd ollama
wget https://github.com/jmorganca/ollama/pull/1999.patch
patch -p1 < 1999.patch
go generate ./...
go build .
./ollama serve &
Thank you |
@inguna87 That looks like some kind of linker problem. There have recently been significant merges, including #1999. Maybe you cloned while the repo was still being updated. I'd recommend a fresh clone and checking that Termux is up-to-date. Since #1999 has been merged, patching is no longer required. I've just tested using the main branch; it builds and runs successfully for me. Here is my process:
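A minimal sketch of such a build, following the same steps used earlier in the thread but without the patch:
pkg install golang    # git is also required if not already installed
git clone --depth 1 https://github.com/jmorganca/ollama
cd ollama
go generate ./...    # builds the native llama.cpp pieces
go build .
./ollama serve &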
Good luck! |
Questions to @lainedfles since it seems you are the Lead Engineer for Termux and we are grateful for that.
|
Thank you! It worked. |
See my comment in the pull request
I'd suggest that there's a good chance we'll eventually see acceleration for mobile NPUs & TPUs. However, on mobile devices these are often very limited in core count and memory, and intended for less demanding operations like "AI image filtering" for cameras, not for LLM inference. My bet is that Vulkan support is the most realistic acceleration.

I should set expectations appropriately: I just enjoy tinkering and find contributing to open-source software fulfilling. I'm not an expert; the true engineers built & maintain this project. That being said, if possible I will make an attempt to help in the future.

And now I'll share a bit more about my setup. I'm quite happy with the current state of chatbot-ollama as it functions under Termux. I fire this and Ollama up in screen and use my browser (Firefox or Vanadium) to interact with the Ollama API. Fun! |
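A minimal sketch of that kind of setup, assuming ollama is already built and on the PATH (session names are illustrative):
# run the Ollama API server in a detached screen session
screen -dmS ollama ollama serve
# chatbot-ollama (or any other frontend) can be started in a second session the same way,
# then reached from the phone's browser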
Can this run in Termux, and if yes can we get instructions to install and run it in Termux?