
Termux? #721

Closed

GameOverFlowChart opened this issue Oct 6, 2023 · 24 comments
@GameOverFlowChart

Can this run in Termux, and if so, could we get instructions for installing and running it there?

@platinaCoder

I tried to install it manually, but you need a rooted phone. Without root it's not possible with the normal installation. I'll keep trying and report back.

@ManzoniGiuseppe

I don't have a rooted phone and proot doesn't work for me, so I tried the following. I downloaded the Debian filesystem from the proot-distro GitHub releases:

curl -L https://github.com/termux/proot-distro/releases/download/v3.12.1/debian-aarch64-pd-v3.12.1.tar.xz -o debian-aarch64-pd-v3.12.1.tar.xz

and extracted it into ~/debian. Then I downloaded the ollama executable by following the manual install instructions:

curl -L https://ollama.ai/download/ollama-linux-arm64 -o ../usr/bin/ollama
chmod +x ../usr/bin/ollama

The problem is that the dynamic linker and the shared libraries the ollama binary expects actually live in a subdirectory (where the Debian filesystem was extracted), so I used patchelf to point it at their real location:

patchelf --set-interpreter /data/data/com.termux/files/home/debian/lib/ld-linux-aarch64.so.1 ../usr/bin/ollama
patchelf --set-rpath /data/data/com.termux/files/home/debian/lib/aarch64-linux-gnu/ ../usr/bin/ollama

With this I get ollama to start, but it immediately dies with a segmentation fault, even with the --help argument. I don't know whether I need to change something else or whether ollama simply crashes when the environment differs from what it expects (which would be a bug), but I'm posting this in case anyone knows more or finds it useful.
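
In case anyone wants to dig further, it might help to double-check what patchelf actually wrote and to launch the binary through the Debian loader directly instead of relying on the patched interpreter. A rough sketch, using the same paths as above (it may or may not get past the segfault):

patchelf --print-interpreter ../usr/bin/ollama
patchelf --print-rpath ../usr/bin/ollama
/data/data/com.termux/files/home/debian/lib/ld-linux-aarch64.so.1 --library-path /data/data/com.termux/files/home/debian/lib/aarch64-linux-gnu ../usr/bin/ollama --help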

@platinaCoder

Thanks for the info, interesting. I tried a lot to get it to work, but even building from source with Go seemed to fail because of a /bin/*/usr/bin/something error.

The easiest way to run a model on a smartphone is to build llama.cpp. It works fine for me and gives me a lot of freedom with extensions etc. The downside is losing the easy model deployment that ollama does very well. I might return to this and try to get it working... I even tried changing the install script, without success.
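
For reference, a minimal llama.cpp build under Termux looks roughly like this (the model path and prompt are placeholders, and llama.cpp's build targets may have changed since this was written):

pkg install git clang make
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
./main -m ~/models/model.gguf -p "Hello" -n 128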

@romanovj

romanovj commented Oct 21, 2023

No problems on an SDM662 with 4 GB RAM, Android 13, native Termux.
[screenshot: Screenshot_20231021-233659_Termux]

git clone --depth 1 https://github.com/jmorganca/ollama
cd ollama
go generate ./...
go build .
./ollama serve &
./ollama run orca-mini
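
(This assumes the toolchain is already installed; on a fresh Termux, something like the following should cover it, though the exact package list is my guess:)

pkg install git golang cmake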

@platinaCoder

No problems on an SDM662 with 4 GB RAM, Android 13, native Termux.
[screenshot: Screenshot_20231021-233659_Termux]

git clone --depth 1 https://github.com/jmorganca/ollama
cd ollama
go generate ./...
go build .
./ollama serve &
./ollama run orca-mini

This worked! Awesome! I totally forgot to do 'go generate'. Runs fine now.

@mxyng
Contributor

mxyng commented Oct 25, 2023

There are currently no official plans to build an Android release, so I'm going to close this for now. Anyone interested can follow the steps described above to build from source.

@mxyng closed this as not planned on Oct 25, 2023
@codrutpopescu

I get an error here:

~/ollama $ go build .
# github.com/jmorganca/ollama/llm
cgo-gcc-prolog:153:33: warning: unused variable '_cgo_a' [-Wunused-variable]
cgo-gcc-prolog:165:33: warning: unused variable '_cgo_a' [-Wunused-variable]
# github.com/jmorganca/ollama/llm
dynamic_shim.c:62:15: error: use of undeclared identifier 'RTLD_DEEPBIND'
dynamic_shim.c:8:54: note: expanded from macro 'LOAD_LIBRARY'

@lainedfles
Contributor

@codrutpopescu I was actually just about to share this patch:

diff --git a/llm/dynamic_shim.c b/llm/dynamic_shim.c
index 8b5d67c..2660eb9 100644
--- a/llm/dynamic_shim.c
+++ b/llm/dynamic_shim.c
@@ -5,7 +5,11 @@
 
 #ifdef __linux__
 #include <dlfcn.h>
+#ifdef __TERMUX__
+#define LOAD_LIBRARY(lib, flags) dlopen(lib, flags | RTLD_LAZY)
+#else
 #define LOAD_LIBRARY(lib, flags) dlopen(lib, flags | RTLD_DEEPBIND)
+#endif
 #define LOAD_SYMBOL(handle, sym) dlsym(handle, sym)
 #define LOAD_ERR() dlerror()
 #define UNLOAD_LIBRARY(handle) dlclose(handle)

This allows it to compile under Termux but may break the GPU-accelerated modules (perhaps only ROCm?). In my case this is sufficient, since ollama doesn't presently support Vulkan or OpenCL directly anyway. CPU-only performance is pretty good on my Pixel 7 Pro with small models like tinyllama.
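
The patch relies on the compiler defining __TERMUX__. If you're not sure whether your toolchain sets it, a quick check (assuming clang is the compiler in use) is:

echo | clang -dM -E - | grep -i termux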

@codrutpopescu

I have managed to compile it using

git clone -b v0.1.16 --depth 1 https://github.com/jmorganca/ollama

I received some warnings, but at least it finished building.
Let me know when the patch is applied on GitHub and I will try to rebuild.
I am using a Galaxy Tab S9 Ultra, which has 16 GB of memory and a Snapdragon 8 Gen 2.

@lainedfles
Contributor

Since there's no official support for Android (or Termux) planned, I didn't bother to submit a pull request, but I can if the maintainers are open to it. Until then you'll have to apply the patch yourself or build the outdated version(s).

@codrutpopescu

Sorry, how do I apply this patch you created?

@lainedfles
Contributor

  1. Clone a fresh origin/main branch and navigate to the new directory: git clone --depth 1 https://github.com/jmorganca/ollama && cd ollama
  2. Create a new file named ollama_termux_dynamic_shim.patch.txt & paste the contents from my prior comment OR download ollama_termux_dynamic_shim.patch.txt: wget https://github.com/jmorganca/ollama/files/13847788/ollama_termux_dynamic_shim.patch.txt
  3. Apply the patch: patch -p1 <ollama_termux_dynamic_shim.patch.txt
  4. Continue the build as described in the previous comments in this ticket.

@codrutpopescu

codrutpopescu commented Jan 6, 2024

Worked like a charm. Thank you very much!!!
Here are the steps for anyone who is interested:

pkg install golang

git clone --depth 1 https://github.com/jmorganca/ollama

cd ollama

curl -LJO https://github.com/jmorganca/ollama/files/13847788/ollama_termux_dynamic_shim.patch.txt

patch -p1 < ollama_termux_dynamic_shim.patch.txt

go generate ./...

go build .

./ollama serve &

./ollama run mistral
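
Once the server is up you can also sanity-check it over HTTP before using the CLI; a minimal example against the default port, assuming the mistral model has already been pulled:

curl http://127.0.0.1:11434/api/generate -d '{"model": "mistral", "prompt": "Hello"}'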

@wviana

wviana commented Jan 16, 2024

Hi there. I think the dynamic_shim.c file has been moved. Here is the equivalent diff against the new file, which is what let me run go build .:

diff --git a/llm/dyn_ext_server.c b/llm/dyn_ext_server.c
index 111e4ab..10487e1 100644
--- a/llm/dyn_ext_server.c
+++ b/llm/dyn_ext_server.c
@@ -5,7 +5,11 @@
 
 #ifdef __linux__
 #include <dlfcn.h>
+#ifdef __TERMUX__
+#define LOAD_LIBRARY(lib, flags) dlopen(lib, flags | RTLD_LAZY)
+#else
 #define LOAD_LIBRARY(lib, flags) dlopen(lib, flags | RTLD_DEEPBIND)
+#endif
 #define LOAD_SYMBOL(handle, sym) dlsym(handle, sym)
 #define LOAD_ERR() strdup(dlerror())
 #define UNLOAD_LIBRARY(handle) dlclose(handle)

But I'm getting an error when running a model. Here is the error I got when trying orca-mini:

2024/01/16 22:46:59 [Recovery] 2024/01/16 - 22:46:59 panic recovered:
POST /api/chat HTTP/1.1
Host: 127.0.0.1:11434
Accept: application/x-ndjson
Accept-Encoding: gzip
Content-Length: 62
Content-Type: application/json
User-Agent: ollama/0.0.0 (arm64 android) Go/go1.21.6


runtime error: invalid memory address or nil pointer dereference
/data/data/com.termux/files/usr/lib/go/src/runtime/panic.go:261 (0x64957fc29b)
        panicmem: panic(memoryError)
/data/data/com.termux/files/usr/lib/go/src/runtime/signal_unix.go:861 (0x64957fc268)
        sigpanic: panicmem()
/data/data/com.termux/files/home/ollama/gpu/gpu.go:122 (0x6495b06718)
        GetGPUInfo: if gpuHandles.cuda != nil {
/data/data/com.termux/files/home/ollama/gpu/gpu.go:190 (0x6495b0762f)
        CheckVRAM: gpuInfo := GetGPUInfo()
/data/data/com.termux/files/home/ollama/llm/llm.go:47 (0x6495b0b027)
        New: vram, _ := gpu.CheckVRAM()
/data/data/com.termux/files/home/ollama/server/routes.go:84 (0x6495cb8daf)
        load: llmRunner, err := llm.New(workDir, model.ModelPath, model.AdapterPaths, model.ProjectorPaths, opts)
/data/data/com.termux/files/home/ollama/server/routes.go:1061 (0x6495cc2853)
        ChatHandler: if err := load(c, model, opts, sessionDuration); err != nil {
/data/data/com.termux/files/home/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 (0x6495cc15e7)
        (*Context).Next: c.handlers[c.index](c)
/data/data/com.termux/files/home/ollama/server/routes.go:880 (0x6495cc15cc)
        (*Server).GenerateRoutes.func1: c.Next()
/data/data/com.termux/files/home/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 (0x6495ca00df)
        (*Context).Next: c.handlers[c.index](c)
/data/data/com.termux/files/home/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/recovery.go:102 (0x6495ca00c4)
        CustomRecoveryWithWriter.func1: c.Next()
/data/data/com.termux/files/home/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 (0x6495c9f47f)
        (*Context).Next: c.handlers[c.index](c)
/data/data/com.termux/files/home/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/logger.go:240 (0x6495c9f448)
        LoggerWithConfig.func1: c.Next()
/data/data/com.termux/files/home/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/context.go:174 (0x6495c9e5b3)
        (*Context).Next: c.handlers[c.index](c)
/data/data/com.termux/files/home/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:620 (0x6495c9e2dc)
        (*Engine).handleHTTPRequest: c.Next()
/data/data/com.termux/files/home/go/pkg/mod/github.com/gin-gonic/gin@v1.9.1/gin.go:576 (0x6495c9deff)
        (*Engine).ServeHTTP: engine.handleHTTPRequest(c)
/data/data/com.termux/files/usr/lib/go/src/net/http/server.go:2938 (0x6495a3aaeb)
        serverHandler.ServeHTTP: handler.ServeHTTP(rw, req)
/data/data/com.termux/files/usr/lib/go/src/net/http/server.go:2009 (0x6495a36ee7)
        (*conn).serve: serverHandler{c.server}.ServeHTTP(w, w.req)
/data/data/com.termux/files/usr/lib/go/src/runtime/asm_arm64.s:1197 (0x6495819183)
        goexit: MOVD    R0, R0  // NOP

I will try to update to the latest version and test again.

Couldn't we submit this change as a pull request to the ollama project?

@lainedfles
Contributor

Actually, I've already created a pull request. Don't use the aforementioned patch; give #1999 a try instead.

@wviana

wviana commented Jan 16, 2024

@lainedfles thanks. I moved that line up and it's working fine. I was able to run mixtral on my phone, but it's so slow.

@codrutpopescu

How can I create a patch file from https://github.com/jmorganca/ollama/pull/1999/files?
Sorry, I am not an expert.

@lainedfles
Contributor

@codrutpopescu GitHub offers a very nice feature: you can append .patch to the end of a pull request URL. Try: wget https://github.com/jmorganca/ollama/pull/1999.patch

@codrutpopescu

Amazing! Thanks

@inguna87

@lainedfles Hi, do you happen to know what could cause this error? I have tried running orca-mini and vicuna, same error. Was libext_server.so built incorrectly? What I did was:

pkg install golang

git clone --depth 1 https://github.com/jmorganca/ollama

cd ollama

wget https://github.com/jmorganca/ollama/pull/1999.patch

patch -p1 < 1999.patch

go generate ./...

go build .

./ollama serve &

./ollama run orca-mini
[screenshot: Screenshot_20240118-224954]

Thank you

@lainedfles
Contributor

@inguna87 That looks like some kind of linker problem. There have recently been significant merges, including #1999. Maybe you cloned while the repo was still being updated. I'd recommend a fresh clone and verifying that Termux is up-to-date (pkg upgrade).

Since #1999 has been merged, patching is no longer required. I've just tested using the main branch; it builds and runs successfully for me. Here is my process:

  1. Clone without --depth 1 so that updates (and tag checkout) are easier (git pull): git clone https://github.com/jmorganca/ollama && cd ollama
  2. I like to observe how long operations require so I use time with the generate command: time go generate ./...
  3. Build: time go build .
  4. Screen (or tmux) makes it easy to background and re-attach: screen -S ollama ~/ollama/ollama serve
  5. Test (note that I've added my ollama directory to the shell PATH variable): ollama run orca-mini
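
Putting those steps together into one copy-pasteable session (the package list, PATH export, and screen session name are just my own choices):

pkg install git golang cmake screen
git clone https://github.com/jmorganca/ollama && cd ollama
time go generate ./...
time go build .
export PATH="$PATH:$HOME/ollama"
screen -S ollama ollama serve
# detach with Ctrl-a d, re-attach later with: screen -r ollama
ollama run orca-mini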

Good luck!

@codrutpopescu

codrutpopescu commented Jan 19, 2024

Questions for @lainedfles, since it seems you are the lead engineer for Termux support here, and we are grateful for that.

  1. When compiling we get these warnings:
    warning: implicit conversion increases floating-point precision: 'float32_t' (aka 'float') to 'ggml_float' (aka 'double') [-Wdouble-promotion]
    These can be safely ignored, right? I've always wondered, when blindly compiling code, what the effect of these warnings is.
  2. CPUs are quite powerful nowadays; for example, I am running this on a Tab S9 Ultra, which has a Snapdragon 8 Gen 2 CPU and 16 GB of memory. These CPUs seem to have some built-in AI features, and the Gen 3 has even more. Is there any chance that someday we will have some hardware acceleration?
    Thank you for your support and everything!

@inguna87

@inguna87 That looks like some kind of linker problem. There have recently been significant merges, including #1999. Maybe you cloned while the repo was still being updated. I'd recommend a fresh clone and verifying that Termux is up-to-date (pkg upgrade).

Since #1999 has been merged, patching is no longer required. I've just tested using the main branch; it builds and runs successfully for me. Here is my process:

  1. Clone without --depth 1 so that updates (and tag checkout) are easier (git pull): git clone https://github.com/jmorganca/ollama && cd ollama
  2. I like to observe how long operations require so I use time with the generate command: time go generate ./...
  3. Build: time go build .
  4. Screen (or tmux) makes it easy to background and re-attach: screen -S ollama ~/ollama/ollama serve
  5. Test (note that I've added my ollama directory to the shell PATH variable): ollama run orca-mini

Good luck!

Thank you! It worked.
I cloned without "--depth 1" this time, and because your patch was merged, it succeeded.
I appreciate your help

@lainedfles
Contributor

Questions for @lainedfles, since it seems you are the lead engineer for Termux support here, and we are grateful for that.

1. When compiling we get these warnings:
   `warning: implicit conversion increases floating-point precision: 'float32_t' (aka 'float') to 'ggml_float' (aka 'double') [-Wdouble-promotion]`
   These can be safely ignored, right? I've always wondered, when blindly compiling code, what the effect of these warnings is.

See my comment in the pull request

CPU-only inference works on my Pixel 7 Pro (aarch64) using up-to-date Termux (F-Droid) running on top of GrapheneOS (based on AOSP Android 14). I haven't attempted to build with the NDK directly, but Termux doesn't provide a native GCC compiler (nor do modern NDKs); it uses Clang with GCC compatibility mode. This produces warnings like "implicit conversion increases floating-point precision", which I suspect affects the newer model quantization formats.

So far I've had decent success (albeit slow) with the legacy q4 and q5 formats, but not so much with K_S and K_M. It would be nice if working Vulkan and/or TPU support could eventually be added. Otherwise, without RTLD_DEEPBIND, the build succeeds and the dynamic CPU module loads successfully.

2. CPUs are quite powerful nowadays; for example, I am running this on a Tab S9 Ultra, which has a Snapdragon 8 Gen 2 CPU and 16 GB of memory. These CPUs seem to have some built-in AI features, and the Gen 3 has even more. Is there any chance that someday we will have some hardware acceleration?
   Thank you for your support and everything!

I'd suggest there's a good chance we'll eventually see acceleration for mobile NPUs and TPUs. However, on mobile devices these are often very limited in core count and memory, and intended for less demanding operations like "AI image filtering" for cameras, not for LLM inference. My bet is that Vulkan support is the most realistic route to acceleration.

I should set expectations appropriately: I just enjoy tinkering and find contributing to open-source software fulfilling. I'm not an expert; the true engineers built and maintain this project. That being said, I will try to help where I can in the future.

And now I'll share a bit more about my setup: I'm quite happy with the current state of chatbot-ollama as it functions under Termux. I fire up both it and Ollama in screen and use my browser (Firefox or Vanadium) to interact with the Ollama API. Fun!
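
For anyone curious, the setup is roughly the following (a sketch from memory; the repository URL, Node.js package name, and default port 3000 are assumptions, so adjust as needed):

pkg install nodejs-lts git
git clone https://github.com/ivanfioravanti/chatbot-ollama
cd chatbot-ollama
npm install
# run ollama and the web UI in separate detached screen sessions
screen -dmS ollama ollama serve
screen -dmS webui npm run dev
# then browse to http://localhost:3000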
