Add binary support for Nvidia Jetson Orin - JetPack 6 #2408
Just echoing the issue above. I've attempted to run the Docker container for Ollama. Running Docker with the parameter given in the instructions does not work. Per the above user's comment, JetPack and CUDA are all available, but only CPU processing works with the container. I've also tried this Docker parameter invocation, and it doesn't work either: Thank you |
+1 |
This is by no means solved yet, but I'm now monitoring this issue, which you may want to follow too |
On my Jetson Xavier AGX with JetPack 5.1, Ollama worked fine on the GPU up to version 0.1.17. The installer always installs the current version (0.1.25). Something changed after 0.1.17, so the GPU is no longer seen by Ollama |
@telemetrieTP23 Look here #1979 |
Doesn't work on a Jetson Nano with JetPack 4.6, meaning the GPU is not used even after following the instructions provided at https://github.com/ollama/ollama/blob/main/docs/tutorials/nvidia-jetson.md |
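For anyone unsure whether the GPU is actually being used: on any Jetson you can watch tegrastats while a prompt runs. If GR3D_FREQ stays at 0% while the CPU cores spike, inference is CPU-only. A minimal check, assuming a model has already been pulled:

```sh
# Terminal 1: watch GPU utilization (the GR3D_FREQ field)
sudo tegrastats

# Terminal 2: run a prompt and see whether GR3D_FREQ rises
ollama run llama2 "Why is the sky blue?"
```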
Yes, these Jetson Nano devices with 4 GB of RAM are capable of running a pretty wide range of models, from BERT to 7B LLMs with quantization. It's pretty sad to see a single-board computer launched in 2019 be nearly useless for running language-related models. edit: after a few days of research, it looks like the Jetson Nano GPU's ancient architecture, and the ancient toolchain provided by Nvidia, make it nearly impossible to run language-related models on these boards. You basically can't even use PyTorch > 1.10 on them. |
This should now be fixed with the merge of #2279 |
Hey, thanks so much everyone! I just want to confirm: now that the merge is complete, if I update Ollama, the Jetson GPU should be supported? Would that be the same for the Docker image, or should I just run the installer to save headaches? Once again, thank you everyone! |
@MrDelusionAI I haven't done anything with containers yet. I'm still digging through dusty-nv's container resources to figure them out; I have been concentrating on getting the binary to work on bare metal. I don't think containers work yet, due to how quirky containers are on Jetson devices with GPU support. If you pull the repo and compile it, that binary should work on your Jetson. I think Jetson support will be in their next binary release (0.30?). Keep checking their releases if you don't want to self-compile. |
@MrDelusionAI If you want to build your own Ollama container to run as a service on a Jetson device, please see this. I tested it on my Jetson Orin Nano 8 GB running L4T r35.4.1. Please let me know if you have any issues. |
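As a rough illustration of the approach, here is a minimal sketch of such a container; the base image tag (r35.4.1) and the install-script route are assumptions, and the linked guide remains the authoritative reference:

```sh
# Hypothetical Dockerfile layering Ollama onto an L4T JetPack base image
cat > Dockerfile <<'EOF'
FROM nvcr.io/nvidia/l4t-jetpack:r35.4.1
RUN apt-get update && apt-get install -y curl \
 && curl -fsSL https://ollama.com/install.sh | sh
EXPOSE 11434
ENTRYPOINT ["ollama", "serve"]
EOF

docker build -t ollama-jetson .
# --runtime nvidia exposes the Jetson's integrated GPU inside the container
docker run -d --runtime nvidia -p 11434:11434 -v ollama:/root/.ollama --name ollama ollama-jetson
```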
Oh great, thanks. I will try both the binary once it's pushed into the main release and the container-as-a-service from your link. I'm running JetPack 6, so I will follow your guidance. Thanks for everyone's efforts! |
The pre-release for 0.1.30 is available now, and contains @remy415's change. I don't have a Jetson yet so I can't validate the build, but folks should give it a spin and let us know how it goes. |
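For anyone testing on a Jetson, a hedged sketch of grabbing the pre-release ARM64 binary directly (the asset name follows the usual release layout and is an assumption):

```sh
# Fetch the arm64 pre-release binary from the GitHub releases page
curl -L -o ollama \
  https://github.com/ollama/ollama/releases/download/v0.1.30-rc4/ollama-linux-arm64
chmod +x ollama
./ollama serve
```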
I copied the binary from the 0.1.30-rc4 container and it had some issues running. I did notice you pushing ARM changes, so I'll try again when the container is updated. I haven't tried pulling the binary directly; I will do so when I get home. I'm also replicating the ARM build workflow in the CentOS containers; I'll report back when I have an update. |
Awesome, it looks promising. I just tried the rc4 ARM binary this morning. It seems the GPU is detected, CUDA is bound, and offloading works. However, after running a model, the process gets stuck in a long-running loop (high CPU load). No prompt is served. After several minutes it crashes. |
@dhiltgen I copied the syntax from your workflow for ARM (CUDA CentOS container + commands). It compiled on my Jetson, found the GPU, and then crashed similarly to what was reported above. I have a hunch it may be related to how the two OSes compile the binary (CentOS vs Ubuntu 20.04). I turned on as much debugging as I could and compared a binary compiled with the CentOS container workflow against the binary I compiled directly on my Jetson:
I'll play around a bit with compilers and see if I can get the CentOS container to compile a binary that works on the Jetson. |
Another possibility is the CUDA version. We're trying to link against v11 to have broader support, but maybe only v12 works on these devices? |
I thought about that too, but from what I could tell: CUDA toolkits are "future compatible", meaning everything that works on v11 works on v12+, and CUDA drivers are "backwards compatible". I compile on my Jetson with v11.4, and that binary should work on systems with v12. I did notice Ubuntu 22.04 was used to compile the runtime binary; maybe it's a GCC -> nvcc thing. |
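For reference, the relevant versions on a Jetson can be checked like this (paths assume a default JetPack install; nvcc may not be on PATH by default):

```sh
# L4T / driver stack version shipped by JetPack
cat /etc/nv_tegra_release

# CUDA toolkit version used for compilation
/usr/local/cuda/bin/nvcc --version

# Which CUDA runtime libraries are actually installed
ls /usr/local/cuda*/lib64/libcudart.so*
```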
I don't know enough about gcc/C compilation to make heads or tails of this; do you see anything helpful here? Compiled natively:
Downloaded from the rc-4 releases page:
|
I'm not sure if it will work, but you can try setting LD_LIBRARY_PATH to include the path to the CUDA libs before starting ollama and see if it picks up the v12 library (some minor code changes might be required to get this fully sorted out, though). |
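A minimal sketch of that experiment, assuming the default JetPack CUDA location:

```sh
# Put the host CUDA libraries ahead of the bundled ones for the loader
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
ollama serve
```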
The latest JetPack release supports CUDA Toolkit 12; I haven't had time to flash my devices, as I had a beta release this month, and I haven't checked if it's gone live yet. JetPack 5 doesn't support CUDA 12, and the underlying OS (i.e. Linux headers), the Nvidia driver, the CUDA toolkit, etc. are all static and cannot be upgraded. JetPack 6 is supposed to change this; maybe all I need to do is upgrade. I just checked, and JP6 is still in developer preview. |
Regarding your idea @dhiltgen on including the LD_LIBRARY_PATH: I tried that before. It finds the cuda12 lib but seems to prefer the packaged one (cuda11). Building ollama natively on JetPack 6 DP (CUDA 12) by following the generate/build workflow did work. It finds and packages the correct CUDA libs into the binary, and the resulting binary works as expected. |
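For anyone else on JetPack 6 DP who wants to reproduce this, the generate/build workflow is the standard source build; a sketch assuming Go, gcc, and cmake are already installed:

```sh
git clone https://github.com/ollama/ollama.git
cd ollama
# go generate compiles the bundled llama.cpp runners against the local CUDA install
go generate ./...
go build .
./ollama serve
```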
@dhiltgen Does the workflow build container for ARM64 have to be CentOS/Rocky based, or can you use the Ubuntu 20.04 one? When I built the binary using the CentOS container, I had the same issue as with the downloaded binary; when I used the Ubuntu 20.04 one, it worked. Example Dockerfile and dependency script here (note that I cloned the ollama repo into the folder I built the container in so that I didn't have to git clone inside the container build) |
Great to hear that building from source still works. So we just need to figure out how to get the official builds working.
The problem is glibc versions. Ubuntu generally tends to be more up-to-date, but that means Go binaries you compile on it won't work on older distros. We try to compile on an older base to maximize compatibility of the resulting pre-built binaries. Once we figure out the right combination, we may have to synthesize the ARM CUDA container base image and tools ourselves instead of relying on the official Nvidia ones hosted on Docker Hub. |
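One way to see the constraint concretely is to compare the highest glibc symbol version a binary requires against what the target system provides; a sketch:

```sh
# Highest glibc symbol version the ollama binary was linked against
objdump -T ./ollama | grep -o 'GLIBC_[0-9.]*' | sort -uV | tail -1

# glibc version available on the Jetson
ldd --version | head -1
```

If the first number is higher than the second, the binary will refuse to load on that system.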
Ubuntu 20.04 on JetPack 5 has gcc 10.5 (shown as compatible with gcc 9.6) and is running glibc 2.31. I'm not sure what versions the CUDA 11-3 Ubuntu container is running, but they're likely comparable or close. Would that work for this purpose? |
Added note: I couldn't find a CUDA CentOS 7 ARM64 container (AMD64 only). nvidia/cuda:11.3.1-devel-rockylinux8 runs GCC 8.5.0, glibc 2.28 |
@remy415 I just got a Jetson Orin, so I'm able to test now. What I'm seeing is a hang during model load. I tried compiling with a few different CUDA versions, but none worked (v12 reported the Jetson's driver as too old; my setup has v11.4). I was able to get it running with a little live surgery: set LD_LIBRARY_PATH to include the CUDA from the host, start ollama, wait for it to extract the runners, then manually remove all the bundled CUDA libraries
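Spelled out as shell steps, that surgery might look like the following; the /tmp/ollama* extraction path is an assumption about where the runners land:

```sh
# 1. Prefer the host's CUDA libraries
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

# 2. Start the server; it extracts its bundled runners on startup
./ollama serve &
sleep 10

# 3. Remove the bundled CUDA libraries so only the host copies are found
rm /tmp/ollama*/runners/cuda*/libcudart.so*   # assumed extraction layout
```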
I'm checking into glibc version compatibility, and also looking at the output of readelf for the various binaries I've collected. Also, it's kinda weird, but your production binary is ~100 MB smaller than the one I compile on the Jetson (even considering I don't compile for CPU and don't have ROCm builds) |
I tried installing an updated toolkit (cuda-toolkit-11-4 in Rocky Linux 8) and got this error when trying to run:
Still digging through the CUDA error and not finding anything promising; I'm guessing it's a toolkit version mismatch issue. Adding a note to my previous comment: libcublas 11.5.1.109 -> 11.6.6.84 added 42 MB
|
I've tried building with cuda-toolkit-11-3, 11-4, 11-7, and 11-8; they all have the same failure. I reached out to dusty-nv via email to see if he has any insight into compiling for Tegra on non-Tegra devices; if I hear anything back, I will update here. |
@dhiltgen I spoke with dusty-nv, the engineer at Nvidia who manages the Jetson container stack. Here's what he had to say:
If it helps, the upstream llama.cpp repo uses Ubuntu 20.04 and Ubuntu 22.04 to build their Linux binaries. |
I've adjusted the behavior of the system for the upcoming 0.1.32 release so that we'll load the CUDA library from the LD_LIBRARY_PATH before our bundled version, which should help mitigate this. As long as you include the CUDA lib dir in your LD_LIBRARY_PATH for the ollama server, it should work. Ultimately I'd still like to get an older glibc-based build setup defined that has a CUDA library that works on Jetson, so I'll keep this issue open for now. |
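For script installs, where Ollama runs as a systemd service, the variable can be set with a service override; a sketch assuming the default JetPack CUDA path:

```sh
sudo systemctl edit ollama.service
# In the editor, add:
#   [Service]
#   Environment="LD_LIBRARY_PATH=/usr/local/cuda/lib64"
sudo systemctl daemon-reload
sudo systemctl restart ollama
```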
@dhiltgen I am on Ollama 0.1.32 with a Jetson Orin 8 GB. I tried with CUDA 11.4 and updated to 12.2. What I noticed is that after updating to 0.1.32, instead of just crashing, it now throws the below exception and falls back to CPU. I wonder if it's my setup or still an issue.
|
@Ahmad-Bunni Which version of JetPack are you running? JP5 should have cuda-11.4 installed; JP6 should have cuda-12.2. If you are running JP5, please uninstall the cuda-12 related packages and run it again. |
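A hedged sketch of that cleanup on JetPack 5 (confirm the package list before purging; names vary by install method):

```sh
# List installed CUDA 12 packages and confirm the set before removing
dpkg -l | awk '$1 == "ii" && $2 ~ /^cuda-12/ {print $2}'

# Remove them so the JetPack-native CUDA 11.4 libraries are found first
sudo apt-get purge $(dpkg -l | awk '$1 == "ii" && $2 ~ /^cuda-12/ {print $2}')
```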
@remy415 Thank you, it works! Like you mentioned, JP5 comes with CUDA 11.4 by default; however, that did not work until I upgraded Ollama to 0.1.32. Rolling back my CUDA to 11.4 + Ollama 0.1.32 now works like a charm, cheers! |
@Ahmad-Bunni Awesome, I'm glad you got it to work. While you're at it, also check out dusty-nv's Docker container setup for Jetsons: dusty-nv. Good luck and let me know if you need anything |
@MrDelusionAI can you make sure your LD_LIBRARY_PATH contains a directory with the CUDA libraries? I believe the failure you're seeing may be due to us using our bundled ARM64 cudart library, which for some reason isn't compatible with Jetsons. |
@dhiltgen I can confirm that I am hitting the same issue. Mine used to work with JetPack 5, CUDA 11.4, and 0.1.32. However, after upgrading to JetPack 6, which uses CUDA 12.2, I have the issue again on 0.1.33. The output of |
To clarify, it sounds like JetPack 5 systems with CUDA v11 are now working properly, but CUDA v12 based systems are not. Is that correct? If that's accurate, then it sort of makes sense. We compile against v11, and recent changes in the way we handle PATH/LD_LIBRARY_PATH mean we favor the system CUDA library; given the same major version, using the host version on Jetson makes it work properly. (However, for other users on other systems this change seems to be resulting in regressions.) For the v12 systems, since we compile against v11, the v12 host library isn't working (at least that sounds plausible, if my understanding is correct). |
@Ahmad-Bunni @dhiltgen I just flashed my Jetson Orin Nano with 36.3.0 yesterday and got everything up and running (Docker, container runtime, etc). I installed Ollama with the script, and when I ran it, it just sat there loading indefinitely (I let it try to load tinyllama for ~10 minutes). No crashes, no errors. I don't think it loaded the libcudart in the LD_LIBRARY_PATH at all. I'm slightly fuzzy on how CUDA handles driver versions; I was under the impression you could compile something with CUDA 11-4 and it would work on future CUDA versions. What might be happening here (and this is just speculation, I'll need to test on my end at least) is that it's compiled for 11-4, and it loads the 11-4 library, but I have 12.2 installed. Would you be able to throw in a 12-2 build from your CI pipeline for JP6 users to test? |
@dhiltgen I went ahead and built the binary myself on the JP6/36.3.0 Jetson and it worked just fine, so it's just the script-installed binary that doesn't work |
@remy415 Spot on. I was able to see what is loaded during startup. I'm almost certain building the binary on JP6 would work for me too, just because it would be 12.x. I think the main problem here is that it relies on what's bundled rather than using what's set in LD_LIBRARY_PATH |
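One way to confirm which libcudart a running server actually mapped (a sketch; assumes the server was started as `ollama serve`):

```sh
# Show every cudart library mapped into the ollama server process
grep -h cudart /proc/$(pgrep -f 'ollama serve' | head -1)/maps | awk '{print $NF}' | sort -u
```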
I just want to clarify that it does
Yes, it is loading the bundled libraries; that was the original point. According to the Nvidia documentation, it should work though (source: https://docs.nvidia.com/deploy/cuda-compatibility/). I'll dig around the documentation some more |
I believe Ollama is a great project. I have tried different ideas to get Ollama to utilise the GPU, but it still uses the CPU.
I have currently flashed JetPack 6 DP onto the AGX Orin Dev Kit. I believe this JetPack version will make it easier for Ollama to use the GPU, if you are able to add support for it.
Thank you