Add binary support for Nvidia Jetson Orin - JetPack 6 #2408

Open
MrDelusionAI opened this issue Feb 8, 2024 · 46 comments
Labels
nvidia Issues relating to Nvidia GPUs and CUDA

Comments

@MrDelusionAI

I believe Ollama is a great project. I have tried different ideas to get Ollama to utilise the GPU, but it still uses the CPU.
I have currently flashed JetPack 6 DP onto the AGX Orin Dev Kit. I believe this JetPack version will make it easier for Ollama to use the GPU, if you are able to add support for it.

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Aug_15_22:08:11_PDT_2023
Cuda compilation tools, release 12.2, V12.2.140
Build cuda_12.2.r12.2/compiler.33191640_0
nvidia-smi
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 540.2.0                Driver Version: N/A          CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Orin (nvgpu)                  N/A  | N/A              N/A |                  N/A |
| N/A   N/A  N/A               N/A /  N/A | Not Supported        |     N/A          N/A |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

Thank you

@davidtheITguy

Just echoing the above issue. I've attempted to run the docker container for Ollama. Running docker with this parameter (as instructed):
--gpus=all

does not work. Per the above user's comment, JetPack and CUDA are all available, but only CPU processing works with the container.

I've tried this docker invocation as well, and it doesn't work either:
docker run --runtime nvidia ...
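
For reference, the full invocation being tested is roughly the standard one from the Ollama container docs, just with the NVIDIA runtime swapped in for --gpus=all (adjust the volume and port for your setup):

# assumes the NVIDIA container runtime is already configured for docker on the Jetson
docker run -d --runtime nvidia \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama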

Thank you

@ladrians

+1

@davidtheITguy

This is by no means solved yet, but I'm now monitoring the issue below, which you may want to follow too:

#1979

@telemetrieTP23

On my Jetson Xavier AGX with JetPack 5.1, Ollama worked fine on the GPU up to version 0.1.17.
But now, on the new Jetson Orin AGX, it is not even possible to install a specific version (0.1.17) with this command:
curl -fsSL https://ollama.com/install.sh | sed 's#https://ollama.com/download#https://github.com/jmorganca/ollama/releases/download/v0.1.17#' | sh

It always installs the latest version (0.1.25).

Something was changed after 0.1.17 so the GPU is no longer seen by Ollama.
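
In case it helps, a possible workaround is to grab the 0.1.17 binary straight from the GitHub releases page instead of going through the install script (assuming that release has an arm64 asset; the asset name follows the pattern used on the releases page):

# download the release binary directly and put it on the PATH
curl -L https://github.com/jmorganca/ollama/releases/download/v0.1.17/ollama-linux-arm64 -o ollama
chmod +x ollama
sudo mv ollama /usr/local/bin/ollama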

@davidtheITguy

@telemetrieTP23 Look here #1979

@klimchuk

This doesn't work on the Jetson Nano with JetPack 4.6, meaning the GPU is not used even after following the instructions provided at https://github.com/ollama/ollama/blob/main/docs/tutorials/nvidia-jetson.md

@davidtheITguy

@klimchuk Not sure if the fix will support Jetpack 4.6 (will def work with 5.1.x), but check and read here: #1979

@oiwn

oiwn commented Feb 28, 2024

Yes, these Jetson Nano devices with 4 GB of RAM are capable of running a fairly wide range of models, from BERT to quantized 7B LLMs. It's pretty sad to see a single-board computer launched in 2019 be nearly useless for running language models.

edit: after a few days of research, it looks like the Jetson Nano's ancient GPU architecture and the ancient toolchain provided by Nvidia make it nearly impossible to run language models on it. You basically can't even use PyTorch > 1.10 on it.

@bmizerany added the nvidia label (Issues relating to Nvidia GPUs and CUDA) on Mar 11, 2024
@remy415
Contributor

remy415 commented Mar 25, 2024

This should now be fixed with merge of #2279

@MrDelusionAI
Author

Hey, thanks everyone so much! I just want to confirm: now that the merge is complete, if I update Ollama, the Jetson GPU should be supported? Would that be the same for the Docker image, or should I just run the installer to save more headache? Once again, thank you everyone!

@remy415
Contributor

remy415 commented Mar 25, 2024

@MrDelusionAI I have not done anything with containers yet. I’m still digging through dusty-nv’s container resources to figure it out, I have been concentrating on getting the binary to work on bare metal. I don’t think containers work yet due to how quirky containers are on Jetson devices with GPU support.

If you pull the repo and compile it, that binary should work on your Jetson. I think Jetson support will be in the next binary release (0.1.30?). Keep checking the releases if you don't want to self-compile.
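
Roughly, building from source on the Jetson looks like this (a sketch assuming Go, gcc, cmake and the JetPack CUDA toolkit are already installed; see docs/development.md in the repo for the authoritative steps):

git clone https://github.com/ollama/ollama.git
cd ollama
export PATH=/usr/local/cuda/bin:$PATH   # so the build can find nvcc (stock JetPack location)
go generate ./...                       # builds the llama.cpp runners, including the CUDA one
go build .
./ollama serve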

@remy415
Contributor

remy415 commented Mar 26, 2024

@MrDelusionAI If you want to build your own Ollama container to run as a service on a Jetson device, please see this. I tested it on my Jetson Orin Nano 8gb running L4T r35.4.1. Please let me know if you have any issues.

@MrDelusionAI
Author

> @MrDelusionAI If you want to build your own Ollama container to run as a service on a Jetson device, please see this. I tested it on my Jetson Orin Nano 8gb running L4T r35.4.1. Please let me know if you have any issues.

Oh great thanks, I will try both the binary when its pushed into the main version and container as service from your link. Im running Jetpack 6 so will follow your guidance.

Thanks for everyones efforts!

@dhiltgen
Collaborator

The pre-release for 0.1.30 is available now, and contains @remy415's change. I don't have a Jetson yet so I can't validate the build, but folks should give it a spin and let us know how it goes.

https://github.com/ollama/ollama/releases/tag/v0.1.30-rc4

@remy415
Contributor

remy415 commented Mar 26, 2024

I copied the binary from the 0.1.30-rc4 container and it had some issues running. I did notice you pushing ARM changes so I’ll try again when the container is updated. I haven’t tried pulling the binary directly, I will do so when I get home.

I’m also replicating the ARM build workflow in the centos containers, I’ll report back when I have an update.

@m8nky

m8nky commented Mar 27, 2024

Awesome, it looks promising. I just tried the rc4 arm binary this morning. It seems the GPU is detected, CUDA is bound, and offloading works. However, after running a model, the process gets stuck in a long-running loop (high CPU load). No prompt is served. After several minutes it crashes.
2024.03.27-ollama-jetson6.log

@remy415
Contributor

remy415 commented Mar 27, 2024

@dhiltgen I copied the syntax from your workflow for ARM (cuda centos container + commands). It compiled on my Jetson, found the GPU, and then crashed similarly to what was reported above. I have a hunch it may be related to how the two OSs compile the binary (centos vs ubuntu 20.04). I turned on as much debugging as I could and ran a binary compiled with the Centos container workflow vs the binary I compiled directly on my Jetson:

  • Jetson
..................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =    70.50 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   164.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    12.00 MiB
llama_new_context_with_model: graph nodes  = 1060
llama_new_context_with_model: graph splits = 2
[1711553138] warming up the model with an empty run
{"function":"initialize","level":"INFO","line":422,"msg":"initializing slots","n_slots":1,"tid":"281471143109072","timestamp":1711553141}
{"function":"initialize","level":"INFO","line":431,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"281471143109072","timestamp":1711553141}
time=2024-03-27T15:25:41.243Z level=INFO source=dyn_ext_server.go:159 msg="Starting llama main loop"
[1711553141] llama server main loop starting
{"function":"update_slots","level":"INFO","line":1550,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"281471101694416","timestamp":1711553141}
time=2024-03-27T15:25:41.250Z level=DEBUG source=prompt.go:172 msg="prompt now fits in context window" required=1 window=2048
[GIN] 2024/03/27 - 15:25:41 | 200 | 40.615734458s |       127.0.0.1 | POST     "/api/chat"
  • Centos:
..................................................................................................
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =    70.50 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   164.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    12.00 MiB
llama_new_context_with_model: graph nodes  = 1060
llama_new_context_with_model: graph splits = 2
[1711552130] warming up the model with an empty run
CUDA error: CUBLAS_STATUS_EXECUTION_FAILED
  current device: 0, in function ggml_cuda_mul_mat_batched_cublas at /opt/ollama/llm/llama.cpp/ggml-cuda.cu:10604
  cublasGemmBatchedEx(ctx.cublas_handle(), CUBLAS_OP_T, CUBLAS_OP_N, ne01, ne11, ne10, alpha, (const void **) (ptrs_src.get() + 0*ne23), CUDA_R_16F, nb01/nb00, (const void **) (ptrs_src.get() + 1*ne23), CUDA_R_16F, nb11/nb10, beta, ( void **) (ptrs_dst.get() + 0*ne23), cu_data_type, ne01, ne23, cu_compute_type, CUBLAS_GEMM_DEFAULT_TENSOR_OP)
GGML_ASSERT: /opt/ollama/llm/llama.cpp/ggml-cuda.cu:193: !"CUDA error"
<the rest is a stack trace of the deadlocked goroutines>

I'll play around a bit with compilers and see if I can get the Centos container to compile a binary that works on the Jetson.

@dhiltgen
Collaborator

Another possibility is cuda version. We're trying to link against v11 to have broader support, but maybe only v12 works on these devices?

@remy415
Contributor

remy415 commented Mar 27, 2024

I thought about that too, but from what I could tell: CUDA toolkits are "future compatible", meaning everything that works on v11 works on v12+. CUDA drivers are "backwards compatible". I compile it on my Jetson with v11.4, and that binary should work on systems with v12.

I did notice Ubuntu 22.04 was used to compile the runtime binary, maybe it's a GCC -> nvcc thing.

@remy415
Contributor

remy415 commented Mar 27, 2024

I don't know enough about gcc/C compiling to make heads or tails of this. Do you see anything helpful here?

Compiled natively:

tegra@ok3d-1:~/ok3d/ollama-container/dev/bintest$ ldd ollama-jetson-native
        linux-vdso.so.1 (0x0000ffff80177000)
        libresolv.so.2 => /lib/aarch64-linux-gnu/libresolv.so.2 (0x0000ffff80104000)
        libpthread.so.0 => /lib/aarch64-linux-gnu/libpthread.so.0 (0x0000ffff800d3000)
        libdl.so.2 => /lib/aarch64-linux-gnu/libdl.so.2 (0x0000ffff800bf000)
        libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000ffff7ff4c000)
        /lib/ld-linux-aarch64.so.1 (0x0000ffff80147000)

Downloaded from the rc-4 releases page:

tegra@ok3d-1:~/ok3d/ollama-container/dev/bintest$ ldd ollama-linux-arm64
        linux-vdso.so.1 (0x0000ffffac165000)
        libresolv.so.2 => /lib/aarch64-linux-gnu/libresolv.so.2 (0x0000ffffac0f2000)
        libpthread.so.0 => /lib/aarch64-linux-gnu/libpthread.so.0 (0x0000ffffac0c1000)
        librt.so.1 => /lib/aarch64-linux-gnu/librt.so.1 (0x0000ffffac0a9000)
        libdl.so.2 => /lib/aarch64-linux-gnu/libdl.so.2 (0x0000ffffac095000)
        libstdc++.so.6 => /lib/aarch64-linux-gnu/libstdc++.so.6 (0x0000ffffabeb0000)
        libm.so.6 => /lib/aarch64-linux-gnu/libm.so.6 (0x0000ffffabe05000)
        libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000ffffabc92000)
        /lib/ld-linux-aarch64.so.1 (0x0000ffffac135000)
        libgcc_s.so.1 => /lib/aarch64-linux-gnu/libgcc_s.so.1 (0x0000ffffabc6e000)

@dhiltgen
Collaborator

I'm not sure if it will work, but you can try setting LD_LIBRARY_PATH to include the path to the cuda libs before starting ollama and see if it picks up the v12 library. (some minor code changes might be required to get this fully sorted out though)
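
Something like this, assuming the stock JetPack CUDA location:

# point the loader at the host CUDA libs before starting the server
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
ollama serve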

@remy415
Contributor

remy415 commented Mar 27, 2024

The latest JetPack release supports CUDA Toolkit 12. I haven't had time to flash my devices, as I had a beta release this month, and I hadn't checked whether it's gone live yet. JetPack 5 doesn't support CUDA 12, and the underlying OS (i.e. Linux headers), the Nvidia driver, the CUDA toolkit, etc. are all static and can't be upgraded. JetPack 6 is supposed to change this, so maybe all I need to do is upgrade. I just checked and JP6 is still in developer preview.

@m8nky

m8nky commented Mar 27, 2024

Regarding your idea @dhiltgen of including the LD_LIBRARY_PATH: I tried that before. It finds the CUDA 12 lib, but seems to prefer the packaged one (CUDA 11).
source=gpu.go:311 msg="Discovered GPU libraries: [/tmp/ollama578764547/runners/cuda_v11/libcudart.so.11.0 /usr/local/cuda/lib64/libcudart.so.12.2.140 /usr/local/cuda/targets/aarch64-linux/lib/libcudart.so.12.2.140 /usr/local/cuda-12/targets/aarch64-linux/lib/libcudart.so.12.2.140 /usr/local/cuda-12.2/targets/aarch64-linux/lib/libcudart.so.12.2.140]"

Building Ollama natively on JetPack 6 DP (CUDA 12) by following the generate/build workflow did work. It finds and packages the correct CUDA libs into the binary, and the resulting binary works as expected.

@remy415
Contributor

remy415 commented Mar 27, 2024

@dhiltgen Does the workflow build container for ARM64 have to be Centos/Rocky based or can you use the ubuntu 20.04 one?

When I built the binary using the Centos container, I had the same issue as the downloaded binary. When I used nvidia/cuda:11.3.1-devel-ubuntu20.04, the resulting binary worked on my bare OS. To get it to run in a container properly, I had to use a dusty-nv container as my runtime, I used dustynv/build-essential:r35.4.1 as it's one of the smaller containers I could find at a miniscule 5Gb. I tried using nvidia/cuda:11.3.1-runtime-ubuntu20.04 and the 11.4.3-runtime-ubuntu20.04 containers, neither of them worked for runtime.

Example dockerfile and dependency script here (note that I cloned the ollama repo into the folder I built the container in so that I didn't have to git clone inside the container build)
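
Roughly the shape of it as shell, in case the link goes stale (my image tag here is made up; the base images are the ones named above):

# clone ollama next to the Dockerfile so the build doesn't have to git clone inside
git clone https://github.com/ollama/ollama.git
# the Dockerfile builds in nvidia/cuda:11.3.1-devel-ubuntu20.04 and runs in dustynv/build-essential:r35.4.1
docker build -t ollama-jetson:r35.4.1 .
docker run -d --runtime nvidia -p 11434:11434 -v ollama:/root/.ollama ollama-jetson:r35.4.1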

@dhiltgen
Collaborator

Great to hear building from source does still work. So we just need to figure out how to get the official builds working.

> Does the workflow build container for ARM64 have to be Centos/Rocky based or can you use the ubuntu 20.04 one?

The problem is glibc versions. Ubuntu generally tends to be more up to date, but that means Go binaries you compile on it won't work on older distros. We try to compile on an older base to maximize compatibility of the resulting pre-built binaries. Once we figure out what the right combination is, we may have to synthesize the ARM CUDA container base image and tools ourselves instead of relying on the official Nvidia ones hosted on Docker Hub.

@remy415
Contributor

remy415 commented Mar 27, 2024

The Ubuntu 20.04 on JetPack 5 has gcc 10.5 (shown as compatible with gcc 9.6) and runs glibc 2.31. Not sure what versions the CUDA 11.3 Ubuntu container is running, but it's likely comparable or close. Would that work for this purpose?
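
For reference, those versions come from the standard checks on the Jetson itself:

gcc --version | head -n1   # gcc 10.5 on my JP5 image
ldd --version | head -n1   # reports the glibc version (2.31 here)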

@remy415
Contributor

remy415 commented Mar 28, 2024

Added note: Couldn't find a CUDA Centos 7 ARM64 container (AMD64 only). nvidia/cuda:11.3.1-devel-rockylinux8 runs GCC 8.5.0, glibc 2.28

@dhiltgen
Collaborator

@remy415 I just got a Jetson Orin, so I'm able to test now. What I'm seeing is a hang during model load. I tried compiling with a few different CUDA versions, but none worked (the v12 builds reported the Jetson's driver being too old; my setup has v11.4).

I was able to get it running with a little live surgery: set LD_LIBRARY_PATH to include the CUDA from the host, start ollama, wait for it to extract the runners, manually remove all the bundled CUDA libraries (rm /tmp/ollama*/runners/cuda*/libcu*), and then try to load a model. It winds up linking to the host CUDA library instead of our bundled version, and then it runs on the GPU.

So the build and linking are producing a working executable; we're just bundling a CUDA library that for some reason won't work properly on Jetson systems. I'm not sure yet what the optimal fix is, but I'll explore alternative container base images to see if we can find one that balances our desire for an old glibc with one that actually works on Jetsons. If I can't find one, then maybe we'll need to make some code changes to be able to use the host CUDA libs in some(?) cases.
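
Spelled out as commands, the surgery above looks roughly like this (paths are the defaults; the /tmp directory name varies per run):

export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
./ollama serve &                         # wait for it to extract the runners under /tmp/ollama*/runners
rm /tmp/ollama*/runners/cuda*/libcu*     # drop the bundled CUDA libs
./ollama run tinyllama                   # the runner now links the host CUDA libs and uses the GPU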

@remy415
Contributor

remy415 commented Mar 28, 2024

I'm checking into glibc version compatibility, and also looking at the output of readelf for the various binaries I've collected.
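
For example, to see the highest glibc symbol versions each binary actually requires:

readelf -sW ollama-linux-arm64 | grep -o 'GLIBC_[0-9.]*' | sort -uV | tail -n 3
readelf -sW ollama-jetson-native | grep -o 'GLIBC_[0-9.]*' | sort -uV | tail -n 3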

Also, kinda weird, but your production binary is ~100 MB smaller than the one I compile on the Jetson (even considering I don't compile for CPU and don't have ROCm builds).


@remy415
Contributor

remy415 commented Mar 28, 2024

I tried installing an updated toolkit (CUDA-TOOLKIT-11-4 in Rockylinux8) and got this error when trying to run:

CUDA error: CUBLAS_STATUS_NOT_SUPPORTED
  current device: 0, in function ggml_cuda_mul_mat_batched_cublas at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:10604
cublasGemmBatchedEx(
    ctx.cublas_handle(), 
    CUBLAS_OP_T, 
    CUBLAS_OP_N, 
    ne01, 
    ne11,
    ne10, 
    alpha, 
    (const void **) (ptrs_src.get() + 0*ne23), 
    CUDA_R_16F, 
    nb01/nb00, 
    (const void **) (ptrs_src.get() + 1*ne23), 
    CUDA_R_16F, 
    nb11/nb10,
    beta, 
    ( void **) (ptrs_dst.get() + 0*ne23), 
    cu_data_type, 
    ne01, 
    ne23, 
    cu_compute_type, 
    CUBLAS_GEMM_DEFAULT_TENSOR_OP
)
GGML_ASSERT: /go/src/github.com/jmorganca/ollama/llm/llama.cpp/ggml-cuda.cu:193: !"CUDA error"

Still digging through the CUDA error, not finding anything promising; guessing it's a toolkit version mismatch issue.

Adding a note to the previous comment:
between one minor version upgrade, there is an additional ~170 MB of CUDA libraries, lol

libcublas 11.5.1.109 -> 11.6.6.84 added 42 MB
libcublasLt 11.5.1.109 -> 11.6.6.84 added 128 MB
libcudart 11.3.109 -> 11.4.298 added 61 KB
Total increase: ~170 MB

  • nvidia/cuda:11.3.1-devel-rockylinux8
-rwxr-xr-x 1 root root 126456824 May  4  2021 /usr/local/cuda/lib64/libcublas.so.11.5.1.109
-rwxr-xr-x 1 root root 245372792 May  4  2021 /usr/local/cuda/lib64/libcublasLt.so.11.5.1.109
-rwxr-xr-x 1 root root 638136 May  4  2021 /usr/local/cuda/lib64/libcudart.so.11.3.109
  • Jetpack 5 (ubuntu 20.04, l4t 35.4.1)
-rw-r--r-- 1 root root 168574840 Sep 19  2022 /usr/local/cuda/lib64/libcublas.so.11.6.6.84
-rw-r--r-- 1 root root 373884448 Sep 19  2022 /usr/local/cuda/lib64/libcublasLt.so.11.6.6.84
-rw-r--r-- 1 root root  699488 Sep 14  2022 /usr/local/cuda/lib64/libcudart.so.11.4.298

@remy415
Contributor

remy415 commented Mar 29, 2024

I've tried building with cuda-toolkit-11-3, 11-4, 11-7, 11-8; they all have the same failure.

I reached out to dusty-nv via email to see if he has any insights into compiling for Tegra on non-Tegra devices, if I hear anything back I will update here.

@remy415
Contributor

remy415 commented Mar 30, 2024

@dhiltgen I spoke with dusty-nv, the engineer at NVidia that manages the Jetson container stack. Here’s what he had to say:

> That cuda:11-4-rockylinux8 container must be for ARM SBSA (ARM server), not Jetson, because NVIDIA only puts out the Jetson containers for Ubuntu. I'm not sure of all the intricacies of cross-distro compatibilities with GLIBC, CUDA, and the GCC toolchain, but suffice it to say I would recommend sticking with Ubuntu.
>
> With Jetson moving to a model where the version of CUDA is decoupled from JetPack, for the containers I would prefer to just have a Dockerfile for ollama in jetson-containers so that it will rebuild against the CUDA Toolkit/cuDNN that you have installed (CUDA 11.4 / 12.2 / 12.4 / etc). Then ollama will automatically be built against whichever version of CUDA that you want. Of course, if they are releasing wheels on their website outside of the container, they can do what they need to to support that.

If it helps, the upstream llama.cpp repo uses Ubuntu 20.04 and Ubuntu 22.04 to build their Linux binaries.

@dhiltgen
Collaborator

I've adjusted the behavior of the system with the upcoming 0.1.32 release so that we'll load the cuda library from the LD_LIBRARY_PATH before our bundled version, which should help mitigate this. As long as you include the cuda lib dir in your LD_LIBRARY_PATH for the ollama server, it should work. Ultimately I'd still like to get an older glibc based build setup defined that has a cuda library that works on Jetson, so I'll keep this issue open for now.
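
If you installed with the install script, the server runs as the ollama systemd service, so the variable has to be set on the service itself; a sketch (CUDA path assumes a stock JetPack layout):

sudo systemctl edit ollama
# add to the override file that opens:
#   [Service]
#   Environment="LD_LIBRARY_PATH=/usr/local/cuda/lib64"
sudo systemctl restart ollama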

@Ahmad-Bunni

@dhiltgen I am on Ollama 0.1.32 with a Jetson Orin 8GB. I tried with CUDA 11.4 and then updated to 12.2; ollama run still has an issue.

What I noticed is that after updating to 0.1.32, instead of just crashing, it now throws the exception below and falls back to CPU. I wonder if it's my setup or if it's still an issue.

CUDA error: CUBLAS_STATUS_EXECUTION_FAILED
  current device: 0, in function ggml_cuda_mul_mat_batched_cublas at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:1848
  


@remy415
Contributor

remy415 commented Apr 15, 2024

@Ahmad-Bunni Which version of Jetpack are you running? JP5 should have cuda-11.4 installed, JP6 should have cuda-12.2. If you are running JP5, please uninstall cuda-12 related packages and run it again.
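
Something along these lines will show whether any CUDA 12 packages snuck onto the JP5 image, and remove them (a sketch; review the list before purging):

# list installed CUDA 12 related packages
dpkg-query -W -f '${Package}\n' | grep -E '(cuda|cublas|cudnn).*12'
# purge whatever the list above shows
sudo apt-get purge $(dpkg-query -W -f '${Package}\n' | grep -E '(cuda|cublas|cudnn).*12')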

@Ahmad-Bunni

@remy415 Thank you, it works! Like you mentioned, by default JP5 comes with CUDA 11.4; however, that did not work before I upgraded Ollama to 0.1.32. Now rolling back my CUDA to 11.4 + Ollama 0.1.32 works like a charm, cheers!

@remy415
Contributor

remy415 commented Apr 15, 2024

@Ahmad-Bunni Awesome, I'm glad you got it to work. While you're at it, also check out dusty-nv's Docker container setup for Jetsons: dusty-nv. Good luck and let me know if you need anything

@MrDelusionAI
Author

Afternoon all, hope you are all well. I tried a fresh install of JetPack 6 DP on the 32GB Orin Jetson Dev Kit, Ollama version 0.1.33, trying tinyllama. I was excited to see the GPU being utilised, however it gets stuck in the loading phase and then throws up an error afterwards (error screenshots attached).

Has anyone had similar issues?

Thank you all

@dhiltgen
Collaborator

dhiltgen commented May 3, 2024

@MrDelusionAI can you make sure your LD_LIBRARY_PATH contains a directory with the cuda libraries? I believe the failure you're seeing may be due to us using our bundled ARM64 cudart library which for some reason isn't compatible with Jetsons.

@Ahmad-Bunni

@dhiltgen I can confirm that I am hitting the same issue. Mine used to work with JetPack 5, CUDA 11.4 and 0.1.32. However, after upgrading to JetPack 6, which uses CUDA 12.2, I have the issue again on 0.1.33. The output of echo $LD_LIBRARY_PATH is /usr/local/cuda-12.2/lib64

@dhiltgen
Collaborator

dhiltgen commented May 6, 2024

To clarify, it sounds like JetPack 5 systems with CUDA v11 are now working properly; however, CUDA v12 based systems are not. Is that correct?

If that's accurate, then that sort of makes sense. We're compiled with v11, and recent changes in the way we're handling the PATH/LD_LIBRARY_PATH means we're favoring the system cuda library, and given the same major version, using the host version on Jetson makes it work properly. (However for other users on other systems this change seems to be resulting in regressions). For the v12 systems, since we're compiled against v11, the v12 host library isn't working (at least that sounds plausible if my understanding is correct.)

@remy415
Contributor

remy415 commented May 6, 2024

@Ahmad-Bunni @dhiltgen I just flashed my Jetson Orin Nano with 36.3.0 yesterday and got everything up and running (docker, container runtime, etc).

I installed Ollama with the script and when I ran it, it just sat there loading indefinitely (I let it try to load tinyllama for ~10 minutes). No crashes, no errors.

I don't think it loaded the libcudart from the LD_LIBRARY_PATH at /usr/local/cuda/lib64; it seemed to load the bundled library.

I'm slightly fuzzy on how CUDA handles driver versions; I was under the impression you could compile something with CUDA 11-4 and it would work on future CUDA versions; what might be happening here (and this is just speculation, I'll need to test on my end at least) is that it's compiled for 11-4, and it loads the 11-4 library, but I have 12.2 installed. Would you be able to throw in a 12-2 build from your CI pipeline for JP6 users to test?

@remy415
Contributor

remy415 commented May 6, 2024

@dhiltgen I went ahead and built the binary myself on the JP6/36.3.0 Jetson and it worked just fine, so it's just the script-installed binary that doesn't work

@Ahmad-Bunni

@remy415 Spot on. I was able to see what is loaded during serve, and whether CUDA 11 or later is installed, it still loads libcudart from the bundled library. My guess for why it works with CUDA 11.x is just that it matches the version of the libcudart loaded during serve, which is 11.1 if I'm not mistaken.

I'm almost certain building the binary on JP6 would work for me too, just because it would be 12.x. I think the main problem here is that it relies on what's bundled rather than using what's set in LD_LIBRARY_PATH.

@remy415
Contributor

remy415 commented May 6, 2024

> @Ahmad-Bunni @dhiltgen I just flashed my Jetson Orin Nano with 36.3.0 yesterday and got everything up and running (docker, container runtime, etc).

I just want to clarify that it does not work out of the box; it just kind of perpetually loads and never gives me an error or crash. On the client side the dots just keep spinning.

> I'm almost certain building the binary on JP6 would work for me too, just because it would be 12.x. I think the main problem here is that it relies on what's bundled rather than using what's set in LD_LIBRARY_PATH.

Yes, it is loading the bundled libraries; that was the original point. According to the NVIDIA documentation, it should work though (source: https://docs.nvidia.com/deploy/cuda-compatibility/).

I’ll dig around the documentation some more

@dhiltgen changed the title from "Add support for Nvidia Jetson" to "Add binary support for Nvidia Jetson Orin - JetPack 6" on May 31, 2024
@dhiltgen
Collaborator

dhiltgen commented May 31, 2024

Based on my current understanding, to support binary releases, we'll need one distinct cuda build for each JetPack major version. I'm going to dedup and tidy up our issues to track this with 3 distinct issues.

See also #4140 and #4693
