Official arm64 build does not work on Jetson Nano Orin #3406
Comments
@gab0220 thank you for reporting this. The issue right now is that the OS Jetsons run on can't use the CUDA libraries bundled by the process used to compile the official binary. We're still trying to pinpoint the exact cause to see if there's a way to keep using the same process with minor adjustments. In the meantime, you should be able to build the binary quickly on the Jetson itself; note that it is no longer necessary to follow the referenced tutorial, though it should still work if you compile yourself:

1. Set up the required environment variables.
2. Ensure the required build tools are installed.
3. Clone the repo and build, making sure the environment variables from step 1 are set first (see the sketch below).

This will compile the Ollama binary for your Jetson and save it to your current directory. Remove the old Ollama binary and replace it with the new one.
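A minimal sketch of those steps, assuming a stock JetPack image with CUDA under /usr/local/cuda and the Go-based build flow the repo used at the time; the package names and install paths are assumptions and may differ on your device:

```bash
# Assumed CUDA locations for a stock JetPack install; adjust as needed.
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

# Build prerequisites (package names are assumptions for an Ubuntu-based JetPack).
sudo apt-get update
sudo apt-get install -y git gcc g++ cmake golang

# Clone the repo and build the binary.
git clone https://github.com/ollama/ollama.git
cd ollama
go generate ./...
go build .

# Replace the old binary; the install path is an assumption.
sudo mv ./ollama /usr/local/bin/ollama
```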
I've adjusted the behavior of the system with the upcoming 0.1.32 release so that we'll load the CUDA library from the LD_LIBRARY_PATH before our bundled version, which should help mitigate this. As long as you include the CUDA lib dir in your LD_LIBRARY_PATH for the ollama server, it should work. Ultimately I'd still like to get an older-glibc-based build setup defined that has a CUDA library that works on Jetson, so I'll keep this issue open for now.
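For example, if ollama runs as a systemd service, the CUDA lib dir can be passed through a drop-in override; the /usr/local/cuda/lib64 path below is an assumption for a stock JetPack install:

```bash
# Hypothetical drop-in override for the ollama systemd unit.
sudo systemctl edit ollama.service
# In the editor, add:
#   [Service]
#   Environment="LD_LIBRARY_PATH=/usr/local/cuda/lib64"
sudo systemctl daemon-reload
sudo systemctl restart ollama
```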
Hi, thanks again for all your work. I'm trying to compile the new version and keep hitting the same error. I also tried installing the bundled version directly, including setting LD_LIBRARY_PATH; it runs, but it does not load the models.
@CesarCalvoCobo are you setting |
@CesarCalvoCobo Okay, my PR got merged, so you should be able to just pull the latest ollama repo and run the compile again.
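Something like the following should pick up the merged change, assuming a local clone built with the standard Go flow:

```bash
cd ollama
git pull
go generate ./...
go build .
```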
Thank you so much @remy415 - I compiled it successfully now
@dhiltgen yea everything is working well as of a couple weeks ago |
Sounds like we can close this as resolved. Please speak up if you have any lingering issues on Jetsons. |
What is the issue?
Hello everyone, thank you for your work.
I'm using a Jetson Nano Orin. Following #3098, a few days ago I did a `git checkout` of the #2279 commit and installed that version on my device. It works. Today I tried to run:
```
ollama list
ollama pull <model>
OLLAMA_DEBUG="1" ollama run <model>
```
Output:
I also attach the output of `journalctl -u ollama`:

What did you expect to see?
I expected to be able to run the model; as it is, I can't use the model.
Steps to reproduce
No response
Are there any recent changes that introduced the issue?
No response
OS
Linux
Architecture
Other
Platform
No response
Ollama version
v0.1.30
GPU
Nvidia
GPU info
No response
CPU
No response
Other software
No response