CUDA error: out of memory - other VRAM consumers not detected in available memory #3765
Comments
Hey there, I seem to be encountering a similar issue with the command-r model. Every time I invoke it, I see a loading process followed by this error message:
Additionally, I've noticed the following log entries:
It's worth noting that I can successfully run command-r-plus, which is three times larger, without any issues. My system runs Linux. Furthermore, here's the nvcc output:
I had this issue last week when loading some extra-large models. It seems that when ollama offloads layers to the GPU, it does not consider the VRAM needed for other things like the KV cache, or is there a bug that makes it calculate incorrectly?
You need more VRAM. The way I solve this is to manually change the Modelfile and offload fewer layers to the GPU (to free up some VRAM).
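For anyone following along, a minimal sketch of this workaround, assuming a Modelfile-based setup (the model name and layer count below are placeholders, not values from this thread):

```
# Hypothetical Modelfile: offload fewer layers to the GPU to leave
# VRAM headroom for the KV cache and other applications.
FROM llama3:70b
PARAMETER num_gpu 40
```

Then rebuild and run it with something like ollama create llama3-limited -f Modelfile followed by ollama run llama3-limited.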
Thanks a lot for the response and the workaround. I'll definitely give it a shot.
You can probably reduce n_batch to 256 or smaller to get rid of this OOM error.
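Note that n_batch is the llama.cpp name; in an ollama Modelfile the corresponding parameter appears to be num_batch. A minimal sketch (the model name is a placeholder):

```
# Hypothetical Modelfile reducing the batch size to lower peak VRAM use.
FROM command-r
PARAMETER num_batch 256
```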
I edited the Modelfile and tried different settings for
All my problems seem to disappear when I limit the amount of VRAM available to ollama. My 7900 XT has 20 GB, so I have now limited it to 18 GB using
Now all models work. Also, my graphics card previously started to stutter in the KDE UI when a large model was loaded; that's gone now as well! I'm a happy user again :) => I think the max VRAM should be limited a bit more by default
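The exact command used to set the limit isn't shown above. One common way to pass an environment variable to the ollama systemd service on Linux is a systemd override; the variable name below (OLLAMA_MAX_VRAM) and its unit are assumptions, not confirmed by this thread:

```
# Open an override file for the ollama service
sudo systemctl edit ollama.service
```

```
# In the override, assuming a byte-count variable (18 GiB shown):
[Service]
Environment="OLLAMA_MAX_VRAM=19327352832"
```

Then run sudo systemctl daemon-reload and sudo systemctl restart ollama to apply it.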
Can confirm, this is the only solution I've gotten to work.
We've made various fixes/improvements in the memory prediction algorithm recently. Please give the latest RC of 0.1.33 a try and let us know how it goes.
I've removed my previous setup and installed the RC via curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION="0.1.33-rc7" sh. Unfortunately there's not much difference for me.
@martinus can you share your server log so we can see where the calculations went wrong?
Sure, I ran it 3 times. The first run gave me an error quite early and the CLI just hung; the other 2 runs looked the same and gave me this error in the CLI:
Output gathered from
Would it be possible to try again on 0.1.34? A few improvements went in for memory allocation, especially with larger models such as Llama 3 70b. Let me know if that doesn't fix it and we can re-open the issue!
I gave it a try and get the same crashes. But I discovered something: I usually have the Steam client running in the background, and when I close it, the crashes stop. I did not look at ollama's code, but the memory allocation logs look exactly the same whether Steam is running or not:
So it seems to me that ollama doesn't take into account other applications that might already be using GPU memory? I stopped ollama and had a look at the GPU's memory usage with nvtop:
EDIT: I see these logs:
which looks like the used-memory detection does not work; both total and available show the same number I set.
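For a quick snapshot of AMD VRAM usage outside of nvtop (useful for comparing against what ollama reports), rocm-smi can print the memory counters; this is just an illustrative check, not a command from this thread:

```
# Show total/used VRAM as seen by ROCm while other apps (e.g. Steam) are running
rocm-smi --showmeminfo vram
```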
We're calculating available VRAM based on the linux kernel reporting from the amd driver. From what you describe, it sounds like the upstream kernel driver may not report this accurately compared to the latest amd specific driver. Is it possible for you to try to upgrade to the amd specific driver? https://github.com/ollama/ollama/blob/main/docs/linux.md#amd-radeon-gpu-support (if this turns out to be the root cause, we'll need to update our docs to warn users that running other GPU apps can cause problems with the upstream driver)
I have Fedora 40, and as far as I know I have all ROCm libraries installed. I can run
I ran
And these seem to contain the correct numbers.
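The exact paths checked above are truncated in the thread, so as an assumption, here is how the amdgpu kernel driver exposes per-device VRAM counters via sysfs (the card index may differ on your system):

```
# VRAM total and current usage reported by the amdgpu driver
cat /sys/class/drm/card0/device/mem_info_vram_total
cat /sys/class/drm/card0/device/mem_info_vram_used
```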
Thanks @martinus! It does look like that path may prove more reliable than the one we're using now. I'll need to restructure a bit of the amd discovery code, but I'll try to get a fix up in the next release.
Hi! It's the same for NVIDIA graphics cards on Windows. I can confirm on both an RTX 4090 laptop and an RTX 3060 laptop that other VRAM consumers are not detected at all, so the reported available VRAM is higher than it should be. If other consumers' usage is high, ollama_llama_server.exe will freeze. A temporary workaround is setting num_gpu to a lower number of layers.
@JemsonDev I'll be closing this issue out with the amd linux fix. I believe you're probably hitting #4599, which is a different issue related to how Windows does VRAM paging.
What is the issue?
When I try the llama3 model I get out of memory errors. I have 64GB of RAM and 24GB on the GPU.
journalctl -u ollama.service -f
shows the errors. Sometimes I get past this, then it fails a few lines later. It then shows a stacktrace, if that helps:
OS
Linux
GPU
AMD
CPU
AMD
Ollama version
0.1.32