
CUDA error: out of memory - other VRAM consumers not detected in available memory #3765

Open
martinus opened this issue Apr 19, 2024 · 18 comments · May be fixed by #4441
Assignees: dhiltgen
Labels: amd (Issues relating to AMD GPUs and ROCm), bug (Something isn't working), linux

Comments

@martinus

What is the issue?

When I try the llama3 model, I get out-of-memory errors. I have 64 GB of RAM and 24 GB of VRAM on the GPU.

❯ ollama run llama3:70b-instruct-q2_K  --verbose "write a constexpr GCD that is not recursive in C++17"
Error: an unknown error was encountered while running the model CUDA error: out of memory
  current device: 0, in function alloc at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:233
  hipMalloc((void **) &ptr, look_ahead_size)
GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:60: !"CUDA error"

journalctl -u ollama.service -f shows

Apr 19 22:43:30 box ollama[641298]: llama_new_context_with_model: n_ctx      = 2048
Apr 19 22:43:30 box ollama[641298]: llama_new_context_with_model: n_batch    = 512
Apr 19 22:43:30 box ollama[641298]: llama_new_context_with_model: n_ubatch   = 512
Apr 19 22:43:30 box ollama[641298]: llama_new_context_with_model: freq_base  = 500000.0
Apr 19 22:43:30 box ollama[641298]: llama_new_context_with_model: freq_scale = 1
Apr 19 22:43:30 box ollama[641298]: llama_kv_cache_init:      ROCm0 KV buffer size =   488.00 MiB
Apr 19 22:43:30 box ollama[641298]: llama_kv_cache_init:  ROCm_Host KV buffer size =   152.00 MiB
Apr 19 22:43:30 box ollama[641298]: llama_new_context_with_model: KV self size  =  640.00 MiB, K (f16):  320.00 MiB, V (f16):  320.00 MiB
Apr 19 22:43:30 box ollama[641298]: llama_new_context_with_model:  ROCm_Host  output buffer size =     0.52 MiB
Apr 19 22:43:30 box ollama[641298]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 1088.45 MiB on device 0: cudaMalloc failed: out of memory
Apr 19 22:43:30 box ollama[641298]: ggml_gallocr_reserve_n: failed to allocate ROCm0 buffer of size 1141325824
Apr 19 22:43:30 box ollama[641298]: llama_new_context_with_model: failed to allocate compute buffers

Sometimes I get past this, and then it fails a few lines later. It then shows a stack trace, if that helps:

Apr 19 22:44:49 box ollama[691627]: 0x00007f17e17d9fa3 in wait4 () from /lib64/libc.so.6
Apr 19 22:44:49 box ollama[691627]: #0  0x00007f17e17d9fa3 in wait4 () from /lib64/libc.so.6
Apr 19 22:44:49 box ollama[691627]: #1  0x00000000024e8084 in ggml_cuda_error(char const*, char const*, char const*, int, char const*) ()
Apr 19 22:44:49 box ollama[691627]: #2  0x00000000024fc062 in ggml_cuda_pool_leg::alloc(unsigned long, unsigned long*) ()
Apr 19 22:44:49 box ollama[691627]: #3  0x00000000024fc790 in ggml_cuda_pool_alloc<__half>::alloc(unsigned long) ()
Apr 19 22:44:49 box ollama[691627]: #4  0x00000000024f2ccf in ggml_cuda_mul_mat(ggml_backend_cuda_context&, ggml_tensor const*, ggml_tensor const*, ggml_tensor*) ()
Apr 19 22:44:49 box ollama[691627]: #5  0x00000000024ebae3 in ggml_backend_cuda_graph_compute(ggml_backend*, ggml_cgraph*) ()
Apr 19 22:44:49 box ollama[691627]: #6  0x00000000024b3888 in ggml_backend_sched_graph_compute_async ()
Apr 19 22:44:49 box ollama[691627]: #7  0x00000000023d2819 in llama_decode ()
Apr 19 22:44:49 box ollama[691627]: #8  0x00000000022df081 in llama_server_context::update_slots() ()
Apr 19 22:44:49 box ollama[691627]: #9  0x00000000022e10ba in llama_server_queue::start_loop() ()
Apr 19 22:44:49 box ollama[691627]: #10 0x00000000022c4e02 in main ()
Apr 19 22:44:49 box ollama[691627]: [Inferior 1 (process 691260) detached]

OS

Linux

GPU

AMD

CPU

AMD

Ollama version

0.1.32

martinus added the bug (Something isn't working) label on Apr 19, 2024
@sathamneh

Hey there,

I seem to be encountering a similar issue with the command-r model. Every time I run it, it goes through the loading process and then fails with the following error message:

ollama run command-r
Error: llama runner process no longer running: 1 error:failed to create context with model '/usr/share/ollama/.ollama/models/blobs/sha256-8a9611e7bca168be635d39d21927d2b8e7e8ea0b5d0998b7d5980daf1f8d4205'

Additionally, I've noticed the following log entries:

Apr 21 23:57:39 132-145-209-225 ollama[26073]: .......................................................................................
Apr 21 23:57:39 132-145-209-225 ollama[26073]: llama_new_context_with_model: n_ctx      = 2048
Apr 21 23:57:39 132-145-209-225 ollama[26073]: llama_new_context_with_model: n_batch    = 512
Apr 21 23:57:39 132-145-209-225 ollama[26073]: llama_new_context_with_model: n_ubatch   = 512
Apr 21 23:57:39 132-145-209-225 ollama[26073]: llama_new_context_with_model: freq_base  = 8000000.0
Apr 21 23:57:39 132-145-209-225 ollama[26073]: llama_new_context_with_model: freq_scale = 1
Apr 21 23:57:39 132-145-209-225 ollama[26073]: llama_kv_cache_init:      CUDA0 KV buffer size =  2560.00 MiB
Apr 21 23:57:39 132-145-209-225 ollama[26073]: llama_new_context_with_model: KV self size  = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
Apr 21 23:57:39 132-145-209-225 ollama[26073]: llama_new_context_with_model:  CUDA_Host  output buffer size =     1.01 MiB
Apr 21 23:57:39 132-145-209-225 ollama[26073]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 516.00 MiB on device 0: cudaMalloc failed: out of memory
Apr 21 23:57:39 132-145-209-225 ollama[26073]: ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 541065216
Apr 21 23:57:39 132-145-209-225 ollama[26073]: llama_new_context_with_model: failed to allocate compute buffers

It's worth noting that I can successfully run command-r-plus, which is three times larger, without any issues.

My system specs are as follows:

Linux OS
NVIDIA A10 GPU
192GB of RAM

Furthermore, here's the nvcc output:

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Aug_15_22:02:13_PDT_2023
Cuda compilation tools, release 12.2, V12.2.140
Build cuda_12.2.r12.2/compiler.33191640_0

@ChihayaK

I had this issue last week when loading some extra-large models. It seems that when Ollama offloads layers to the GPU, it either doesn't account for the VRAM needed by other things like the KV cache, or there is a bug that makes it calculate the requirement incorrectly.
As the errors in the logs show:

Apr 19 22:43:30 box ollama[641298]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 1088.45 MiB on device 0: cudaMalloc failed: out of memory

Apr 21 23:57:39 132-145-209-225 ollama[26073]: ggml_backend_cuda_buffer_type_alloc_buffer: allocating 516.00 MiB on device 0: cudaMalloc failed: out of memory

You need more VRAM. The way I work around this is to manually change the model file and offload fewer layers to the GPU, to free up some VRAM.
To change the model file, see the issue here.
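
For example, a minimal Modelfile along these lines (the value 30 is purely illustrative and has to be tuned per model and GPU; num_gpu is the Modelfile parameter that sets how many layers get offloaded):

FROM llama3:70b-instruct-q2_K
# fewer offloaded layers leaves more VRAM for the KV cache and compute buffers
PARAMETER num_gpu 30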

@sathamneh

Thanks a lot for the response and the workaround solution. I'll definitely give it a shot.

@shuson commented Apr 24, 2024

You can probably reduce n_batch to 256 or smaller to get rid of this OOM error.

@martinus (Author) commented Apr 26, 2024

> You can probably reduce n_batch to 256 or smaller to get rid of this OOM error.

I edited the Modelfile and tried different settings for num_batch; PARAMETER num_batch 16 seems to work. Anything above that crashes.
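
For reference, the usual way to apply such an edit looks roughly like this (a sketch; the derived model name is made up):

ollama show llama3:70b-instruct-q2_K --modelfile > Modelfile
# edit Modelfile and add:  PARAMETER num_batch 16
ollama create llama3-nb16 -f Modelfile
ollama run llama3-nb16 --verbose "write a constexpr GCD that is not recursive in C++17"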

@martinus (Author) commented Apr 26, 2024

All my problems seem to disappear when I limit the amount of VRAM for ollama. My 7900 XT has 20 GB, so I have now limited it to 18 GiB (19327352832 bytes) using sudo systemctl edit ollama.service:

[Service]
Environment="OLLAMA_MAX_VRAM=19327352832"

Now all models work. Also, the stuttering my graphics card previously caused in the KDE UI when a large model was loaded is now gone as well! I'm a happy user again :)

=> I think the max VRAM should be limited a bit more by default
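
One detail assumed here rather than stated above: the override only takes effect once the service has been restarted, e.g.:

sudo systemctl restart ollama
systemctl show ollama --property=Environment   # confirm the override is active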

@erasmus74

> All my problems seem to disappear when I limit the amount of VRAM for ollama. My 7900 XT has 20 GB, so I have now limited it to 18 GiB (19327352832 bytes) using sudo systemctl edit ollama.service:
>
> [Service]
> Environment="OLLAMA_MAX_VRAM=19327352832"
>
> Now all models work. Also, the stuttering my graphics card previously caused in the KDE UI when a large model was loaded is now gone as well! I'm a happy user again :)
>
> => I think the max VRAM should be limited a bit more by default

Can confirm, this is the only solution I've gotten to work.

@dhiltgen (Collaborator) commented May 2, 2024

We've made various fixes/improvements in the memory prediction algorithm recently. Please give the latest RC of 0.1.33 a try and let us know how it goes.

https://github.com/ollama/ollama/releases

@martinus (Author) commented May 2, 2024

I removed my OLLAMA_MAX_VRAM setting and downloaded the latest RC with:

curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION="0.1.33-rc7" sh

Unfortunately there's not much difference for me. llama3:70b-instruct crashes with a CUDA OOM, and e.g. nous-hermes2-mixtral works but uses so much memory that the desktop environment becomes unusable (even when ollama is idle; the only way to get rid of the stuttering is to unload the model).

@dhiltgen (Collaborator) commented May 2, 2024

@martinus can you share your server log so we can see where the calculations went wrong?

@martinus (Author) commented May 2, 2024

Sure. I ran it 3 times: the first run gave me an error quite early and the CLI just hung; the other 2 runs looked the same and gave me this error in the CLI:

❯ ollama run llama3:70b-instruct "tell me 10 important rules about software development"
Error: an unknown error was encountered while running the model CUDA error: out of memory
  current device: 0, in function alloc at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:233
  hipMalloc((void **) &ptr, look_ahead_size)
GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:60: !"CUDA error"

Output gathered from sudo journalctl -u ollama.service:

@jmorganca (Member)

Would it be possible to try again on 0.1.34? A few improvements went in for memory allocation, especially with larger models such as Llama 3 70b. Let me know if that doesn't fix it and we can re-open the issue!

@martinus (Author) commented May 10, 2024

I gave it a try and got the same crashes. But I discovered something: I usually have the Steam client running in the background, and when I close it, ollama run llama3:70b-instruct consistently works (other big models still fail, though). With Steam running, ollama consistently crashes with an allocation error. So it seems to me that Steam has already allocated some GPU memory and ollama isn't aware of that.

I did not look at ollama's code, but whether Steam is running or not, the logs concerning memory allocation look exactly the same:

May 10 07:52:21 box ollama[7395]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   no
May 10 07:52:21 box ollama[7395]: ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
May 10 07:52:21 box ollama[7395]: ggml_cuda_init: found 1 ROCm devices:
May 10 07:52:21 box ollama[7395]:   Device 0: AMD Radeon Graphics, compute capability 11.0, VMM: no
May 10 07:52:21 box ollama[7395]: llm_load_tensors: ggml ctx size =    0.74 MiB
May 10 07:52:31 box ollama[7395]: llm_load_tensors: offloading 39 repeating layers to GPU
May 10 07:52:31 box ollama[7395]: llm_load_tensors: offloaded 39/81 layers to GPU
May 10 07:52:31 box ollama[7395]: llm_load_tensors:      ROCm0 buffer size = 17903.44 MiB
May 10 07:52:31 box ollama[7395]: llm_load_tensors:        CPU buffer size = 38110.61 MiB
May 10 07:52:32 box ollama[7395]: ..................................................................................................
May 10 07:52:32 box ollama[7395]: llama_new_context_with_model: n_ctx      = 2048
May 10 07:52:32 box ollama[7395]: llama_new_context_with_model: n_batch    = 512
May 10 07:52:32 box ollama[7395]: llama_new_context_with_model: n_ubatch   = 512
May 10 07:52:32 box ollama[7395]: llama_new_context_with_model: freq_base  = 500000.0
May 10 07:52:32 box ollama[7395]: llama_new_context_with_model: freq_scale = 1
May 10 07:52:32 box ollama[7395]: llama_kv_cache_init:      ROCm0 KV buffer size =   312.00 MiB
May 10 07:52:32 box ollama[7395]: llama_kv_cache_init:  ROCm_Host KV buffer size =   328.00 MiB
May 10 07:52:32 box ollama[7395]: llama_new_context_with_model: KV self size  =  640.00 MiB, K (f16):  320.00 MiB, V (f16):  320.00 MiB
May 10 07:52:32 box ollama[7395]: llama_new_context_with_model:  ROCm_Host  output buffer size =     0.52 MiB
May 10 07:52:32 box ollama[7395]: llama_new_context_with_model:      ROCm0 compute buffer size =  1104.45 MiB
May 10 07:52:32 box ollama[7395]: llama_new_context_with_model:  ROCm_Host compute buffer size =    20.01 MiB
May 10 07:52:32 box ollama[7395]: llama_new_context_with_model: graph nodes  = 2566
May 10 07:52:32 box ollama[7395]: llama_new_context_with_model: graph splits = 455

So it seems to me that ollama doesn't take into account other applications that might already be using GPU memory?

I stopped ollama and had a look at the GPU's memory usage with nvtop:

  • 0.957Gi/19.984Gi without Steam
  • 1.348Gi/19.984Gi with Steam running

EDIT: I see these logs:

May 10 07:48:40 box ollama[7395]: time=2024-05-10T07:48:40.456+02:00 level=INFO source=amd_linux.go:217 msg="amdgpu memory" gpu=0 total="20464.0 MiB"
May 10 07:48:40 box ollama[7395]: time=2024-05-10T07:48:40.456+02:00 level=INFO source=amd_linux.go:218 msg="amdgpu memory" gpu=0 available="20464.0 MiB"

which looks like the used-memory detection does not work: both total and available show the same number.

I set OLLAMA_DEBUG=1 to get more logs in that area, but nothing interesting shows up:

May 10 09:26:36 box ollama[223380]: time=2024-05-10T09:26:36.224+02:00 level=WARN source=amd_linux.go:49 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
May 10 09:26:36 box ollama[223380]: time=2024-05-10T09:26:36.224+02:00 level=DEBUG source=amd_linux.go:78 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/0/properties"
May 10 09:26:36 box ollama[223380]: time=2024-05-10T09:26:36.225+02:00 level=DEBUG source=amd_linux.go:102 msg="detected CPU /sys/class/kfd/kfd/topology/nodes/0/properties"
May 10 09:26:36 box ollama[223380]: time=2024-05-10T09:26:36.225+02:00 level=DEBUG source=amd_linux.go:78 msg="evaluating amdgpu node /sys/class/kfd/kfd/topology/nodes/1/properties"
May 10 09:26:36 box ollama[223380]: time=2024-05-10T09:26:36.225+02:00 level=INFO source=amd_linux.go:217 msg="amdgpu memory" gpu=0 total="20464.0 MiB"
May 10 09:26:36 box ollama[223380]: time=2024-05-10T09:26:36.225+02:00 level=INFO source=amd_linux.go:218 msg="amdgpu memory" gpu=0 available="20464.0 MiB"
May 10 09:26:36 box ollama[223380]: time=2024-05-10T09:26:36.225+02:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /opt/rocm/lib"
May 10 09:26:36 box ollama[223380]: time=2024-05-10T09:26:36.225+02:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/lib64"
May 10 09:26:36 box ollama[223380]: time=2024-05-10T09:26:36.227+02:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/local/bin/rocm"
May 10 09:26:36 box ollama[223380]: time=2024-05-10T09:26:36.227+02:00 level=DEBUG source=amd_common.go:16 msg="evaluating potential rocm lib dir /usr/share/ollama/lib/rocm"
May 10 09:26:36 box ollama[223380]: time=2024-05-10T09:26:36.228+02:00 level=DEBUG source=amd_linux.go:267 msg="rocm supported GPUs" types="[gfx1030 gfx1100 gfx1101 gfx1102 gfx900 gfx906 gfx908 gfx90a gfx940 gfx941 gfx942]"
May 10 09:26:36 box ollama[223380]: time=2024-05-10T09:26:36.228+02:00 level=INFO source=amd_linux.go:276 msg="amdgpu is supported" gpu=0 gpu_type=gfx1100

dhiltgen self-assigned this and unassigned mxyng on May 10, 2024
dhiltgen added the amd (Issues relating to AMD GPUs and ROCm) label on May 10, 2024
dhiltgen changed the title from "CUDA error: out of memory" to "CUDA error: out of memory - other VRAM consumers not detected in available memory" on May 10, 2024
dhiltgen added the linux label on May 10, 2024
@dhiltgen (Collaborator) commented May 10, 2024

We're calculating available VRAM based on what the Linux kernel's AMD driver reports. From what you describe, it sounds like the upstream kernel driver may not report this as accurately as the latest AMD-specific driver.

Is it possible for you to try upgrading to the AMD-specific driver?

https://github.com/ollama/ollama/blob/main/docs/linux.md#amd-radeon-gpu-support

(if this turns out to be the root cause, we'll need to update our docs to warn users that running other GPU apps can cause problems with the upstream driver)
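
(A quick way to check which amdgpu driver is loaded, based on the version file that the warning in the log above refers to; the log reports it as missing with the upstream in-kernel driver:)

cat /sys/module/amdgpu/version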

dhiltgen reopened this on May 10, 2024
@martinus (Author) commented May 11, 2024

I'm on Fedora 40 and, as far as I know, have all ROCm libraries installed. I can run rocm-smi --showmeminfo vram, which shows the total and used VRAM:

============================ ROCm System Management Interface ============================
================================== Memory Usage (Bytes) ==================================
GPU[0]          : VRAM Total Memory (B): 21458059264
GPU[0]          : VRAM Total Used Memory (B): 1924907008
==========================================================================================
================================== End of ROCm SMI Log ===================================

I ran strace to see what the command is actually reading, and it reads these two files:

  • /sys/class/drm/card1/device/mem_info_vram_used
  • /sys/class/drm/card1/device/mem_info_vram_total

And these seem to contain the correct numbers.

@dhiltgen (Collaborator)

Thanks @martinus! It does look like that path may prove more reliable than /sys/class/kfd/kfd/topology/nodes/*/mem_banks/*/used_memory

I'll need to restructure a bit of the amd discovery code, but I'll try to get a fix up in the next release.
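
A rough sketch of that computation using the two DRM files (the card1 index comes from the strace output above and can differ between systems; this is illustrative only, not the code in the linked PR):

dev=/sys/class/drm/card1/device
total=$(cat "$dev"/mem_info_vram_total)   # bytes
used=$(cat "$dev"/mem_info_vram_used)     # bytes
echo "VRAM available: $(( (total - used) / 1024 / 1024 )) MiB"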

dhiltgen linked a pull request on May 14, 2024 that will close this issue
@JemsonDev

Hi! It's the same for NVIDIA graphics cards on Windows. I can confirm on both an RTX 4090 laptop and an RTX 3060 laptop that other VRAM consumers are not detected at all, so the reported available VRAM is higher than it should be. If the other consumers' usage is high, ollama_llama_server.exe will freeze. A temporary workaround is setting num_gpu to a lower number of layers.

@dhiltgen (Collaborator) commented May 28, 2024

@JemsonDev I'll be closing this issue out with the AMD Linux fix.

I believe you're probably hitting #4599, which is a different issue related to how Windows does VRAM paging.
