Name and Version
Version: llama-b6869-bin-win-vulkan-x64 (also reproducible on other recent builds)
When UMA is configured to dedicate 64 GB of memory to the iGPU, llama.cpp sees the full 64 GB; however, when UMA is set to 96 GB, llama.cpp can only see and use 32 GB.
Operating systems
Windows
Which llama.cpp modules do you know to be affected?
No response
Command line
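From the reproduction steps below:

```
llama-server --list-devices
```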
Problem description & steps to reproduce
On a Ryzen AI Max+ 395 machine running Windows, run 'llama-server --list-devices' and check how much VRAM it reports for the Radeon 8060S.
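Since llama.cpp's Vulkan backend derives its memory figure from what the driver exposes, it can help to check whether the 32 GB cap already appears at the driver level. Below is a minimal diagnostic sketch (not part of llama.cpp; it assumes the Vulkan SDK is installed) that enumerates each Vulkan device and prints its memory heap sizes:

```cpp
// vk_heaps.cpp - print every Vulkan device and its memory heap sizes.
// Build (Linux):   g++ vk_heaps.cpp -lvulkan -o vk_heaps
// Build (Windows): link against vulkan-1.lib from the Vulkan SDK.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    // A default-initialized create info (no layers/extensions) is enough here.
    VkInstanceCreateInfo ici{};
    ici.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) {
        fprintf(stderr, "vkCreateInstance failed\n");
        return 1;
    }

    // Enumerate all physical devices (the Radeon 8060S iGPU should be one).
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> devices(count);
    vkEnumeratePhysicalDevices(instance, &count, devices.data());

    for (VkPhysicalDevice dev : devices) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(dev, &props);
        printf("%s\n", props.deviceName);

        // Print each memory heap; the device-local heap is what typically
        // shows up as "VRAM" for an iGPU under UMA.
        VkPhysicalDeviceMemoryProperties mem;
        vkGetPhysicalDeviceMemoryProperties(dev, &mem);
        for (uint32_t i = 0; i < mem.memoryHeapCount; ++i) {
            const double gib  = mem.memoryHeaps[i].size / (1024.0 * 1024.0 * 1024.0);
            const bool  local = mem.memoryHeaps[i].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT;
            printf("  heap %u: %6.1f GiB%s\n", i, gib, local ? " (device-local)" : "");
        }
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

If this also prints roughly 32 GiB for the device-local heap at the 96 GB UMA setting, the cap is most likely coming from the AMD Vulkan driver's reported heap size rather than from llama.cpp itself.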
First Bad Commit
No response
Relevant log output