
Welcome to the ollama-for-amd wiki!

This wiki aims to extend support for AMD GPUs that official Ollama doesn't currently cover due to limitations in official ROCm on Windows. The goal is to remove these GPU limitations and add support for more AMD graphics card models. Remember, this help is provided voluntarily by the community.

(For those who prefer another language, right-click and use the auto-translate feature in your browser or device. Most platforms have this built in!)

Need Help?

If you run into any issues, please share your server logs in the issues section. This helps us understand the problem and provide better assistance. Thanks for being part of this community effort! You can also seek help in the Ollama issues section. If you suspect the issue originates from the Ollama program itself, please hold off on further troubleshooting for now; the Ollama team is working on updates to address the problem. Keep an eye out for a new release!

Tip: Compilation Not Required? Try the Demo!

If you're not comfortable compiling or building the software yourself, no problem! You can skip the development steps and jump straight to using the Demo Release Version.

To use the demo, simply replace the rocblas files described in the ROCm Support section with those provided in the demo release.

Development Guide

Follow the guide on development.md for Windows development setup.

  • Important: Make sure you have Strawberry Perl installed. If you encounter a build error like "ninja: build stopped: subcommand failed", try moving the Strawberry Perl environment path before the MinGW-w64 path. (Thanks to ByronLeeeee for this tip.)

Downloading & Editing

  1. Clone the Ollama Repository:
    git clone https://github.com/ollama/ollama.git

or simply clone this repo.
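
For example, to clone this repo instead (assuming this wiki's repository URL):

    git clone https://github.com/likelovewant/ollama-for-amd.git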

  2. Edit the gen_windows.ps1 File:

    "gfx906:xnack-" "gfx1030" "gfx1100""gfx1101" "gfx1102"
    
    • Add your desired GPU models to this list, as sketched below. Example additional GPUs for this repo:

"gfx803","gfx900", "gfx902","gfx908:xnack-", "gfx90a:xnack+", "gfx90a:xnack-", "gfx940", "gfx941", "gfx942", "gfx90c:xnack-","gfx1010:xnack-", "gfx1011", "gfx1012:xnack-", "gfx1031", "gfx1032", "gfx1034", "gfx1035", "gfx1036", "gfx1103"

ROCm Support

  • Ensure you have a compatible HIP SDK installed. Install HIP SDK 5.7 from hip-sdk.
  • Important: Currently, HIP SDK 6.1.2 is not supported. Support for this version will be added if there are significant speed benefits.
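
A quick sanity check that the SDK landed where the steps below expect it. This assumes the default install location; the installer normally sets HIP_PATH, but verify rather than rely on it:

    # Confirm the default HIP SDK 5.7 location exists
    Test-Path 'C:\Program Files\AMD\ROCm\5.7\bin'   # should print True
    # The installer typically points this variable at the ROCm folder
    $env:HIP_PATH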

Official AMD ROCm Supported GPUs: You can find a complete list of officially supported GPUs here: rocm.docs.amd.com

For supported GPUs, please use the official ROCm release from AMD and the official Ollama build. You don't need to build anything unless you want to learn how. (Examples: gfx906, gfx1030, gfx1100, gfx1101, gfx1102)

If your GPU is not on the list:

  • Try pre-built ROCm libraries:
    • Some pre-built rocblas libraries are available for specific GPUs at: ROCmLibs for HIP SDK 5.7
    • Choose the appropriate library based on your GPU model (e.g., gfx902, gfx1032).
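
If you aren't sure which gfx model your card is, one way to start is to list the GPU name Windows reports and then map it to a gfx code using the AMD docs linked above (this sketch only prints the marketing name, not the gfx code itself):

    # Print the display adapter names Windows knows about
    Get-CimInstance Win32_VideoController | Select-Object -ExpandProperty Name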

Building ROCm libraries:

Installing and Replacing Libraries:

  1. Install HIP SDK: This will create a C:\Program Files\AMD\ROCm\5.7\bin folder.
  2. Place rocblas.dll in the C:\Program Files\AMD\ROCm\5.7\bin folder. Replace any existing rocblas.dll.
  3. Replace the library files within rocblas\library: this ensures the runtime uses the correct library files for your GPU.
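
A minimal PowerShell sketch of steps 2 and 3, assuming the downloaded files were extracted to a .\ROCmLibs folder (an assumed name) and that you run from an elevated prompt, since Program Files is write-protected:

    # Step 2: overwrite the stock rocblas.dll with the one for your GPU
    Copy-Item '.\ROCmLibs\rocblas.dll' 'C:\Program Files\AMD\ROCm\5.7\bin\rocblas.dll' -Force
    # Step 3: replace the files in rocblas\library
    Copy-Item '.\ROCmLibs\library\*' 'C:\Program Files\AMD\ROCm\5.7\bin\rocblas\library\' -Recurse -Force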

Community Support:

  • If you encounter issues or need assistance with rocblas and library files, please leave a message on the project's issues page for community support.

Building the ollama.exe:

$env:CGO_ENABLED="1"   # cgo must be enabled for the GPU runners
go generate ./...      # runs gen_windows.ps1 and builds the llama.cpp runners (the slow step)
go build .             # produces ollama.exe

After the build completes, you can start the server from your Ollama directory:

./ollama serve

Finally, in a separate shell, run a model (e.g.):

./ollama run llama3

Building the Installer:

  • Install Inno Setup: To build an installer for Ollama, you'll need to install Inno Setup: https://jrsoftware.org/isinfo.php.
  • Run PowerShell Script: In the top directory of the Ollama repository, run this PowerShell script:
    $env:CGO_ENABLED="1"
    powershell -ExecutionPolicy Bypass -File .\scripts\build_windows.ps1
    
  • The Installer: After the build is complete, you'll find the OllamaSetup.exe installer in the dist folder. This will work exactly like the official release.

Demo Release Version:

  • Test Drive Ollama: If you want to test before building from source, download and install a demo release from ollama-for-amd/releases.
  • Replace Files: As instructed in the ROCm Support section above, replace the rocblas.dll and library folder in your Ollama program folder (e.g., the files in C:\Users\usrname\AppData\Local\Programs\Ollama\rocm) with the pre-built ones from ROCmLibs for HIP SDK 5.7 that match your GPU architecture (e.g., gfx902, gfx1032). Note that the demo release is not updated regularly; it serves as an example only.
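
The same replacement as a PowerShell sketch, assuming the default per-user Ollama install and an extracted .\ROCmLibs folder. Both paths are assumptions, and the layout inside the rocm folder may vary by release, so mirror whatever structure you find there:

    # Overwrite the rocblas files shipped with the demo release
    Copy-Item '.\ROCmLibs\rocblas.dll' "$env:LOCALAPPDATA\Programs\Ollama\rocm\rocblas.dll" -Force
    Copy-Item '.\ROCmLibs\library\*' "$env:LOCALAPPDATA\Programs\Ollama\rocm\rocblas\library\" -Recurse -Force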

Important Notes:

gfx1103 (AMD 780M) is natively supported by this repository, so no file replacements are needed.

The gfx803 HIP memory error is fixed by this library: rocm.gfx803.optic.test.version.7z

gfx90c:xnack-, gfx1010:xnack-, gfx1012:xnack- and Similar Architectures:

  • Start by trying the pre-built rocblas and library files for your GPU.

  • If you encounter issues, try setting the environment variable HSA_OVERRIDE_GFX_VERSION=9.0.12 (for gfx90c) or HSA_OVERRIDE_GFX_VERSION=10.1.2 (for gfx1012); see the sketch at the end of this page. Refer to the Windows documentation for guidance on setting environment variables: https://www.computerhope.com/issues/ch000549.htm.

  • Running Your Models: After setting up ROCm and Ollama, start the server from C:\Users\usrname\AppData\Local\Programs\Ollama\:

./ollama serve

Then, in a separate shell, run a model with ./ollama run [model name], replacing "[model name]" with the actual name of your model, for example:

./ollama run llama3
  • Update Carefully: If you are using a demo release, DO NOT click the "Update" button if Ollama prompts you. Download updates manually from this release page: ollama-for-amd/releases
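
As referenced above for gfx90c and gfx1012, a minimal sketch of setting the override before launching the server (set it in the same shell that runs ollama serve; the value shown is for gfx90c):

    # Apply the override for this shell session only, then start the server
    $env:HSA_OVERRIDE_GFX_VERSION = "9.0.12"   # use "10.1.2" for gfx1012
    ./ollama serve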