Add WSL CUDA to GPU section (#105)
chand1012 committed Dec 16, 2023
1 parent 47d8a8f commit 658b18a
Showing 1 changed file with 4 additions and 1 deletion.
README.md
@@ -238,7 +238,10 @@ On Apple Silicon, everything should just work if Xcode is installed.

On Windows, GPU should just work so long as (1) you're using our release
binaries, and (2) you pass the `-ngl 35` flag. You also need an NVIDIA
graphics card that supports CUDA. There's no support yet for AMD GPUs.
You can also use CUDA via WSL by enabling [Nvidia CUDA on WSL](https://learn.microsoft.com/en-us/windows/ai/directml/gpu-cuda-in-wsl)
and running your llamafiles inside WSL. This also lets you run
llamafiles larger than 4GB on Windows.
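Inside a WSL shell, launching a llamafile then looks the same as on Linux. A minimal sketch, assuming a llamafile has already been downloaded (the filename below is hypothetical):

```shell
# Sketch: launching a llamafile with GPU offload from a WSL shell.
# The filename is illustrative; substitute the llamafile you downloaded.
model=./mymodel.llamafile
if [ -x "$model" ]; then
  "$model" -ngl 35   # -ngl 35 offloads layers to the GPU via CUDA
else
  echo "llamafile not found: $model"
fi
```

The `-ngl 35` flag is the same one used for native Windows and Linux GPU support, so no WSL-specific configuration is needed beyond enabling CUDA in WSL itself.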

On Linux, Nvidia cuBLAS GPU support will be compiled on the fly if (1)
you have the `cc` compiler installed, (2) you pass the `-ngl 35` flag to
