diff --git a/docs/docs/concepts/fleets.md b/docs/docs/concepts/fleets.md
index 5b72804d86..52894d2d52 100644
--- a/docs/docs/concepts/fleets.md
+++ b/docs/docs/concepts/fleets.md
@@ -91,9 +91,17 @@ are both acceptable).
 
 !!! info "Requirements"
     Hosts should be pre-installed with Docker.
-    Systems with NVIDIA GPUs should also be pre-installed with CUDA 12.1 and
-    [NVIDIA Container Toolkit :material-arrow-top-right-thin:{ .external }](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
-    The user should have `sudo` access.
+
+    === "NVIDIA"
+        Systems with NVIDIA GPUs should also be pre-installed with CUDA 12.1 and
+        [NVIDIA Container Toolkit :material-arrow-top-right-thin:{ .external }](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
+
+    === "AMD"
+        Systems with AMD GPUs should also be pre-installed with the AMDGPU-DKMS kernel driver, e.g. via
+        the [native package manager :material-arrow-top-right-thin:{ .external }](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/native-install/index.html)
+        or the [AMDGPU installer :material-arrow-top-right-thin:{ .external }](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/amdgpu-install.html).
+
+    The user should have passwordless `sudo` access.
 
 ??? info "Environment variables"
     For SSH fleets, it's possible to pre-configure environment variables.
diff --git a/examples/accelerators/amd/README.md b/examples/accelerators/amd/README.md
index 5eb24f41e9..874852e726 100644
--- a/examples/accelerators/amd/README.md
+++ b/examples/accelerators/amd/README.md
@@ -1,9 +1,7 @@
 # AMD
 
-Since [0.18.11 :material-arrow-top-right-thin:{ .external }](https://github.com/dstackai/dstack/releases/0.18.11rc1){:target="_blank"},
-you can specify an AMD GPU under `resources`. Below are a few examples.
-
-> AMD accelerators are currently supported only with the [`runpod`](https://dstack.ai/docs/reference/server/config.yml/#runpod) backend.
+If you're using the `runpod` backend or have set up an [SSH fleet](https://dstack.ai/docs/concepts/fleets#ssh-fleets)
+with on-prem AMD GPUs, you can specify an AMD GPU under `resources`. Below are a few examples.
 
 ## Deployment
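For reviewers, the SSH fleet referenced by the docs change above is declared with a configuration along these lines. This is a minimal sketch based on the fleets documentation, not part of this patch; the fleet name, user, key path, and host address are placeholders:

```yaml
type: fleet
name: my-amd-fleet  # hypothetical fleet name

# Each host must satisfy the requirements in the patched section:
# Docker, the AMDGPU-DKMS driver, and passwordless sudo for the SSH user.
ssh_config:
  user: ubuntu                 # placeholder SSH user
  identity_file: ~/.ssh/id_rsa # placeholder private key path
  hosts:
    - 192.168.100.17           # placeholder on-prem host address
```

Applying such a file with `dstack apply` would provision the listed hosts as a fleet, after which runs can target the on-prem AMD GPUs the README change describes.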