14 changes: 11 additions & 3 deletions docs/docs/concepts/fleets.md
```diff
@@ -91,9 +91,17 @@ are both acceptable).
 
 !!! info "Requirements"
     Hosts should be pre-installed with Docker.
-    Systems with NVIDIA GPUs should also be pre-installed with CUDA 12.1 and
-    [NVIDIA Container Toolkit :material-arrow-top-right-thin:{ .external }](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
-    The user should have `sudo` access.
+
+    === "NVIDIA"
+        Systems with NVIDIA GPUs should also be pre-installed with CUDA 12.1 and
+        [NVIDIA Container Toolkit :material-arrow-top-right-thin:{ .external }](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
+
+    === "AMD"
+        Systems with AMD GPUs should also be pre-installed with the AMDGPU-DKMS kernel driver (e.g. via the
+        [native package manager :material-arrow-top-right-thin:{ .external }](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/native-install/index.html)
+        or the [AMDGPU installer :material-arrow-top-right-thin:{ .external }](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/amdgpu-install.html)).
+
+    The user should have passwordless `sudo` access.
 
 ??? info "Environment variables"
     For SSH fleets, it's possible to pre-configure environment variables.
```
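The requirements in this hunk apply to hosts the user brings themselves; the fleet itself is declared in a small YAML file. As a minimal sketch of such a configuration (the fleet name, user, key path, and host addresses are hypothetical; the `ssh_config` field names follow dstack's SSH fleet documentation and may differ across versions):

```yaml
type: fleet
name: on-prem-fleet  # hypothetical fleet name

# Each host must already have Docker installed (plus CUDA 12.1 and the
# NVIDIA Container Toolkit, or the AMDGPU-DKMS driver, for GPU machines),
# and the SSH user must have passwordless sudo.
ssh_config:
  user: ubuntu                  # hypothetical user with passwordless sudo
  identity_file: ~/.ssh/id_rsa  # hypothetical private key path
  hosts:
    - 192.168.100.1
    - 192.168.100.2
```

Applying a file like this with `dstack apply` would register the hosts as an SSH fleet, assuming the schema above matches the installed dstack version.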
6 changes: 2 additions & 4 deletions examples/accelerators/amd/README.md
```diff
@@ -1,9 +1,7 @@
 # AMD
 
-Since [0.18.11 :material-arrow-top-right-thin:{ .external }](https://github.com/dstackai/dstack/releases/0.18.11rc1){:target="_blank"},
-you can specify an AMD GPU under `resources`. Below are a few examples.
-
-> AMD accelerators are currently supported only with the [`runpod`](https://dstack.ai/docs/reference/server/config.yml/#runpod) backend.
+If you're using the [`runpod`](https://dstack.ai/docs/reference/server/config.yml/#runpod) backend or have set up an [SSH fleet](https://dstack.ai/docs/concepts/fleets#ssh-fleets)
+with on-prem AMD GPUs, you can specify an AMD GPU under `resources`. Below are a few examples.
 
 ## Deployment
 
```
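For illustration, a run configuration that requests an AMD GPU under `resources` might look like the sketch below (the task name, image tag, and GPU model are assumptions for this example, not taken from the PR; only the `resources.gpu` field is what the README text describes):

```yaml
type: task
name: rocm-smoke-test  # hypothetical task name

# A ROCm-enabled container image; the exact image and tag are hypothetical.
image: rocm/pytorch:latest
commands:
  - rocm-smi  # print GPU status to verify the device is visible in the container
resources:
  gpu: MI300X  # request an AMD GPU by model name
```

With the `runpod` backend, dstack would provision a matching cloud instance; with an SSH fleet, it would schedule the task onto an on-prem host whose AMD GPU satisfies the request.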