From e3351c2a8c28dac321b06ffbf2ef51c22c1c4ed7 Mon Sep 17 00:00:00 2001 From: peterschmidt85 Date: Mon, 17 Nov 2025 12:54:30 +0100 Subject: [PATCH] [Docs] Fix incorrect URLs --- README.md | 2 +- docs/blog/posts/amd-on-tensorwave.md | 4 +-- ...benchmark-amd-containers-and-partitions.md | 2 +- docs/blog/posts/gh200-on-lambda.md | 4 +-- docs/blog/posts/gpu-health-checks.md | 2 +- docs/blog/posts/instance-volumes.md | 4 +-- docs/blog/posts/intel-gaudi.md | 2 +- docs/blog/posts/kubernetes-beta.md | 2 +- docs/blog/posts/nebius.md | 2 +- docs/blog/posts/prometheus.md | 2 +- docs/docs/concepts/backends.md | 4 +-- docs/docs/concepts/fleets.md | 4 +-- docs/docs/concepts/volumes.md | 12 ++++---- docs/docs/guides/clusters.md | 8 ++--- docs/docs/guides/kubernetes.md | 2 +- docs/docs/guides/metrics.md | 2 +- docs/docs/guides/protips.md | 6 ++-- docs/docs/guides/server-deployment.md | 2 +- docs/docs/guides/troubleshooting.md | 10 +++---- docs/docs/installation/index.md | 2 +- docs/layouts/custom.yml | 30 +++++++++---------- examples/accelerators/amd/README.md | 2 +- examples/accelerators/intel/README.md | 2 +- examples/accelerators/tenstorrent/README.md | 2 +- 24 files changed, 57 insertions(+), 57 deletions(-) diff --git a/README.md b/README.md index 55d93f8d0..a92f37e82 100644 --- a/README.md +++ b/README.md @@ -52,7 +52,7 @@ Backends can be set up in `~/.dstack/server/config.yml` or through the [project For more details, see [Backends](https://dstack.ai/docs/concepts/backends). -> When using `dstack` with on-prem servers, backend configuration isn’t required. Simply create [SSH fleets](https://dstack.ai/docs/concepts/fleets#ssh) once the server is up. +> When using `dstack` with on-prem servers, backend configuration isn’t required. Simply create [SSH fleets](https://dstack.ai/docs/concepts/fleets#ssh-fleets) once the server is up. 
##### Start the server diff --git a/docs/blog/posts/amd-on-tensorwave.md b/docs/blog/posts/amd-on-tensorwave.md index 80a766b94..5a062f454 100644 --- a/docs/blog/posts/amd-on-tensorwave.md +++ b/docs/blog/posts/amd-on-tensorwave.md @@ -15,7 +15,7 @@ to orchestrate AI containers with any AI cloud vendor, whether they provide on-d In this tutorial, we’ll walk you through how `dstack` can be used with [TensorWave :material-arrow-top-right-thin:{ .external }](https://tensorwave.com/){:target="_blank"} using -[SSH fleets](../../docs/concepts/fleets.md#ssh). +[SSH fleets](../../docs/concepts/fleets.md#ssh-fleets). @@ -235,6 +235,6 @@ Want to see how it works? Check out the video below: !!! info "What's next?" - 1. See [SSH fleets](../../docs/concepts/fleets.md#ssh) + 1. See [SSH fleets](../../docs/concepts/fleets.md#ssh-fleets) 2. Read about [dev environments](../../docs/concepts/dev-environments.md), [tasks](../../docs/concepts/tasks.md), and [services](../../docs/concepts/services.md) 3. Join [Discord :material-arrow-top-right-thin:{ .external }](https://discord.gg/u8SmfwPpMd) diff --git a/docs/blog/posts/benchmark-amd-containers-and-partitions.md b/docs/blog/posts/benchmark-amd-containers-and-partitions.md index e435f9761..cf1d8baaa 100644 --- a/docs/blog/posts/benchmark-amd-containers-and-partitions.md +++ b/docs/blog/posts/benchmark-amd-containers-and-partitions.md @@ -122,7 +122,7 @@ The full, reproducible steps are available in our GitHub repository. Below is a #### Creating a fleet -We first defined a `dstack` [SSH fleet](../../docs/concepts/fleets.md#ssh) to manage the two-node cluster. +We first defined a `dstack` [SSH fleet](../../docs/concepts/fleets.md#ssh-fleets) to manage the two-node cluster. 
```yaml type: fleet diff --git a/docs/blog/posts/gh200-on-lambda.md b/docs/blog/posts/gh200-on-lambda.md index 1741e6f2e..e7831a76c 100644 --- a/docs/blog/posts/gh200-on-lambda.md +++ b/docs/blog/posts/gh200-on-lambda.md @@ -11,7 +11,7 @@ categories: # Supporting ARM and NVIDIA GH200 on Lambda The latest update to `dstack` introduces support for NVIDIA GH200 instances on [Lambda](../../docs/concepts/backends.md#lambda) -and enables ARM-powered hosts, including GH200 and GB200, with [SSH fleets](../../docs/concepts/fleets.md#ssh). +and enables ARM-powered hosts, including GH200 and GB200, with [SSH fleets](../../docs/concepts/fleets.md#ssh-fleets). @@ -78,7 +78,7 @@ $ dstack apply -f .dstack.yml !!! info "Retry policy" Note, if GH200s are not available at the moment, you can specify the [retry policy](../../docs/concepts/dev-environments.md#retry-policy) in your run configuration so that `dstack` can run the configuration once the GPU becomes available. -> If you have GH200 or GB200-powered hosts already provisioned via Lambda, another cloud provider, or on-prem, you can now use them with [SSH fleets](../../docs/concepts/fleets.md#ssh). +> If you have GH200 or GB200-powered hosts already provisioned via Lambda, another cloud provider, or on-prem, you can now use them with [SSH fleets](../../docs/concepts/fleets.md#ssh-fleets). !!! info "What's next?" 1. 
Sign up with [Lambda :material-arrow-top-right-thin:{ .external }](https://cloud.lambda.ai/sign-up?_gl=1*1qovk06*_gcl_au*MTg2MDc3OTAyOS4xNzQyOTA3Nzc0LjE3NDkwNTYzNTYuMTc0NTQxOTE2MS4xNzQ1NDE5MTYw*_ga*MTE2NDM5MzI0My4xNzQyOTA3Nzc0*_ga_43EZT1FM6Q*czE3NDY3MTczOTYkbzM0JGcxJHQxNzQ2NzE4MDU2JGo1NyRsMCRoMTU0Mzg1NTU1OQ..){:target="_blank"} diff --git a/docs/blog/posts/gpu-health-checks.md b/docs/blog/posts/gpu-health-checks.md index fdd5b75e6..1571935b6 100644 --- a/docs/blog/posts/gpu-health-checks.md +++ b/docs/blog/posts/gpu-health-checks.md @@ -55,7 +55,7 @@ For active checks today, you can run [NCCL tests](../../examples/clusters/nccl-t ## Supported backends -Passive GPU health checks work on AWS (except with custom `os_images`), Azure (except A10 GPUs), GCP, OCI, and [SSH fleets](../../docs/concepts/fleets.md#ssh) where DCGM is installed and configured for background checks. +Passive GPU health checks work on AWS (except with custom `os_images`), Azure (except A10 GPUs), GCP, OCI, and [SSH fleets](../../docs/concepts/fleets.md#ssh-fleets) where DCGM is installed and configured for background checks. > Fleets created before version 0.19.22 need to be recreated to enable this feature. diff --git a/docs/blog/posts/instance-volumes.md b/docs/blog/posts/instance-volumes.md index 95b48cee1..07b82012c 100644 --- a/docs/blog/posts/instance-volumes.md +++ b/docs/blog/posts/instance-volumes.md @@ -41,8 +41,8 @@ resources: -> Instance volumes work with both [SSH fleets](../../docs/concepts/fleets.md#ssh) -> and [cloud fleets](../../docs/concepts/fleets.md#cloud), and it is possible to mount any folders on the instance, +> Instance volumes work with both [SSH fleets](../../docs/concepts/fleets.md#ssh-fleets) +> and [cloud fleets](../../docs/concepts/fleets.md#backend-fleets), and it is possible to mount any folders on the instance, > whether they are regular folders or NFS share mounts. 
The configuration above mounts `/root/.dstack/cache` on the instance to `/root/.cache` inside container. diff --git a/docs/blog/posts/intel-gaudi.md b/docs/blog/posts/intel-gaudi.md index 6f95f49d0..887ae32a6 100644 --- a/docs/blog/posts/intel-gaudi.md +++ b/docs/blog/posts/intel-gaudi.md @@ -44,7 +44,7 @@ machines equipped with Intel Gaudi accelerators. ## Create a fleet To manage container workloads on on-prem machines with Intel Gaudi accelerators, start by configuring an -[SSH fleet](../../docs/concepts/fleets.md#ssh). Here’s an example configuration for your fleet: +[SSH fleet](../../docs/concepts/fleets.md#ssh-fleets). Here’s an example configuration for your fleet:
diff --git a/docs/blog/posts/kubernetes-beta.md b/docs/blog/posts/kubernetes-beta.md index 71615165a..cc2529e7f 100644 --- a/docs/blog/posts/kubernetes-beta.md +++ b/docs/blog/posts/kubernetes-beta.md @@ -299,7 +299,7 @@ VM-based backends also offer more granular control over cluster provisioning. ### SSH fleets vs Kubernetes backend -If you’re using on-prem servers and Kubernetes isn’t a requirement, [SSH fleets](../../docs/concepts/fleets.md#ssh) may be simpler. +If you’re using on-prem servers and Kubernetes isn’t a requirement, [SSH fleets](../../docs/concepts/fleets.md#ssh-fleets) may be simpler. They provide a lightweight and flexible alternative. ### AMD GPUs diff --git a/docs/blog/posts/nebius.md b/docs/blog/posts/nebius.md index ef484b0f3..d24681959 100644 --- a/docs/blog/posts/nebius.md +++ b/docs/blog/posts/nebius.md @@ -103,7 +103,7 @@ $ dstack apply -f .dstack.yml The new `nebius` backend supports CPU and GPU instances, [fleets](../../docs/concepts/fleets.md), [distributed tasks](../../docs/concepts/tasks.md#distributed-tasks), and more. -> Support for [network volumes](../../docs/concepts/volumes.md#network) and accelerated cluster +> Support for [network volumes](../../docs/concepts/volumes.md#network-volumes) and accelerated cluster interconnects is coming soon. !!! info "What's next?" diff --git a/docs/blog/posts/prometheus.md b/docs/blog/posts/prometheus.md index 5482d0c13..2594619c0 100644 --- a/docs/blog/posts/prometheus.md +++ b/docs/blog/posts/prometheus.md @@ -49,7 +49,7 @@ For a full list of available metrics and labels, check out [Metrics](../../docs/ ??? info "NVIDIA" NVIDIA DCGM metrics are automatically collected for `aws`, `azure`, `gcp`, and `oci` backends, - as well as for [SSH fleets](../../docs/concepts/fleets.md#ssh). + as well as for [SSH fleets](../../docs/concepts/fleets.md#ssh-fleets). 
To ensure NVIDIA DCGM metrics are collected from SSH fleets, ensure the `datacenter-gpu-manager-4-core`, `datacenter-gpu-manager-4-proprietary`, and `datacenter-gpu-manager-exporter` packages are installed on the hosts. diff --git a/docs/docs/concepts/backends.md b/docs/docs/concepts/backends.md index dfd25df24..c89875486 100644 --- a/docs/docs/concepts/backends.md +++ b/docs/docs/concepts/backends.md @@ -1157,12 +1157,12 @@ Also, the `vastai` backend supports on-demand instances only. Spot instance supp ## On-prem In on-prem environments, the [Kubernetes](#kubernetes) backend can be used if a Kubernetes cluster is already set up and configured. -However, often [SSH fleets](../concepts/fleets.md#ssh) are a simpler and lighter alternative. +However, often [SSH fleets](../concepts/fleets.md#ssh-fleets) are a simpler and lighter alternative. ### SSH fleets SSH fleets require no backend configuration. -All you need to do is [provide hostnames and SSH credentials](../concepts/fleets.md#ssh), and `dstack` sets up a fleet that can orchestrate container-based runs on your servers. +All you need to do is [provide hostnames and SSH credentials](../concepts/fleets.md#ssh-fleets), and `dstack` sets up a fleet that can orchestrate container-based runs on your servers. SSH fleets support the same features as [VM-based](#vm-based) backends. diff --git a/docs/docs/concepts/fleets.md b/docs/docs/concepts/fleets.md index c23397327..33824746d 100644 --- a/docs/docs/concepts/fleets.md +++ b/docs/docs/concepts/fleets.md @@ -9,7 +9,7 @@ Fleets act both as pools of instances and as templates for how those instances a When you run `dstack apply` to start a dev environment, task, or service, `dstack` will reuse idle instances from an existing fleet whenever available. -## Backend fleets { #backend-fleets } +## Backend fleets If you configured [backends](backends.md), `dstack` can provision fleets on the fly. However, it’s recommended to define fleets explicitly. 
@@ -269,7 +269,7 @@ retry: [`max_price`](../reference/dstack.yml/fleet.md#max_price), and among [others](../reference/dstack.yml/fleet.md). -## SSH fleets { #ssh-fleets } +## SSH fleets If you have a group of on-prem servers accessible via SSH, you can create an SSH fleet. diff --git a/docs/docs/concepts/volumes.md b/docs/docs/concepts/volumes.md index 8581f2f7a..fa5d73c71 100644 --- a/docs/docs/concepts/volumes.md +++ b/docs/docs/concepts/volumes.md @@ -4,12 +4,12 @@ Volumes enable data persistence between runs of dev environments, tasks, and ser `dstack` supports two kinds of volumes: -* [Network volumes](#network) — provisioned via backends and mounted to specific container directories. +* [Network volumes](#network-volumes) — provisioned via backends and mounted to specific container directories. Ideal for persistent storage. -* [Instance volumes](#instance) — bind directories on the host instance to container directories. +* [Instance volumes](#instance-volumes) — bind directories on the host instance to container directories. Useful as a cache for cloud fleets or for persistent storage with SSH fleets. -## Network volumes { #network } +## Network volumes Network volumes are currently supported for the `aws`, `gcp`, and `runpod` backends. @@ -222,7 +222,7 @@ If you've registered an existing volume, it will be de-registered with `dstack` ??? info "Can I attach network volumes to multiple runs or instances?" You can mount a volume in multiple runs. This feature is currently supported only by the `runpod` backend. -## Instance volumes { #instance } +## Instance volumes Instance volumes allow mapping any directory on the instance where the run is executed to any path inside the container. This means that the data in instance volumes is persisted only if the run is executed on the same instance. 
@@ -257,7 +257,7 @@ Since persistence isn't guaranteed (instances may be interrupted or runs may occ volumes only for caching or with directories manually mounted to network storage. !!! info "Backends" - Instance volumes are currently supported for all backends except `runpod`, `vastai` and `kubernetes`, and can also be used with [SSH fleets](fleets.md#ssh). + Instance volumes are currently supported for all backends except `runpod`, `vastai`, and `kubernetes`, and can also be used with [SSH fleets](fleets.md#ssh-fleets). ??? info "Optional volumes" If the volume is not critical for your workload, you can mark it as `optional`. @@ -297,7 +297,7 @@ volumes: ### Use instance volumes with SSH fleets -If you control the instances (e.g. they are on-prem servers configured via [SSH fleets](fleets.md#ssh)), +If you control the instances (e.g. they are on-prem servers configured via [SSH fleets](fleets.md#ssh-fleets)), you can mount network storage (e.g., NFS or SMB) and use the mount points as instance volumes. For example, if you mount a network storage to `/mnt/nfs-storage` on all hosts of your SSH fleet, diff --git a/docs/docs/guides/clusters.md b/docs/docs/guides/clusters.md index 3d0935107..f77bc44a8 100644 --- a/docs/docs/guides/clusters.md +++ b/docs/docs/guides/clusters.md @@ -8,13 +8,13 @@ Ensure a fleet is created before you run any distributed task. This can be eithe ### SSH fleets -[SSH fleets](../concepts/fleets.md#ssh) can be used to create a fleet out of existing baremetals or VMs, e.g. if they are already pre-provisioned, or set up on-premises. +[SSH fleets](../concepts/fleets.md#ssh-fleets) can be used to create a fleet out of existing bare-metal servers or VMs, e.g. if they are already pre-provisioned, or set up on-premises. > For SSH fleets, fast interconnect is supported provided that the hosts are pre-configured with the appropriate interconnect drivers. 
### Cloud fleets -[Cloud fleets](../concepts/fleets.md#cloud) allow to provision interconnected clusters across supported backends. +[Cloud fleets](../concepts/fleets.md#backend-fleets) allow provisioning interconnected clusters across supported backends. For cloud fleets, fast interconnect is currently supported only on the `aws`, `gcp`, `nebius`, and `runpod` backends. === "AWS" @@ -68,7 +68,7 @@ To test the interconnect of a created fleet, ensure you run [NCCL](../../example ### Instance volumes -[Instance volumes](../concepts/volumes.md#instance) enable mounting any folder from the host into the container, allowing data persistence during distributed tasks. +[Instance volumes](../concepts/volumes.md#instance-volumes) enable mounting any folder from the host into the container, allowing data persistence during distributed tasks. Instance volumes can be used to mount: @@ -77,7 +77,7 @@ Instance volumes can be used to mount: ### Network volumes -Currently, no backend supports multi-attach [network volumes](../concepts/volumes.md#network) for distributed tasks. However, single-attach volumes can be used by leveraging volume name [interpolation syntax](../concepts/volumes.md#distributed-tasks). This approach mounts a separate single-attach volume to each node. +Currently, no backend supports multi-attach [network volumes](../concepts/volumes.md#network-volumes) for distributed tasks. However, single-attach volumes can be used by leveraging volume name [interpolation syntax](../concepts/volumes.md#distributed-tasks). This approach mounts a separate single-attach volume to each node. !!! info "What's next?" 1. 
Read about [distributed tasks](../concepts/tasks.md#distributed-tasks), [fleets](../concepts/fleets.md), and [volumes](../concepts/volumes.md) diff --git a/docs/docs/guides/kubernetes.md b/docs/docs/guides/kubernetes.md index d47eb322a..5e4ded65a 100644 --- a/docs/docs/guides/kubernetes.md +++ b/docs/docs/guides/kubernetes.md @@ -111,4 +111,4 @@ For more details on clusters, see the [corresponding guide](clusters.md). If your priority is orchestrating cloud GPUs and Kubernetes isn’t a must, [VM-based](../concepts/backends.md#vm-based) backends are a better fit thanks to their native cloud integration. - For on-prem GPUs where Kubernetes is optional, [SSH fleets](../concepts/fleets.md#ssh) provide a simpler and more lightweight alternative. + For on-prem GPUs where Kubernetes is optional, [SSH fleets](../concepts/fleets.md#ssh-fleets) provide a simpler and more lightweight alternative. diff --git a/docs/docs/guides/metrics.md b/docs/docs/guides/metrics.md index 4eb5d60b7..394bb59d4 100644 --- a/docs/docs/guides/metrics.md +++ b/docs/docs/guides/metrics.md @@ -44,7 +44,7 @@ In addition to the essential metrics available via the CLI and UI, `dstack` expo ??? info "NVIDIA DCGM" NVIDIA DCGM metrics are automatically collected for `aws`, `azure`, `gcp`, and `oci` backends, - as well as for [SSH fleets](../concepts/fleets.md#ssh). + as well as for [SSH fleets](../concepts/fleets.md#ssh-fleets). To ensure NVIDIA DCGM metrics are collected from SSH fleets, ensure the `datacenter-gpu-manager-4-core`, `datacenter-gpu-manager-4-proprietary`, and `datacenter-gpu-manager-exporter` packages are installed on the hosts. diff --git a/docs/docs/guides/protips.md b/docs/docs/guides/protips.md index 039caaf20..d676ddadf 100644 --- a/docs/docs/guides/protips.md +++ b/docs/docs/guides/protips.md @@ -215,11 +215,11 @@ To change the default idle duration, set ## Volumes To persist data across runs, it is recommended to use volumes. 
-`dstack` supports two types of volumes: [network](../concepts/volumes.md#network) +`dstack` supports two types of volumes: [network](../concepts/volumes.md#network-volumes) (for persisting data even if the instance is interrupted) -and [instance](../concepts/volumes.md#instance) (useful for persisting cached data across runs while the instance remains active). +and [instance](../concepts/volumes.md#instance-volumes) (useful for persisting cached data across runs while the instance remains active). -> If you use [SSH fleets](../concepts/fleets.md#ssh), you can mount network storage (e.g., NFS or SMB) to the hosts and access it in runs via instance volumes. +> If you use [SSH fleets](../concepts/fleets.md#ssh-fleets), you can mount network storage (e.g., NFS or SMB) to the hosts and access it in runs via instance volumes. ## Environment variables diff --git a/docs/docs/guides/server-deployment.md b/docs/docs/guides/server-deployment.md index 4976bf060..5edf8007b 100644 --- a/docs/docs/guides/server-deployment.md +++ b/docs/docs/guides/server-deployment.md @@ -80,7 +80,7 @@ The server loads this file on startup. Alternatively, you can configure backends on the [project settings page](../concepts/projects.md#backends) via UI. > For using `dstack` with on-prem servers, no backend configuration is required. -> Use [SSH fleets](../concepts/fleets.md#ssh) instead. +> Use [SSH fleets](../concepts/fleets.md#ssh-fleets) instead. ## State persistence diff --git a/docs/docs/guides/troubleshooting.md b/docs/docs/guides/troubleshooting.md index ac4ef4bc9..581b8ce05 100644 --- a/docs/docs/guides/troubleshooting.md +++ b/docs/docs/guides/troubleshooting.md @@ -38,7 +38,7 @@ Below are some of the reasons why this might happen. 
#### Cause 1: No capacity providers Before you can run any workloads, you need to configure a [backend](../concepts/backends.md), -create an [SSH fleet](../concepts/fleets.md#ssh), or sign up for +create an [SSH fleet](../concepts/fleets.md#ssh-fleets), or sign up for [dstack Sky :material-arrow-top-right-thin:{ .external }](https://sky.dstack.ai){:target="_blank"}. If you have configured a backend and still can't use it, check the output of `dstack server` for backend configuration errors. @@ -93,7 +93,7 @@ Examples: `gpu: amd` (one AMD GPU), `gpu: A10:4..8` (4 to 8 A10 GPUs), #### Cause 6: Network volumes -If your run configuration uses [network volumes](../concepts/volumes.md#network), +If your run configuration uses [network volumes](../concepts/volumes.md#network-volumes), `dstack` will only select instances from the same backend and region as the volumes. For AWS, the availability zone of the volume and the instance should also match. @@ -102,8 +102,8 @@ For AWS, the availability zone of the volume and the instance should also match. Some `dstack` features are not supported by all backends. If your configuration uses one of these features, `dstack` will only select offers from the backends that support it. -- [Cloud fleet](../concepts/fleets.md#cloud) configurations, - [Instance volumes](../concepts/volumes.md#instance), +- [Backend fleet](../concepts/fleets.md#backend-fleets) configurations, + [Instance volumes](../concepts/volumes.md#instance-volumes), and [Privileged containers](../reference/dstack.yml/dev-environment.md#privileged) are supported by all backends except `runpod`, `vastai`, and `kubernetes`. - [Clusters](../concepts/fleets.md#cloud-placement) @@ -120,7 +120,7 @@ If you are using you will not see marketplace offers until you top up your balance. Alternatively, you can configure your own cloud accounts on the [project settings page](../concepts/projects.md#backends) -or use [SSH fleets](../concepts/fleets.md#ssh). 
+or use [SSH fleets](../concepts/fleets.md#ssh-fleets). ### Provisioning fails diff --git a/docs/docs/installation/index.md b/docs/docs/installation/index.md index a27994189..073c4b176 100644 --- a/docs/docs/installation/index.md +++ b/docs/docs/installation/index.md @@ -15,7 +15,7 @@ Backends can be set up in `~/.dstack/server/config.yml` or through the [project For more details, see [Backends](../concepts/backends.md). ??? info "SSH fleets" - When using `dstack` with on-prem servers, backend configuration isn’t required. Simply create [SSH fleets](../concepts/fleets.md#ssh) once the server is up. + When using `dstack` with on-prem servers, backend configuration isn’t required. Simply create [SSH fleets](../concepts/fleets.md#ssh-fleets) once the server is up. ### Start the server diff --git a/docs/layouts/custom.yml b/docs/layouts/custom.yml index 8f67c16ce..0ab859b85 100644 --- a/docs/layouts/custom.yml +++ b/docs/layouts/custom.yml @@ -50,17 +50,17 @@ size: { width: 1200, height: 630 } layers: - background: color: "black" - - size: { width: 44, height: 44 } - offset: { x: 970, y: 521 } + - size: { width: 50, height: 50 } + offset: { x: 935, y: 521 } background: image: *logo - - size: { width: 300, height: 42 } - offset: { x: 1018, y: 525 } + - size: { width: 340, height: 55 } + offset: { x: 993, y: 521 } typography: content: *site_name color: "white" - - size: { width: 850, height: 320 } - offset: { x: 80, y: 115 } + - size: { width: 1000, height: 220 } + offset: { x: 80, y: 280 } typography: content: *page_title overflow: shrink @@ -69,15 +69,15 @@ layers: line: amount: 3 height: 1.25 - - size: { width: 850, height: 64 } - offset: { x: 80, y: 495 } - typography: - content: *page_description - align: start - color: "white" - line: - amount: 2 - height: 1.5 + # - size: { width: 850, height: 64 } + # offset: { x: 80, y: 495 } + # typography: + # content: *page_description + # align: start + # color: "white" + # line: + # amount: 2 + # height: 1.5 tags: diff --git 
a/examples/accelerators/amd/README.md b/examples/accelerators/amd/README.md index d75841d15..970db2d98 100644 --- a/examples/accelerators/amd/README.md +++ b/examples/accelerators/amd/README.md @@ -1,7 +1,7 @@ # AMD `dstack` supports running dev environments, tasks, and services on AMD GPUs. -You can do that by setting up an [SSH fleet](https://dstack.ai/docs/concepts/fleets#ssh) +You can do that by setting up an [SSH fleet](https://dstack.ai/docs/concepts/fleets#ssh-fleets) with on-prem AMD GPUs or configuring a backend that offers AMD GPUs such as the `runpod` backend. ## Deployment diff --git a/examples/accelerators/intel/README.md b/examples/accelerators/intel/README.md index f5025196e..5d59e2f95 100644 --- a/examples/accelerators/intel/README.md +++ b/examples/accelerators/intel/README.md @@ -1,7 +1,7 @@ # Intel Gaudi `dstack` supports running dev environments, tasks, and services on Intel Gaudi GPUs via -[SSH fleets](https://dstack.ai/docs/concepts/fleets#ssh). +[SSH fleets](https://dstack.ai/docs/concepts/fleets#ssh-fleets). ## Deployment diff --git a/examples/accelerators/tenstorrent/README.md b/examples/accelerators/tenstorrent/README.md index 0275e9e27..5ce33567e 100644 --- a/examples/accelerators/tenstorrent/README.md +++ b/examples/accelerators/tenstorrent/README.md @@ -43,7 +43,7 @@ image: https://dstack.ai/static-assets/static-assets/images/dstack-tenstorrent-m
- For more details on fleet configuration, refer to [SSH fleets](https://dstack.ai/docs/concepts/fleets#ssh). + For more details on fleet configuration, refer to [SSH fleets](https://dstack.ai/docs/concepts/fleets#ssh-fleets). ## Services
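
---

The patch above amounts to a handful of anchor renames applied across the docs tree: `#ssh` → `#ssh-fleets`, `#cloud` → `#backend-fleets`, `#network` → `#network-volumes`, and `#instance` → `#instance-volumes`. As a hypothetical sketch (not part of this patch), a mechanical rewrite like this can be scripted so no stale link is missed; the regex below is an assumption about the link shapes that actually occur in these docs:

```python
import re

# Anchor renames performed by this patch (stale fragment -> new heading slug).
ANCHOR_RENAMES = {
    "ssh": "ssh-fleets",
    "cloud": "backend-fleets",
    "network": "network-volumes",
    "instance": "instance-volumes",
}

# Match markdown link targets pointing at fleets/volumes pages -- either
# relative .md paths or https://dstack.ai/docs/... URLs -- that end in one
# of the stale fragments. A fragment like "#ssh-fleets" or "#cloud-placement"
# is left alone, so the rewrite is idempotent.
LINK_RE = re.compile(
    r"(\((?:[^()\s]*(?:fleets|volumes)(?:\.md)?)#)"
    r"(ssh|cloud|network|instance)"
    r"(\))"
)

def rewrite_anchors(text: str) -> str:
    """Replace stale anchors with the renamed heading slugs."""
    return LINK_RE.sub(
        lambda m: m.group(1) + ANCHOR_RENAMES[m.group(2)] + m.group(3),
        text,
    )
```

Running `rewrite_anchors` over each `.md` file and writing the result back would reproduce the link changes in this patch; anchors the patch deliberately leaves untouched (such as `#cloud-placement` or `#distributed-tasks`) fall through unchanged.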