diff --git a/cloud-accounts/node-groups.mdx b/cloud-accounts/node-groups.mdx
index 50d1df3..64d5e64 100644
--- a/cloud-accounts/node-groups.mdx
+++ b/cloud-accounts/node-groups.mdx
@@ -112,9 +112,11 @@ Create custom node groups for specialized workloads like GPU processing, high-me
For GPU workloads, select instances with GPU support:
- - **AWS**: `g4dn.xlarge`, `p3.2xlarge`
+ - **AWS**: `g4dn.xlarge`, `p3.2xlarge`, `p4d.24xlarge`, `p5.4xlarge`, `p5.48xlarge`, `p5e.48xlarge`, `p5en.48xlarge`
- **Azure**: `Standard_NC4as_T4_v3`
- **GCP**: `g2-standard-4`
+
+ See [Running GPU workloads](/other/gpu-workloads#supported-gpu-instance-types) for the full list of supported GPU instance types and their specs.
diff --git a/other/gpu-workloads.mdx b/other/gpu-workloads.mdx
index 601b865..19006a0 100644
--- a/other/gpu-workloads.mdx
+++ b/other/gpu-workloads.mdx
@@ -35,7 +35,7 @@ CPU workloads.
| Setting | Description |
|---------|-------------|
- | **Instance type** | Select a GPU-enabled instance type (see table below) |
+ | **Instance type** | Select a GPU-enabled instance type (see [Supported GPU instance types](#supported-gpu-instance-types) below) |
| **Minimum nodes** | Select minimum number of nodes that will be available at all times |
| **Maximum nodes** | The upper limit for autoscaling based on demand |
@@ -51,6 +51,24 @@ CPU workloads.
+## Supported GPU instance types
+
+Porter supports a range of NVIDIA GPU instance types on AWS. Choose the instance that matches your workload's compute, memory, and VRAM requirements.
+
+| Instance type | vCPUs | RAM | GPUs | GPU type | GPU memory |
+|---------------|-------|-----|------|----------|------------|
+| `g4dn.xlarge` | 4 | 16 GiB | 1 | NVIDIA T4 | 16 GB |
+| `p3.2xlarge` | 8 | 61 GiB | 1 | NVIDIA V100 | 16 GB |
+| `p4d.24xlarge` | 96 | 1,152 GiB | 8 | NVIDIA A100 | 320 GB |
+| `p5.4xlarge` | 16 | 256 GiB | 1 | NVIDIA H100 | 80 GB |
+| `p5.48xlarge` | 192 | 2,048 GiB | 8 | NVIDIA H100 | 640 GB |
+| `p5e.48xlarge` | 192 | 2,048 GiB | 8 | NVIDIA H200 | 1,128 GB |
+| `p5en.48xlarge` | 192 | 2,048 GiB | 8 | NVIDIA H200 | 1,128 GB |
+
+The full p5 family (`p5.4xlarge`, `p5.48xlarge`, `p5e.48xlarge`, and `p5en.48xlarge`) is suited for large-scale training and inference of foundation models.
+Use `p5.4xlarge` for single-GPU H100 workloads, and the `p5e`/`p5en` variants when you need H200 GPUs with expanded VRAM for larger models.
+Availability varies by region; check the AWS console for the latest region support.
+
## Deploying a GPU Application
Once your GPU node group is ready, you can deploy applications that use GPU resources.
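+
+As an illustration, a pod lands on a GPU node by requesting the standard Kubernetes `nvidia.com/gpu` resource. This is a minimal sketch, not Porter's own deployment flow; the node selector label below is a placeholder, so substitute whatever label your GPU node group actually carries.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: gpu-smoke-test
+spec:
+  restartPolicy: Never
+  containers:
+    - name: cuda
+      image: nvidia/cuda:12.4.1-base-ubuntu22.04
+      command: ["nvidia-smi"]   # prints GPU details if a GPU is attached
+      resources:
+        limits:
+          nvidia.com/gpu: 1     # asks the scheduler for a node with a free GPU
+  # Placeholder selector -- replace with the label your GPU node group uses
+  nodeSelector:
+    my-gpu-node-group-label: "true"
+```
+
+The `nvidia.com/gpu` limit is what steers scheduling: without it, the pod can land on a CPU-only node and `nvidia-smi` will fail.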