From 680f33f21dae18d5162794ccf1708c3be904128d Mon Sep 17 00:00:00 2001
From: Benedikt Rollik
Date: Tue, 18 Nov 2025 15:04:20 +0100
Subject: [PATCH] fix typo

---
 pages/gpu/reference-content/migration-h100.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pages/gpu/reference-content/migration-h100.mdx b/pages/gpu/reference-content/migration-h100.mdx
index df1ce474f3..b15334e3b3 100644
--- a/pages/gpu/reference-content/migration-h100.mdx
+++ b/pages/gpu/reference-content/migration-h100.mdx
@@ -86,7 +86,7 @@ H100 PCIe-based GPU Instances are not End-of-Life (EOL), but due to limited avai
 #### Is H100-SXM-2-80G compatible with my current setup?
 Yes — it runs the same CUDA toolchain and supports standard frameworks (PyTorch, TensorFlow, etc.). No changes to your code base are required when upgrading to an SXM-based GPU Instance.
 
-### Why is the H100-SXM better for multi-GPU workloads?
+#### Why is the H100-SXM better for multi-GPU workloads?
 
 The NVIDIA H100-SXM outperforms the H100-PCIe in multi-GPU configurations, primarily due to its higher interconnect bandwidth and greater power capacity. It uses fourth-generation NVLink and NVSwitch, delivering up to **900 GB/s of bidirectional bandwidth** for fast GPU-to-GPU communication. In contrast, the H100-PCIe is limited to a **theoretical maximum of 128 GB/s** via PCIe Gen 5, which becomes a bottleneck in communication-heavy workloads such as large-scale AI training and HPC. The H100-SXM also provides **HBM3 memory** with up to **3.35 TB/s of bandwidth**, compared to **2 TB/s** with the H100-PCIe's HBM2e, improving performance in memory-bound tasks.
 
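Reviewer note, outside the patch itself: the two claims in this hunk (existing CUDA code runs unchanged, and NVLink peer-to-peer links drive the multi-GPU advantage) lend themselves to a quick sanity check after migrating. Below is a minimal sketch assuming PyTorch built with CUDA support is installed; the `describe_gpus` helper is a name chosen here for illustration, not something from the docs page.

```python
# Minimal sanity-check sketch, assuming PyTorch built with CUDA support.
# Helper name is illustrative, not part of the patched docs page.
import torch

def describe_gpus() -> None:
    """List visible GPUs and report which pairs allow peer-to-peer access."""
    if not torch.cuda.is_available():
        print("No CUDA device visible.")
        return

    count = torch.cuda.device_count()
    for i in range(count):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")

    # Peer-to-peer access is what NVLink/NVSwitch accelerates on SXM boards;
    # on PCIe-only topologies it may be unavailable or routed more slowly.
    for i in range(count):
        for j in range(count):
            if i != j and torch.cuda.can_device_access_peer(i, j):
                print(f"GPU {i} -> GPU {j}: peer access available")

if __name__ == "__main__":
    describe_gpus()
```

On an SXM-based Instance, `nvidia-smi topo -m` should likewise report NVLink (`NV*`) entries between GPU pairs, whereas a PCIe-only topology routes peer traffic through the PCIe hierarchy.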