From bfa4bf4c8a2ca97549de08bea4d08b33294c065b Mon Sep 17 00:00:00 2001
From: Changelog bot
Date: Mon, 11 Aug 2025 13:55:55 +0000
Subject: [PATCH 1/2] feat(changelog): add new entry

---
 ...anged-gpu-h100-sxm-instances-with-2-and.mdx | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)
 create mode 100644 changelog/august2025/2025-08-11-gpu-instances-changed-gpu-h100-sxm-instances-with-2-and.mdx

diff --git a/changelog/august2025/2025-08-11-gpu-instances-changed-gpu-h100-sxm-instances-with-2-and.mdx b/changelog/august2025/2025-08-11-gpu-instances-changed-gpu-h100-sxm-instances-with-2-and.mdx
new file mode 100644
index 0000000000..61753a3207
--- /dev/null
+++ b/changelog/august2025/2025-08-11-gpu-instances-changed-gpu-h100-sxm-instances-with-2-and.mdx
@@ -0,0 +1,18 @@
+---
+title: GPU H100 SXM Instances with 2 and 4 GPUs now available in par-2
+status: changed
+date: 2025-08-11
+category: compute
+product: gpu-instances
+---
+
+Following the launch of our latest H100-SXM GPU Instances, delivering industry-leading conversational AI and speeding up large language models (LLMs), we are delighted to announce the availability of these instances in 2 GPUs and 4 GPUs sizes. The NVlink GPU-GPU communications and the 4 GPUs size brings even more possibilities and higher performance for your deployments. Available in the Paris (PAR2) region.
+
+Key features include:
+
+- NVIDIA H100 SXM 80 GB (Hopper architecture)
+- 4th generation Tensor Cores
+- 4th generation NVLink, which offers 900 GB/s of GPU-to-GPU interconnect
+- Transformer Engine
+- Available now in sizes of 2, 4, and 8 GPUs per VM (additional stock deployments ongoing)
+

From f82deba99e26272b7e8bd89eb72390da7dadb6c9 Mon Sep 17 00:00:00 2001
From: Benedikt Rollik
Date: Mon, 11 Aug 2025 17:29:19 +0200
Subject: [PATCH 2/2] Update changelog/august2025/2025-08-11-gpu-instances-changed-gpu-h100-sxm-instances-with-2-and.mdx

---
 ...u-instances-changed-gpu-h100-sxm-instances-with-2-and.mdx | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/changelog/august2025/2025-08-11-gpu-instances-changed-gpu-h100-sxm-instances-with-2-and.mdx b/changelog/august2025/2025-08-11-gpu-instances-changed-gpu-h100-sxm-instances-with-2-and.mdx
index 61753a3207..489e88c232 100644
--- a/changelog/august2025/2025-08-11-gpu-instances-changed-gpu-h100-sxm-instances-with-2-and.mdx
+++ b/changelog/august2025/2025-08-11-gpu-instances-changed-gpu-h100-sxm-instances-with-2-and.mdx
@@ -6,10 +6,11 @@ category: compute
 product: gpu-instances
 ---
 
-Following the launch of our latest H100-SXM GPU Instances, delivering industry-leading conversational AI and speeding up large language models (LLMs), we are delighted to announce the availability of these instances in 2 GPUs and 4 GPUs sizes. The NVlink GPU-GPU communications and the 4 GPUs size brings even more possibilities and higher performance for your deployments. Available in the Paris (PAR2) region.
+Following the launch of our H100-SXM GPU Instances, delivering industry-leading conversational AI performance and accelerating large language models (LLMs), we're pleased to announce the availability of new 2-GPU and 4-GPU configurations.
 
-Key features include:
+With NVLink GPU-to-GPU communication, the 4-GPU option unlocks even greater possibilities and higher performance for your deployments. Now available in the Paris (PAR2) region.
+Key features include:
 
 - NVIDIA H100 SXM 80 GB (Hopper architecture)
 - 4th generation Tensor Cores
 - 4th generation NVLink, which offers 900 GB/s of GPU-to-GPU interconnect