diff --git a/changelog/august2025/2025-08-11-gpu-instances-changed-gpu-h100-sxm-instances-with-2-and.mdx b/changelog/august2025/2025-08-11-gpu-instances-changed-gpu-h100-sxm-instances-with-2-and.mdx
new file mode 100644
index 0000000000..489e88c232
--- /dev/null
+++ b/changelog/august2025/2025-08-11-gpu-instances-changed-gpu-h100-sxm-instances-with-2-and.mdx
@@ -0,0 +1,19 @@
+---
+title: GPU H100 SXM Instances with 2 and 4 GPUs now available in par-2
+status: changed
+date: 2025-08-11
+category: compute
+product: gpu-instances
+---
+
+Following the launch of our H100-SXM GPU Instances, which deliver industry-leading conversational AI performance and accelerate large language models (LLMs), we’re pleased to announce the availability of new 2-GPU and 4-GPU configurations.
+
+With NVLink GPU-to-GPU communication, the 4-GPU option unlocks even greater possibilities and higher performance for your deployments. These configurations are now available in the Paris (PAR2) region.
+
+Key features include:
+- NVIDIA H100 SXM 80 GB (Hopper architecture)
+- 4th-generation Tensor Cores
+- 4th-generation NVLink, which offers 900 GB/s of GPU-to-GPU interconnect
+- Transformer Engine
+- Available now in 2, 4, and 8 GPUs per VM (additional stock deployments ongoing)
+