---
title: GPU H100 SXM Instances with 2 and 4 GPUs now available in par-2
status: changed
date: 2025-08-11
category: compute
product: gpu-instances
---

Following the launch of our H100-SXM GPU Instances — delivering industry-leading conversational AI performance and accelerating large language models (LLMs) — we’re pleased to announce the availability of new 2-GPU and 4-GPU configurations.

With NVLink GPU-to-GPU communication, the 4-GPU option unlocks even greater possibilities and higher performance for your deployments. Both configurations are now available in the Paris (PAR2) region.

Key features include:
- NVIDIA H100 SXM 80 GB (Hopper architecture)
- 4th-generation Tensor Cores
- 4th-generation NVLink, offering 900 GB/s of GPU-to-GPU interconnect bandwidth
- Transformer Engine
- Available now in 2-, 4-, and 8-GPU configurations per VM (additional stock deployments ongoing)
