Adding nica2a preset #248

Merged
nileshnegi merged 7 commits into ROCm:candidate from pierreantoineH:candidate_a2anicpreset
Apr 29, 2026

Conversation

@pierreantoineH

Description

  • Adding nica2a preset (NIC all-to-all over GPUs via NIC executors, multi-node): stride/device grouping and NIC planes.

    • `STRIDE` — Step size for stride orbits on rank-major devices (`gcd` with total device count); no traffic between different orbits.
    • `GROUP_SIZE` — Devices per subgroup inside each stride orbit (natural rank-major order); must divide orbit size (default: all devices per rank × GPUs).
    • `NIC_A2A_SCOPE` — `intra`: transfers only within the same device subgroup; `inter`: only between different subgroups (same stride orbit only).
    • `NIC_A2A_NO_SAME_RANK` — When non-zero, omit transfers where source and destination are the same rank.
    • `NUM_NIC_PLANES` — Split NIC endpoints into this many disjoint planes (rank-major index modulo planes); traffic only between NICs in the same plane.
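
A minimal sketch of how these knobs combine to select transfer pairs (illustrative Python; function and parameter names are hypothetical, not the actual TransferBench implementation):

```python
from math import gcd

def nic_a2a_pairs(num_devices, stride=1, group_size=None,
                  scope="intra", no_same_rank=False, gpus_per_rank=1):
    """Enumerate (src, dst) device pairs under the rules above (sketch only)."""
    n_orbits = gcd(stride, num_devices)    # stepping by STRIDE mod M yields
    orbit_size = num_devices // n_orbits   # gcd(STRIDE, M) orbits of size M/gcd
    if group_size is None:
        group_size = orbit_size            # orbit-size default documented in this PR
    assert orbit_size % group_size == 0, "GROUP_SIZE must divide orbit size"
    pairs = []
    for o in range(n_orbits):
        orbit = [d for d in range(num_devices) if d % n_orbits == o]
        subgroup = {d: i // group_size for i, d in enumerate(orbit)}
        for src in orbit:
            for dst in orbit:              # no traffic between different orbits
                if src == dst:
                    continue
                same_group = subgroup[src] == subgroup[dst]
                if (scope == "intra") != same_group:
                    continue               # intra: same subgroup; inter: different
                if no_same_rank and src // gpus_per_rank == dst // gpus_per_rank:
                    continue               # NIC_A2A_NO_SAME_RANK filter
                pairs.append((src, dst))
    return pairs
```

For example, 8 devices with `STRIDE=2` give two orbits (evens and odds) and, by default, full all-to-all within each orbit but never across them.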

Motivation

Need to stress the scale-out network independently from intra-node communication with an all-to-all pattern.

Technical Details

The pattern is defined to use all NICs and can be used when there are several NICs per GPU.
The stride strategy differs from the one defined in PodAlltoall: here, the natural order inside each stride orbit is used. The syntax is close to that of PodAlltoall.
The scope allows running only inside the same subgroups, but there was also a need to run only between subgroups.
An option for the number of NIC planes has also been added (it can also be used in a rail topology) to restrict the all-to-all pattern to peers connected through the scale-out network.
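
As a sketch of the plane filter (assuming rank-major global NIC indices; illustrative, not the preset's actual code):

```python
def same_plane(src_nic, dst_nic, num_planes):
    # NIC endpoints are partitioned by rank-major index modulo NUM_NIC_PLANES;
    # a transfer is generated only when both endpoints fall in the same plane.
    return src_nic % num_planes == dst_nic % num_planes
```

With 8 NICs per node and `NUM_NIC_PLANES=8`, global NIC 0 on node 0 pairs with global NIC 8 (the same rail position on node 1) but not with NIC 9.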

Test Plan

Tested on up to 6 nodes, varying the different variables.

Test Result

The expected patterns were produced in each case.

Contributor

Copilot AI left a comment


Pull request overview

Adds a new nica2a preset to TransferBench to generate NIC-executor-based all-to-all traffic across GPUs with configurable device stride/grouping, intra/inter-group scope, same-rank filtering, and NIC plane partitioning—intended to stress scale-out networking independently of intra-node GPU communication.

Changes:

  • Register new nica2a preset in the preset map.
  • Implement NicAllToAllPreset with STRIDE/GROUP_SIZE orbit grouping, NIC plane filtering, and scope controls.
  • Document the new preset and its environment variables in the changelog.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 6 comments.

File Description
src/client/Presets/Presets.hpp Adds nica2a preset entry and includes the new preset header.
src/client/Presets/NicAllToAll.hpp Implements the NIC all-to-all preset logic, transfer generation, and reporting table.
CHANGELOG.md Documents the new preset and its configuration knobs.


Comment threads src/client/Presets/NicAllToAll.hpp (6, all outdated)
@AtlantaPepsi
Contributor

I'll test the new changes later today and tomorrow

Comment thread src/client/Presets/Presets.hpp Outdated
… limits

- Add missing `bool bytesSpecified` param to NicAllToAllPreset to match
  PresetFunc typedef; this was causing a compile error
- Fix per-NIC bandwidth attribution in RDMA-read mode: use dstRanks[i] as
  the row index (executor is on dstRank, not srcRank) and exeDevice.exeIndex
  as the NIC column (always the executor's NIC, consistent with NicPeerToPeer)
- Replace std::numeric_limits<double>::min() with ::lowest() for nicMax and
  rankMax to match NicRings.hpp convention
- Fix spacing alignment of NicAllToAllPreset entry in preset map
Copilot AI review requested due to automatic review settings April 12, 2026 16:21
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.



Comment thread src/client/Presets/NicAllToAll.hpp Outdated
Comment thread CHANGELOG.md Outdated
@nileshnegi
Collaborator

@pierreantoineH was encountering a build error, so I pushed a fix.
After this, nica2a works for me; tested on up to 8 nodes.
Please let me know if these changes look okay.

Default GROUP_SIZE to orbitSize (M / gcd(STRIDE, M)) instead of M so
the preset works out-of-the-box with non-trivial STRIDE values. Update
CHANGELOG to document the stride-aware default.
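
The stride-aware default from the commit above can be written as a one-liner (sketch; `M` is the total device count):

```python
from math import gcd

def default_group_size(M, stride):
    # orbitSize = M / gcd(STRIDE, M): by default each orbit is one subgroup
    return M // gcd(stride, M)
```

For example, 64 devices with `STRIDE=8` give 8 orbits of size 8, so the default `GROUP_SIZE` is 8.
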
@nileshnegi
Collaborator

If I understand correctly, `mpirun -np 8 ./TransferBench nica2a` will do all possible node-NIC combinations, and using something like `mpirun -np 8 -x STRIDE=8 ./TransferBench nica2a` will keep all node-NIC combinations within the rail?

@pierreantoineH
Author

If I understand correctly, `mpirun -np 8 ./TransferBench nica2a` will do all possible node-NIC combinations, and using something like `mpirun -np 8 -x STRIDE=8 ./TransferBench nica2a` will keep all node-NIC combinations within the rail?

The STRIDE argument forms groups of GPUs out of every STRIDE-th GPU and restricts communication to those GPUs. Inside one such group (called an orbit), the GPUs may still be split into several subgroups (depending on GROUP_SIZE), and communication can be either between GPUs inside the same subgroup or between GPUs in different subgroups (depending on NIC_A2A_SCOPE). The STRIDE option is intended to form GPU groups like those used by some parallelization strategies in workloads. So yes, your understanding of the expected effect is correct.
There is also another option, NUM_NIC_PLANES, which is very similar but is meant to select GPUs on the same planes or rails. NUM_NIC_PLANES targets adapting to physical topologies, while STRIDE targets experimenting with different kinds of communication patterns. Both options can be used simultaneously.
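
For the symmetric case in the question (8 nodes × 8 GPUs, one NIC per GPU, rank-major indexing — an assumed setup), the two options happen to pick the same rail-0 peer set, which can be checked with a small sketch:

```python
M, gpus_per_node, planes = 64, 8, 8

# STRIDE=8: device 0's orbit steps by 8 mod 64, i.e. one device per node.
stride_orbit = {d for d in range(M) if d % gpus_per_node == 0}

# NUM_NIC_PLANES=8: NIC 0's plane is every NIC whose index is 0 mod 8.
plane_0 = {n for n in range(M) if n % planes == 0}

assert stride_orbit == plane_0 == {0, 8, 16, 24, 32, 40, 48, 56}
```

In less symmetric setups (e.g. several NICs per GPU) the two filters diverge, which is why both can be combined.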

@AtlantaPepsi AtlantaPepsi requested a review from nileshnegi April 28, 2026 16:28
Copilot AI review requested due to automatic review settings April 29, 2026 01:20
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.



Comment thread src/client/Presets/NicAllToAll.hpp Outdated
Comment thread src/client/Presets/NicAllToAll.hpp Outdated
Comment thread src/client/Presets/Presets.hpp Outdated
Comment thread CHANGELOG.md Outdated
- Make nica2a output safer and clearer by skipping with warnings when
  multiple rank groups are detected
- Surface per-rank NIC->GPU/CPU mapping when columns are mixed
- Derive bandwidth attribution directly from executor metadata
  instead of side arrays
- Update the preset description to reflect GPU/CPU endpoint mapping
@nileshnegi nileshnegi merged commit c6c3636 into ROCm:candidate Apr 29, 2026
4 checks passed
nileshnegi added a commit that referenced this pull request May 2, 2026
- Initial pod communication support (#235)
- cuda + MNNVL update & pod presets (#241)
- Increase CQ size for high qps (#244)
- fix hang when NVML is present but fabricmanager isnt (#246)
- Adding nica2a preset  (#248)
- Adding HBM read bandwidth preset (#250)
- Pod Ring preset (#251)
- gfxsweep preset (#254) (#256)
- Adding Batched DMA support (hipMemcpyBatchAsync), and bmasweep preset (#255)
- Adding a wallclock consistency detection preset (#258)
- Adding smoketest preset for simple correctness tests (#266)
- Help / envvars / presets presets (#267)
- Modernize CMake build (#268)
- Replace version-based pod/amd-smi detection with compile-time API probes (#269)
- Fix collective mismatch hangs in multi-rank error paths (#270)
- Fix SHOW_ITERATIONS table truncation with multiple transfers per executor (#271)
- Reformat a2asweep output to match gfxsweep style (#272)
- Gfx sweep update (#274)
- Increasing flush frequency in smoketest (#275)
- Adding new experimental copy-only GFX kernel, gfxsweep update (#277)
- Fixes for cuMem compilation and invalid device ordinal (#278)
- Simplifying socket connect, allow for using host address (#279)
- Updating podring to run on single node without need to force single pod (#280)
- Adding SHOW_PERCENTILES to show extra per-iteration statistics (#281)

---------

Co-authored-by: AtlantaPepsi <timhu102@gmail.com>
Co-authored-by: Pak Nin Lui <pak.lui@amd.com>
Co-authored-by: pierreantoineH <PierreAntoine.Harraud@amd.com>
Co-authored-by: Nilesh M Negi <Nilesh.Negi@amd.com>
Co-authored-by: Claude <claude@anthropic.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@nileshnegi nileshnegi mentioned this pull request May 2, 2026
1 task