Fixing NUMA checks (set_mempolicy) #303

Merged
nileshnegi merged 4 commits into ROCm:candidate from gilbertlee-amd:NumaFixes on May 15, 2026
Conversation

@gilbertlee-amd
Collaborator

Motivation

Some NUMA nodes might not be able to allocate memory (e.g., when restricted by the current process mempolicy/cpuset).
This PR adds early checks for this condition, and also changes how HSA agents are collected.
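The early check boils down to asking libnuma which nodes the current process may allocate from. A minimal standalone sketch of that query (an assumed illustration, not TransferBench code; build with -lnuma):

    // Sketch: report which configured NUMA nodes the process may allocate
    // from under its current cpuset/mempolicy context (assumed illustration).
    #include <numa.h>
    #include <cstdio>

    int main() {
      if (numa_available() < 0) {
        fprintf(stderr, "libnuma unavailable\n");
        return 1;
      }
      struct bitmask* allowed = numa_get_mems_allowed();
      for (int n = 0; n < numa_num_configured_nodes(); n++)
        printf("NUMA node %d allowed: %s\n", n,
               numa_bitmask_isbitset(allowed, n) ? "yes" : "no");
      numa_bitmask_free(allowed);
      return 0;
    }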

Contributor

Copilot AI left a comment


Pull request overview

This PR improves the robustness of NUMA-related configuration by adding early validation that the selected NUMA nodes can allocate memory under the current process policy, and it reworks how CPU HSA agents are collected, moving from allocation-based discovery to iterating HSA agents directly.

Changes:

  • Add NUMA mempolicy/cpuset-based checks during MemDevice validation to fail early when a CPU/closest-CPU NUMA node cannot allocate memory.
  • Rework CPU HSA agent discovery in System::CollectTopology() to use hsa_iterate_agents() and HSA_AGENT_INFO_NODE instead of allocating CPU memory and querying pointer ownership (a sketch of such an iteration callback follows this list).
  • Minor refactor in MEM_CPU_CLOSEST validation to store the closest NUMA index in a local variable.
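For reference, an agent-iteration callback of the shape referenced in the snippets below (cpuAgentCallback) might look roughly like this; a sketch assuming CPU agents are keyed by their HSA node id, not the PR's exact code:

    // Sketch of a callback for hsa_iterate_agents (assumed shape): record
    // each CPU agent in a map keyed by its HSA node id.
    static hsa_status_t cpuAgentCallback(hsa_agent_t agent, void* data) {
      auto* cpuAgentMap = static_cast<std::map<int, hsa_agent_t>*>(data);

      hsa_device_type_t deviceType;
      hsa_status_t status = hsa_agent_get_info(agent, HSA_AGENT_INFO_DEVICE, &deviceType);
      if (status != HSA_STATUS_SUCCESS) return status;

      if (deviceType == HSA_DEVICE_TYPE_CPU) {
        uint32_t node = 0;
        status = hsa_agent_get_info(agent, HSA_AGENT_INFO_NODE, &node);
        if (status != HSA_STATUS_SUCCESS) return status;
        (*cpuAgentMap)[static_cast<int>(node)] = agent;
      }
      return HSA_STATUS_SUCCESS;  // continue iterating over remaining agents
    }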
Comments suppressed due to low confidence (3)

src/header/TransferBench.hpp:1880

  • Same as the CPU path above: this condition is checking whether the closest NUMA node is permitted by the current process memory policy (numa_get_mems_allowed), not whether the CPU node is "equipped". Please adjust the error message to explicitly reference mempolicy/cpuset restrictions to avoid misleading users.
        if (GetRank() == memDevice.memRank && !numa_bitmask_isbitset(numa_get_mems_allowed(), actualNumaIdx))
          return {ERR_FATAL, "CPU %d on rank %d is not equipped to be able to allocate memory", actualNumaIdx, memDevice.memRank};
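One possible rewording per the comment above (illustrative only), keeping the same condition but naming the actual restriction:

    if (GetRank() == memDevice.memRank && !numa_bitmask_isbitset(numa_get_mems_allowed(), actualNumaIdx))
      return {ERR_FATAL, "NUMA node %d on rank %d is excluded by the current mempolicy/cpuset and cannot allocate memory", actualNumaIdx, memDevice.memRank};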

src/header/TransferBench.hpp:7784

  • CollectTopology now calls hsa_shut_down() immediately after hsa_iterate_agents(), but the function continues to call HSA APIs afterward (hsa_amd_pointer_info below, and hsa_agent_get_info inside GetRankTopology, which is called later in this same function). Shutting HSA down here is very likely to break those subsequent calls. Please avoid calling hsa_shut_down in CollectTopology (or ensure HSA stays initialized for the remainder of the program), and propagate/handle hsa_init/hsa_iterate_agents errors via the existing ErrResult/logging instead of printf.
      hsa_init();
      std::map<int, hsa_agent_t> cpuAgentMap;
      hsa_status_t s = hsa_iterate_agents(cpuAgentCallback, &cpuAgentMap);
      if (s != HSA_STATUS_SUCCESS) {
        const char *errString = NULL;
        hsa_status_string(s, &errString);
        printf("FAIL %s\n",errString );
      }
      hsa_shut_down();
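A sketch of the suggested fix, assuming ErrResult can be constructed as in the earlier snippet (illustrative, not the PR's code):

    // Keep HSA initialized for the rest of the program; surface failures
    // through ErrResult instead of printf.
    hsa_status_t status = hsa_init();
    if (status != HSA_STATUS_SUCCESS)
      return {ERR_FATAL, "hsa_init failed with status %d", status};

    std::map<int, hsa_agent_t> cpuAgentMap;
    status = hsa_iterate_agents(cpuAgentCallback, &cpuAgentMap);
    if (status != HSA_STATUS_SUCCESS) {
      const char* errString = nullptr;
      hsa_status_string(status, &errString);
      return {ERR_FATAL, "hsa_iterate_agents failed: %s", errString ? errString : "unknown error"};
    }
    // No hsa_shut_down() here: hsa_amd_pointer_info below and
    // hsa_agent_get_info in GetRankTopology still need the runtime.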

src/header/TransferBench.hpp:7793

  • cpuAgents is resized and then only conditionally filled from cpuAgentMap; any missing entry leaves a default-initialized hsa_agent_t (handle==0). Later code assumes cpuAgents entries are valid (e.g., closest-CPU detection and DMA engine validation), so this can lead to incorrect topology or failures. Please either (a) build cpuAgents only for nodes you can actually map (and update the reported CPU executor count accordingly), or (b) detect missing mappings and return/record a fatal error instead of silently leaving invalid agents.
      cpuAgents.clear();
      int numCpus = numa_num_configured_nodes();
      cpuAgents.resize(numCpus);
      for (int i = 0; i < numCpus; i++) {
        if (cpuAgentMap.count(i)) {
          cpuAgents[i] = cpuAgentMap[i];
        }
      }
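A sketch of option (b) from the comment above (illustrative): fail fast when a configured NUMA node has no matching CPU agent, so no default-initialized hsa_agent_t (handle == 0) survives into later topology code.

    cpuAgents.clear();
    int numCpus = numa_num_configured_nodes();
    cpuAgents.resize(numCpus);
    for (int i = 0; i < numCpus; i++) {
      if (!cpuAgentMap.count(i))
        return {ERR_FATAL, "No CPU HSA agent found for NUMA node %d", i};
      cpuAgents[i] = cpuAgentMap[i];
    }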


nileshnegi merged commit 24cfdc7 into ROCm:candidate on May 15, 2026
5 checks passed