
List namespaces incorrectly reporting numa node on EL8 #211

Open
tanabarr opened this issue Jul 6, 2022 · 14 comments

@tanabarr

tanabarr commented Jul 6, 2022

numa_node is reported incorrectly (always zero) in ndctl list -v output on an IceLake platform:

[jenkins@wolf-216 ~]$ sudo ndctl list -v
[
  {
    "dev":"namespace1.0",
    "mode":"fsdax",
    "map":"dev",
    "size":1065418227712,
    "uuid":"dc45a28b-c1f9-400f-8069-81222192f352",
    "raw_uuid":"af106459-315d-4184-a14c-d238b4404824",
    "sector_size":512,
    "align":2097152,
    "blockdev":"pmem1",
    "numa_node":0,
    "target_node":3
  },
  {
    "dev":"namespace0.0",
    "mode":"fsdax",
    "map":"dev",
    "size":1065418227712,
    "uuid":"45e7b01f-6e15-490f-8e24-448b0a391d7e",
    "raw_uuid":"3cd9b393-927a-4ac4-a11c-6e46f6512249",
    "sector_size":512,
    "align":2097152,
    "blockdev":"pmem0",
    "numa_node":0,
    "target_node":2
  }
]
[jenkins@wolf-216 ~]$ sudo ndctl  --version
71.1
[jenkins@wolf-216 ~]$ uname -a
Linux wolf-216.wolf.hpdd.intel.com 4.18.0-348.23.1.el8_5.x86_64 #1 SMP Wed Apr 27 15:32:52 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
[jenkins@wolf-216 ~]$ cat /etc/os-release
NAME="Rocky Linux"
VERSION="8.6 (Green Obsidian)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="8.6"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Rocky Linux 8.6 (Green Obsidian)"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:rocky:rocky:8:GA"
HOME_URL="https://rockylinux.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
ROCKY_SUPPORT_PRODUCT="Rocky Linux"
ROCKY_SUPPORT_PRODUCT_VERSION="8"
REDHAT_SUPPORT_PRODUCT="Rocky Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8"
[jenkins@wolf-216 ~]$ cat /sys/bus/nd/devices/namespace*/numa_node
0
0
0
0
[jenkins@wolf-216 ~]$ sudo ndctl list -R
[
  {
    "dev":"region1",
    "size":1082331758592,
    "align":16777216,
    "available_size":0,
    "max_available_extent":0,
    "type":"pmem",
    "iset_id":8805981070158139664,
    "persistence_domain":"memory_controller"
  },
  {
    "dev":"region0",
    "size":1082331758592,
    "align":16777216,
    "available_size":0,
    "max_available_extent":0,
    "type":"pmem",
    "iset_id":3353810769403187472,
    "persistence_domain":"memory_controller"
  }
]
[jenkins@wolf-216 ~]$ numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77
node 0 size: 128334 MB
node 0 free: 122851 MB
node 1 cpus: 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103
node 1 size: 129010 MB
node 1 free: 127653 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10

The regression is not seen on a Leap 15 (CascadeLake) host with the same version of ndctl:

tanabarr@wolf-157:~> sudo ndctl list -v
[
  {
    "dev":"namespace1.0",
    "mode":"fsdax",
    "map":"dev",
    "size":3183575302144,
    "uuid":"c4646c21-9268-4282-812e-d7cfad732565",
    "raw_uuid":"eec21224-e28d-41f4-99c7-8d430a384826",
    "sector_size":512,
    "align":2097152,
    "blockdev":"pmem1",
    "numa_node":1,
    "target_node":3
  },
  {
    "dev":"namespace0.0",
    "mode":"fsdax",
    "map":"dev",
    "size":3183575302144,
    "uuid":"6f911e4f-90c9-475f-bcf9-1fa209f6026f",
    "raw_uuid":"2240f9ca-4410-4d3a-94d8-92b8d8f54815",
    "sector_size":512,
    "align":2097152,
    "blockdev":"pmem0",
    "numa_node":0,
    "target_node":2
  }
]
tanabarr@wolf-157:~> sudo ndctl --version
71.1
tanabarr@wolf-157:~> uname -a
Linux wolf-157 5.3.18-57-default #1 SMP Wed Apr 28 10:54:41 UTC 2021 (ba3c2e9) x86_64 x86_64 x86_64 GNU/Linux
tanabarr@wolf-157:~> sudo cat /etc/os-release
NAME="openSUSE Leap"
VERSION="15.3"
ID="opensuse-leap"
ID_LIKE="suse opensuse"
VERSION_ID="15.3"
PRETTY_NAME="openSUSE Leap 15.3"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:opensuse:leap:15.3"
BUG_REPORT_URL="https://bugs.opensuse.org"
HOME_URL="https://www.opensuse.org/"
tanabarr@wolf-157:~> cat /sys/bus/nd/devices/namespace*/numa_node
0
0
1
1

This is similar to a regression seen in CentOS 7 and fixed in CentOS 8: #130

@sscargal

sscargal commented Jul 6, 2022

This appears to be a duplicate of intel/ipmctl#156, reported in the ipmctl GitHub issues. That issue contains more data we should compare against this one to confirm it's the same underlying problem. Namely:

ipmctl show -system > ipmctl_show_-system.out  

Look at the ProximityDomain values within the NFIT sections to see if they're zero (0x0) for all regions. If so, this is a BIOS bug/regression.
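A quick way to check this is to grep the saved dump for ProximityDomain lines. The sample file below mimics two SpaRange entries from ipmctl show -system (values taken from the output later in this thread); on a real system, run the grep against your actual dump instead.

```shell
# Create a sample NFIT excerpt standing in for a real ipmctl dump.
cat > ipmctl_show_-system.out <<'EOF'
      SpaRangeDescriptionTableIndex: 0x1
      ProximityDomain: 0x3
      SpaRangeDescriptionTableIndex: 0x2
      ProximityDomain: 0x5
EOF
# All-zero (0x0) ProximityDomain values across regions indicate a BIOS bug.
grep -o 'ProximityDomain: 0x[0-9a-fA-F]*' ipmctl_show_-system.out
```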

@tanabarr
Author

Relevant output from ipmctl show -system showing the ProximityDomain values:

   ---TableType=0x0
      Length: 56 bytes
      TypeEquals: SpaRange
      AddressRangeType: 66f0d379-b4f3-4074-ac43-0d3318b78cdb
      SpaRangeDescriptionTableIndex: 0x1
      Flags: 0x2
      ProximityDomain: 0x3
      SystemPhysicalAddressRangeBase: 0x4080000000
      SystemPhysicalAddressRangeLength: 0xfc00000000
      MemoryMappingAttribute: 0x8008

   ---TableType=0x0
      Length: 56 bytes
      TypeEquals: SpaRange
      AddressRangeType: 66f0d379-b4f3-4074-ac43-0d3318b78cdb
      SpaRangeDescriptionTableIndex: 0x2
      Flags: 0x2
      ProximityDomain: 0x5
      SystemPhysicalAddressRangeBase: 0x13c80000000
      SystemPhysicalAddressRangeLength: 0xfc00000000
      MemoryMappingAttribute: 0x8008

@tanabarr
Author

I also have an almost identical IceLake system, which maps region0 to NUMA 1 and region1 to NUMA 0:

[root@wolf-220 ~]# ndctl list -Rv
{
  "regions":[
    {
      "dev":"region1",
      "size":1082331758592,
      "align":16777216,
      "available_size":0,
      "max_available_extent":0,
      "type":"pmem",
      "numa_node":0,
      "target_node":3,
      "iset_id":5581403737260495120,
      "persistence_domain":"memory_controller",
      "namespaces":[
        {
          "dev":"namespace1.0",
          "mode":"fsdax",
          "map":"dev",
          "size":1065418227712,
          "uuid":"bba603dd-ffb7-42f2-b1c9-8dedb82426f0",
          "raw_uuid":"4a85e25f-7a07-4a23-8ed2-4c5999b3d94d",
          "sector_size":512,
          "align":2097152,
          "blockdev":"pmem1",
          "numa_node":0,
          "target_node":3
        }
      ]
    },
    {
      "dev":"region0",
      "size":1082331758592,
      "align":16777216,
      "available_size":0,
      "max_available_extent":0,
      "type":"pmem",
      "numa_node":1,
      "target_node":2,
      "iset_id":-5133785675551928048,
      "persistence_domain":"memory_controller",
      "namespaces":[
        {
          "dev":"namespace0.0",
          "mode":"fsdax",
          "map":"dev",
          "size":1065418227712,
          "uuid":"aef43ead-4f0f-4756-b135-b3fee4980def",
          "raw_uuid":"85b60df5-3d89-4c4b-bf4e-2adbaa7ea75b",
          "sector_size":512,
          "align":2097152,
          "blockdev":"pmem0",
          "numa_node":1,
          "target_node":2
        }
      ]
    }
  ]
}

Any suggestions on how to resolve this would be welcome.

@hramrach
Contributor

Maybe you could at least try the current ndctl?

https://build.opensuse.org/package/binaries/hardware:nvdimm/ndctl/openSUSE_Leap_15.3

@tanabarr
Author

Maybe you could at least try the current ndctl?

I am on the latest version provided by the distro I am running:

[root@wolf-220 ~]# ndctl -v
71.1
[root@wolf-220 ~]# cat /etc/os-release
NAME="Rocky Linux"
VERSION="8.6 (Green Obsidian)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="8.6"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Rocky Linux 8.6 (Green Obsidian)"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:rocky:rocky:8:GA"
HOME_URL="https://rockylinux.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
ROCKY_SUPPORT_PRODUCT="Rocky Linux"
ROCKY_SUPPORT_PRODUCT_VERSION="8"
REDHAT_SUPPORT_PRODUCT="Rocky Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8"
[root@wolf-220 ~]# dnf update ndctl
Last metadata expiration check: 2:43:42 ago on Tue 26 Jul 2022 07:06:58 PM UTC.
Dependencies resolved.
Nothing to do.
Complete!

I will try to install a later version by building from source.

@hramrach
Contributor

Right, you mentioned Leap, but I missed that it was the working version.

In that case it likely works because of some backports or a different build configuration, and building the current version from source should get you somewhere either way.

@tanabarr
Author

I re-provisioned the problem IceLake host with Leap 15.3 and upgraded ipmctl to v03 and ndctl to v73 (exact versions below). The problem persists (pmem0 on numa1 and pmem1 on numa0):

wolf-220:~ # ipmctl show -o nvmxml -a -region
<?xml version="1.0"?>
 <RegionList>
  <Region>
   <SocketID>0x0000</SocketID>
   <PersistentMemoryType>AppDirect</PersistentMemoryType>
   <Capacity>1008.000 GiB</Capacity>
   <FreeCapacity>1008.000 GiB</FreeCapacity>
   <HealthState>Healthy</HealthState>
   <DimmID>0x0001, 0x0011, 0x0101, 0x0111, 0x0201, 0x0211, 0x0301, 0x0311</DimmID>
   <RegionID>0x0001</RegionID>
   <ISetID>0xb8c12120c7bd1110</ISetID>
  </Region>
  <Region>
   <SocketID>0x0001</SocketID>
   <PersistentMemoryType>AppDirect</PersistentMemoryType>
   <Capacity>1008.000 GiB</Capacity>
   <FreeCapacity>1008.000 GiB</FreeCapacity>
   <HealthState>Healthy</HealthState>
   <DimmID>0x1001, 0x1011, 0x1101, 0x1111, 0x1201, 0x1211, 0x1301, 0x1311</DimmID>
   <RegionID>0x0002</RegionID>
   <ISetID>0x4d752120a3731110</ISetID>
  </Region>
 </RegionList>
wolf-220:~ # ndctl list -Rv
[
  {
    "dev":"region1",
    "size":1082331758592,
    "align":16777216,
    "available_size":1082331758592,
    "max_available_extent":1082331758592,
    "type":"pmem",
    "numa_node":0,
    "target_node":3,
    "iset_id":5581403737260495120,
    "persistence_domain":"memory_controller"
  },
  {
    "dev":"region0",
    "size":1082331758592,
    "align":16777216,
    "available_size":1082331758592,
    "max_available_extent":1082331758592,
    "type":"pmem",
    "numa_node":1,
    "target_node":2,
    "iset_id":-5133785675551928048,
    "persistence_domain":"memory_controller"
  }
]
wolf-220:~ # ndctl version
73
wolf-220:~ # ipmctl version
Intel(R) Optane(TM) Persistent Memory Command Line Interface Version 03.00.00.0423
wolf-220:~/leap15 # ipmctl create -f -goal MemoryMode=100
 SocketID | DimmID | MemorySize  | AppDirect1Size | AppDirect2Size
===================================================================
 0x0000   | 0x0001 | 126.000 GiB | 0.000 GiB      | 0.000 GiB
 0x0000   | 0x0011 | 126.000 GiB | 0.000 GiB      | 0.000 GiB
 0x0000   | 0x0101 | 126.000 GiB | 0.000 GiB      | 0.000 GiB
 0x0000   | 0x0111 | 126.000 GiB | 0.000 GiB      | 0.000 GiB
 0x0000   | 0x0201 | 126.000 GiB | 0.000 GiB      | 0.000 GiB
 0x0000   | 0x0211 | 126.000 GiB | 0.000 GiB      | 0.000 GiB
 0x0000   | 0x0301 | 126.000 GiB | 0.000 GiB      | 0.000 GiB
 0x0000   | 0x0311 | 126.000 GiB | 0.000 GiB      | 0.000 GiB
 0x0001   | 0x1001 | 126.000 GiB | 0.000 GiB      | 0.000 GiB
 0x0001   | 0x1011 | 126.000 GiB | 0.000 GiB      | 0.000 GiB
 0x0001   | 0x1101 | 126.000 GiB | 0.000 GiB      | 0.000 GiB
 0x0001   | 0x1111 | 126.000 GiB | 0.000 GiB      | 0.000 GiB
 0x0001   | 0x1201 | 126.000 GiB | 0.000 GiB      | 0.000 GiB
 0x0001   | 0x1211 | 126.000 GiB | 0.000 GiB      | 0.000 GiB
 0x0001   | 0x1301 | 126.000 GiB | 0.000 GiB      | 0.000 GiB
 0x0001   | 0x1311 | 126.000 GiB | 0.000 GiB      | 0.000 GiB
A reboot is required to process new memory allocation goals.
wolf-220:~/leap15 # reboot
...
wolf-220:~ # ipmctl show -o nvmxml -a -region
<?xml version="1.0"?>
<Results>
<Result>
There are no Regions defined in the system.
</Result>
</Results>
wolf-220:~ # ipmctl create -f -goal PersistentMemoryType=AppDirect
 SocketID | DimmID | MemorySize | AppDirect1Size | AppDirect2Size
==================================================================
 0x0000   | 0x0001 | 0.000 GiB  | 126.000 GiB    | 0.000 GiB
 0x0000   | 0x0101 | 0.000 GiB  | 126.000 GiB    | 0.000 GiB
 0x0000   | 0x0201 | 0.000 GiB  | 126.000 GiB    | 0.000 GiB
 0x0000   | 0x0301 | 0.000 GiB  | 126.000 GiB    | 0.000 GiB
 0x0000   | 0x0011 | 0.000 GiB  | 126.000 GiB    | 0.000 GiB
 0x0000   | 0x0111 | 0.000 GiB  | 126.000 GiB    | 0.000 GiB
 0x0000   | 0x0211 | 0.000 GiB  | 126.000 GiB    | 0.000 GiB
 0x0000   | 0x0311 | 0.000 GiB  | 126.000 GiB    | 0.000 GiB
 0x0001   | 0x1001 | 0.000 GiB  | 126.000 GiB    | 0.000 GiB
 0x0001   | 0x1101 | 0.000 GiB  | 126.000 GiB    | 0.000 GiB
 0x0001   | 0x1201 | 0.000 GiB  | 126.000 GiB    | 0.000 GiB
 0x0001   | 0x1301 | 0.000 GiB  | 126.000 GiB    | 0.000 GiB
 0x0001   | 0x1011 | 0.000 GiB  | 126.000 GiB    | 0.000 GiB
 0x0001   | 0x1111 | 0.000 GiB  | 126.000 GiB    | 0.000 GiB
 0x0001   | 0x1211 | 0.000 GiB  | 126.000 GiB    | 0.000 GiB
 0x0001   | 0x1311 | 0.000 GiB  | 126.000 GiB    | 0.000 GiB
A reboot is required to process new memory allocation goals.
wolf-220:~ # reboot
...
wolf-220:~ # ipmctl show -a -o nvmxml -region
<?xml version="1.0"?>
 <RegionList>
  <Region>
   <SocketID>0x0000</SocketID>
   <PersistentMemoryType>AppDirect</PersistentMemoryType>
   <Capacity>1008.000 GiB</Capacity>
   <FreeCapacity>1008.000 GiB</FreeCapacity>
   <HealthState>Healthy</HealthState>
   <DimmID>0x0001, 0x0011, 0x0101, 0x0111, 0x0201, 0x0211, 0x0301, 0x0311</DimmID>
   <RegionID>0x0001</RegionID>
   <ISetID>0xb8c12120c7bd1110</ISetID>
  </Region>
  <Region>
   <SocketID>0x0001</SocketID>
   <PersistentMemoryType>AppDirect</PersistentMemoryType>
   <Capacity>1008.000 GiB</Capacity>
   <FreeCapacity>1008.000 GiB</FreeCapacity>
   <HealthState>Healthy</HealthState>
   <DimmID>0x1001, 0x1011, 0x1101, 0x1111, 0x1201, 0x1211, 0x1301, 0x1311</DimmID>
   <RegionID>0x0002</RegionID>
   <ISetID>0x4d752120a3731110</ISetID>
  </Region>
 </RegionList>
wolf-220:~ # ndctl list -Rv
[
  {
    "dev":"region1",
    "size":1082331758592,
    "align":16777216,
    "available_size":1082331758592,
    "max_available_extent":1082331758592,
    "type":"pmem",
    "numa_node":0,
    "target_node":3,
    "iset_id":5581403737260495120,
    "persistence_domain":"memory_controller"
  },
  {
    "dev":"region0",
    "size":1082331758592,
    "align":16777216,
    "available_size":1082331758592,
    "max_available_extent":1082331758592,
    "type":"pmem",
    "numa_node":1,
    "target_node":2,
    "iset_id":-5133785675551928048,
    "persistence_domain":"memory_controller"
  }
]
wolf-220:~ # ndctl create-namespace --continue
{
  "dev":"namespace1.0",
  "mode":"fsdax",
  "map":"dev",
  "size":"992.25 GiB (1065.42 GB)",
  "uuid":"f630e26a-c1b2-445d-aeb5-75c747ff343c",
  "sector_size":512,
  "align":2097152,
  "blockdev":"pmem1"
}
{
  "dev":"namespace0.0",
  "mode":"fsdax",
  "map":"dev",
  "size":"992.25 GiB (1065.42 GB)",
  "uuid":"cbb9d287-68d6-4df3-8f3c-e526e8535630",
  "sector_size":512,
  "align":2097152,
  "blockdev":"pmem0"
}
created 2 namespaces
wolf-220:~ # ndctl list -Nv
[
  {
    "dev":"namespace1.0",
    "mode":"fsdax",
    "map":"dev",
    "size":1065418227712,
    "uuid":"f630e26a-c1b2-445d-aeb5-75c747ff343c",
    "raw_uuid":"185123bb-f5af-4a06-9a19-593cf7c3285f",
    "sector_size":512,
    "align":2097152,
    "blockdev":"pmem1",
    "numa_node":0,
    "target_node":3
  },
  {
    "dev":"namespace0.0",
    "mode":"fsdax",
    "map":"dev",
    "size":1065418227712,
    "uuid":"cbb9d287-68d6-4df3-8f3c-e526e8535630",
    "raw_uuid":"7f771db8-f1ec-4bdc-bcef-a33f4ad08fd3",
    "sector_size":512,
    "align":2097152,
    "blockdev":"pmem0",
    "numa_node":1,
    "target_node":2
  }
]

@sscargal any further ideas?

@hramrach
Contributor

The numa_node is no longer zero, so the reported problem is solved, right?

@tanabarr
Author

Two systems have been mentioned: one has both pmem devices mapped to node zero (wolf-216), and the other has pmem1 mapped to numa0 and pmem0 mapped to numa1 (wolf-220). Neither issue has been resolved.

@hramrach
Contributor

I don't see a problem with pmem1 mapped to numa0: the device order is not guaranteed.

As for both regions being mapped to numa0, you did not provide a log from the problem system with the current ndctl.

@tanabarr
Author

tanabarr commented Aug 8, 2022

I don't see a problem with pmem1 mapped to numa0 - the device order is not guaranteed.

As for both regions mapped to numa0 you did not provide log from the problem system with the current ndctl.

Thanks for clarifying the device-order issue. I haven't had access to the problem system that assigned both regions to numa0; I will try to get access to it and test that today.

@tanabarr
Author

I got access to the system, re-provisioned it with Leap 15.3, and updated ipmctl and ndctl to the latest versions. After removing regions, rebooting, creating regions, and rebooting again, the same issue persists where both regions are on NUMA node 0:

wolf-216:~ # ipmctl version
Intel(R) Optane(TM) Persistent Memory Command Line Interface Version 03.00.00.0423
wolf-216:~ # ipmctl show -a -region
---ISetID=0x2e8b212022281110---
   SocketID=0x0000
   PersistentMemoryType=AppDirect
   Capacity=1008.000 GiB
   FreeCapacity=1008.000 GiB
   HealthState=Healthy
   DimmID=0x0001, 0x0011, 0x0101, 0x0111, 0x0201, 0x0211, 0x0301, 0x0311
---ISetID=0x7a35212091971110---
   SocketID=0x0001
   PersistentMemoryType=AppDirect
   Capacity=1008.000 GiB
   FreeCapacity=1008.000 GiB
   HealthState=Healthy
   DimmID=0x1001, 0x1011, 0x1101, 0x1111, 0x1201, 0x1211, 0x1301, 0x1311
wolf-216:~ # ndctl version
73
wolf-216:~ # ndctl list -Rv
[
  {
    "dev":"region1",
    "size":1082331758592,
    "align":16777216,
    "available_size":1082331758592,
    "max_available_extent":1082331758592,
    "type":"pmem",
    "numa_node":0,
    "target_node":3,
    "iset_id":8805981070158139664,
    "persistence_domain":"memory_controller"
  },
  {
    "dev":"region0",
    "size":1082331758592,
    "align":16777216,
    "available_size":1082331758592,
    "max_available_extent":1082331758592,
    "type":"pmem",
    "numa_node":0,
    "target_node":2,
    "iset_id":3353810769403187472,
    "persistence_domain":"memory_controller"
  }
]
wolf-216:~ # ndctl create-namespace --continue
{
  "dev":"namespace1.0",
  "mode":"fsdax",
  "map":"dev",
  "size":"992.25 GiB (1065.42 GB)",
  "uuid":"0fcea6c3-ea89-40a2-bc52-fdfaab8794b0",
  "sector_size":512,
  "align":2097152,
  "blockdev":"pmem1"
}
{
  "dev":"namespace0.0",
  "mode":"fsdax",
  "map":"dev",
  "size":"992.25 GiB (1065.42 GB)",
  "uuid":"437ba6ea-94ac-4ec4-97be-b130d43057aa",
  "sector_size":512,
  "align":2097152,
  "blockdev":"pmem0"
}
created 2 namespaces
wolf-216:~ # ndctl list -Rv
{
  "regions":[
    {
      "dev":"region1",
      "size":1082331758592,
      "align":16777216,
      "available_size":0,
      "max_available_extent":0,
      "type":"pmem",
      "numa_node":0,
      "target_node":3,
      "iset_id":8805981070158139664,
      "persistence_domain":"memory_controller",
      "namespaces":[
        {
          "dev":"namespace1.0",
          "mode":"fsdax",
          "map":"dev",
          "size":1065418227712,
          "uuid":"0fcea6c3-ea89-40a2-bc52-fdfaab8794b0",
          "raw_uuid":"717d3546-68e9-4e1f-a52e-dd3acdc91af8",
          "sector_size":512,
          "align":2097152,
          "blockdev":"pmem1",
          "numa_node":0,
          "target_node":3
        }
      ]
    },
    {
      "dev":"region0",
      "size":1082331758592,
      "align":16777216,
      "available_size":0,
      "max_available_extent":0,
      "type":"pmem",
      "numa_node":0,
      "target_node":2,
      "iset_id":3353810769403187472,
      "persistence_domain":"memory_controller",
      "namespaces":[
        {
          "dev":"namespace0.0",
          "mode":"fsdax",
          "map":"dev",
          "size":1065418227712,
          "uuid":"437ba6ea-94ac-4ec4-97be-b130d43057aa",
          "raw_uuid":"0d630002-174a-4c0b-8eca-c4c900b857b0",
          "sector_size":512,
          "align":2097152,
          "blockdev":"pmem0",
          "numa_node":0,
          "target_node":2
        }
      ]
    }
  ]
}
wolf-216:~ # numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77
node 0 size: 128593 MB
node 0 free: 127129 MB
node 1 cpus: 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103
node 1 size: 129008 MB
node 1 free: 127889 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10
wolf-216:~ # uname -a
Linux wolf-216 5.3.18-150300.59.87-default #1 SMP Thu Jul 21 14:31:28 UTC 2022 (cc90276) x86_64 x86_64 x86_64 GNU/Linux

@tanabarr
Author

This is not directly related to intel/ipmctl#156, as the ProximityDomain field values are nonzero:

   ---TableType=0x0
      Length: 56 bytes
      TypeEquals: SpaRange
      AddressRangeType: 66f0d379-b4f3-4074-ac43-0d3318b78cdb
      SpaRangeDescriptionTableIndex: 0x1
      Flags: 0x2
      ProximityDomain: 0x3
      SystemPhysicalAddressRangeBase: 0x4080000000
      SystemPhysicalAddressRangeLength: 0xfc00000000
      MemoryMappingAttribute: 0x8008

   ---TableType=0x0
      Length: 56 bytes
      TypeEquals: SpaRange
      AddressRangeType: 66f0d379-b4f3-4074-ac43-0d3318b78cdb
      SpaRangeDescriptionTableIndex: 0x2
      Flags: 0x2
      ProximityDomain: 0x5
      SystemPhysicalAddressRangeBase: 0x13c80000000
      SystemPhysicalAddressRangeLength: 0xfc00000000
      MemoryMappingAttribute: 0x8008

@hramrach
Contributor

The numa_node value is simply read from sysfs:

        sprintf(path, "%s/numa_node", ndns_base);
        if (sysfs_read_attr(ctx, path, buf) == 0)
                ndns->numa_node = strtol(buf, NULL, 0);
        else
                ndns->numa_node = -1;

find /sys -name numa_node -exec echo -n {}: "" \; -exec cat {} \;
/sys/devices/ndbus0/region0/namespace0.0/numa_node: 0
/sys/devices/ndbus0/region0/dax0.0/numa_node: 0
/sys/devices/ndbus0/region0/numa_node: 0
/sys/devices/ndbus0/region0/pfn0.0/numa_node: 0
/sys/devices/ndbus0/region0/btt0.0/numa_node: 0
