
irqbalance not working on pa-risc? #159

Closed
paride opened this issue Aug 25, 2020 · 32 comments
Comments

@paride
Contributor

paride commented Aug 25, 2020

Hi, maintainer of the irqbalance Debian package here. The package currently ships irqbalance 1.6.0 with the following patch applied (not written by me):

Description: fix FTBFS on hppa
 Upstream irqbalance fails to build on hppa because of two reasons:
 .
 1. irqbalance fails to correctly detect the actual CPU count. This is because
    on parisc files like /sys/devices/system/cpu/cpu0/online don't exist;
    instead /sys/devices/system/cpu/cpu0/hotplug/state needs to be examined.
 .
 2. On newer kernels you can't echo 0xfffffff into the files like
    /proc/irq/100/smp_affinity. This returns EOVERFLOW on newer kernels, which
    is probably why it fails on parisc (we run latest kernels on the buildd),
    while other architectures have older kernels.
 .
 This patch fixes both issues.
Author: Helge Deller <deller@gmx.de>
Bug-Debian: https://bugs.debian.org/919204

diff -up ./activate.c.org ./activate.c
--- ./activate.c.org	2018-12-29 11:38:19.399024158 +0100
+++ ./activate.c	2018-12-29 11:49:51.929217483 +0100
@@ -88,6 +88,9 @@ static void activate_mapping(struct irq_
 	if (!file)
 		return;
 
+	/* mask only possible cpus, otherwise writing to procfs returns EOVERFLOW */
+	cpus_and(applied_mask, applied_mask, cpu_possible_map);
+
 	cpumask_scnprintf(buf, PATH_MAX, applied_mask);
 	fprintf(file, "%s", buf);
 	fclose(file);
diff -up ./classify.c.org ./classify.c
diff -up ./cputree.c.org ./cputree.c
--- ./cputree.c.org	2018-12-29 03:32:26.269546669 +0100
+++ ./cputree.c	2018-12-29 11:28:06.316150924 +0100
@@ -259,6 +259,10 @@ static void do_one_cpu(char *path)
 	/* skip offline cpus */
 	snprintf(new_path, ADJ_SIZE(path,"/online"), "%s/online", path);
 	file = fopen(new_path, "r");
+	if (!file) {
+		snprintf(new_path, ADJ_SIZE(path,"/hotplug/state"), "%s/hotplug/state", path);
+		file = fopen(new_path, "r");
+	}
 	if (file) {
 		char *line = NULL;
 		size_t size = 0;

I am trying to understand whether these issues are still present in v1.7.0 and, if so, whether the patch should be picked up in this git repo. IIUC there are two issues:

  1. Files /sys/devices/system/cpu/cpu0/online are missing on PA-RISC systems. This indeed seems to be the case:
paride@panama:~$ uname -a
Linux panama 5.7.0-2-parisc64 #1 SMP Debian 5.7.10-1 (2020-07-26) parisc64 GNU/Linux
paride@panama:~$ ls -l /sys/devices/system/cpu/cpu*/online
ls: cannot access '/sys/devices/system/cpu/cpu*/online': No such file or directory

Should irqbalance fall back to /sys/devices/system/cpu/cpu0/hotplug/state in this case, as the patch does?

  2. EOVERFLOW when echoing 0xfffffff to /proc/irq/100/smp_affinity. This doesn't seem strictly related to pa-risc, but the problem doesn't happen on my x86_64 system. Perhaps this is fixed already?

I am not root on the pa-risc system I mentioned, so I can't fully test irqbalance there. Let me know if you need any other bit of information. Thanks!
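For illustration (not irqbalance code): the EOVERFLOW patch quoted above works by ANDing the applied mask with cpu_possible_map before writing, because newer kernels reject smp_affinity writes that set bits beyond the possible-CPU range. A minimal C sketch of building such a clamped mask, assuming a toy 32-bit mask rather than a real kernel-sized cpumask:

```c
#include <stdint.h>

/* Illustration only: build a mask covering just the N possible CPUs.
 * Writing a mask with bits set beyond this range (e.g. echoing
 * 0xfffffff on a 4-CPU box) is what newer kernels reject with
 * EOVERFLOW, which is why the patch clamps to cpu_possible_map. */
uint32_t possible_cpu_mask(unsigned int ncpus)
{
    if (ncpus >= 32)
        return 0xffffffffu;     /* all bits of this toy 32-bit mask */
    return (1u << ncpus) - 1u;  /* low ncpus bits set */
}
```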

@nhorman
Member

nhorman commented Aug 25, 2020

I see how this would fix both problems, and I'm ok with the EOVERFLOW fix, but I'm a bit lost on the sysfs fix. By all rights the online attribute should exist for parisc cpus (from what I can see in the kernel code, that attribute is arch agnostic), and so it should be there. Instead of papering over the problem to avoid using it, could you please look into why parisc systems don't present that file?

If you want to open a PR for the EOVERFLOW issue, I'll gladly pull that

@paride
Contributor Author

paride commented Aug 25, 2020

Thanks @nhorman, I will investigate the sysfs thing a bit more. Pinging @hdeller (author of the patch above), just in case he knows already and chimes in.

@paride
Contributor Author

paride commented Aug 26, 2020

I noticed that on the parisc system I have access to there is actually only one CPU available:

paride@panama:~$ nproc
1
paride@panama:~$ cat /proc/cpuinfo 
processor	: 0
cpu family	: PA-RISC 2.0
cpu		: PA8900 (Shortfin)
cpu MHz		: 800.002700
physical id	: 0
siblings	: 1
core id		: 0
capabilities	: os64 iopdir_fdc needs_equivalent_aliasing (0x35)
model		: 9000/800/rp3410  
model name	: Storm Peak DC- Slow Mako+
hversion	: 0x00008970
sversion	: 0x00000491
I-cache		: 65536 KB
D-cache		: 65536 KB (WB, direct mapped)
ITLB entries	: 240
DTLB entries	: 240 - shared with ITLB
bogomips	: 1594.36
software id	: 4467610952098776727

despite the kernel being SMP:

paride@panama:~$ uname -a
Linux panama 5.7.0-3-parisc64 #1 SMP Debian 5.7.17-1 (2020-08-25) parisc64 GNU/Linux

and the system apparently having multiple CPUs:

paride@panama:~$ ls -1d /sys/devices/system/cpu/cpu*
/sys/devices/system/cpu/cpu0
/sys/devices/system/cpu/cpu1
/sys/devices/system/cpu/cpu2
/sys/devices/system/cpu/cpu3
/sys/devices/system/cpu/cpu4
/sys/devices/system/cpu/cpu5
/sys/devices/system/cpu/cpu6
/sys/devices/system/cpu/cpu7

This could be the reason why the /sys/devices/system/cpu/cpu*/online files do not exist: maybe the kernel doesn't expose the online file if SMP is not actually used (just a supposition, I didn't check the code). Still investigating.

@nhorman
Member

nhorman commented Aug 26, 2020

you know what, it probably is. Given that sysfs shows 8 cpus but you only have one physical cpu, that's likely the result of sysfs getting populated based on the kernel's cpu_possible mask (generated from the NR_CPUS configuration variable, I think). And they don't list as online because they don't actually exist. As for cpu0, it's the boot processor so it always has to be online.

Given that, I think the right thing to do here is:

  1. Assume that if /sys/devices/system/cpu/cpu<N>/online doesn't exist, we should treat the cpu as unavailable for balancing
  2. The lone exception to (1) is cpu0, which always has to be online

I don't think we should rely on hotplug/state, as there is no guarantee that file will always be there either (i.e. if it's not configured into the kernel). Better to just assume cpuN is offline if the online attribute doesn't exist (where N != 0)
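The rule proposed above can be sketched as a small helper (hypothetical, not actual irqbalance code): cpu0 is assumed online unconditionally, and any other cpu counts as online only if its sysfs online attribute exists and reads 1:

```c
/* Hypothetical helper encoding the proposed rule, for illustration:
 * - cpu0 is the boot processor and never gets an online attribute,
 *   so it is always treated as online;
 * - any other cpu whose online attribute is missing is assumed
 *   unavailable for balancing;
 * - otherwise the attribute's value (0 or 1) decides. */
int cpu_assumed_online(int cpu, int online_attr_exists, int online_value)
{
    if (cpu == 0)
        return 1;               /* boot CPU: always online */
    if (!online_attr_exists)
        return 0;               /* missing attribute: assume offline */
    return online_value == 1;   /* trust the attribute when present */
}
```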

@hdeller

hdeller commented Aug 26, 2020 via email

@nhorman
Member

nhorman commented Aug 26, 2020

I'm sorry, can you clarify? I presume this is a parisc system you are looking at? If the system is truly a 4-way smp system, it would seem that cpu1, cpu2 and cpu3 should have the online attribute (cpu0 being online by default, since it's the bsp). If the attribute doesn't exist and the system is truly smp, it seems the sysfs attributes in the kernel have a bug

@hdeller

hdeller commented Aug 26, 2020 via email

@nhorman
Member

nhorman commented Aug 26, 2020

I understand what you're saying, but that really doesn't give me additional confidence in your fix:

  1. If parisc has no implementation of cpu hotplug features, I'm even more reluctant to rely on hotplug/state attributes as a fallback to the per-cpu online attribute

  2. Looking at drivers/base/cpu.c in the linux kernel, the cpu online attribute (as defined by the cpu_attrs array) doesn't appear to be gated on any hotplug feature conditional. In fact that array appears to get registered as part of the generic cpu attributes group that should get created for every registered cpu. Given that you seem to have /sys/devices/system/cpu/cpu[0,1,2,3,4...] directories created, I'm working under the assumption that cpu_dev_register_generic was called, registering each possible cpu on the system. So I'm really at a loss for why the online attribute doesn't exist.

I really think (2) is what we need to figure out here. By all rights that attribute should be there, and it isn't. Either that, or we need a solid explanation of why it doesn't exist

@hdeller

hdeller commented Aug 26, 2020 via email

@nhorman
Member

nhorman commented Aug 26, 2020

yes, as I noted here, it would seem the bsp never gets an online attribute (possibly a bug, possibly intentional, as the bsp should never be taken offline). The other non-bsp cpus will have online attributes, however, on x86 systems (or arm/power systems, as far as I'm able to tell). It's just parisc that doesn't, which seems wrong to me.

@hdeller

hdeller commented Aug 26, 2020 via email

@ppwaskie
Contributor

ppwaskie commented Aug 26, 2020 via email

@nhorman
Member

nhorman commented Aug 26, 2020

I think @ppwaskie's suggestion is a good one, at least to confirm that we now understand why and when the online attribute becomes present

As for the default state of a cpu, I still think we need to find a way to either make some assumptions about what the lack of availability means for cpu presence (i.e. for non-bsp cpus, does the lack of an online attribute imply no presence?). Alternatively, we have to find a way to definitively determine if a cpu is present or not. The topology directory is present on both arches; can you check to see if core_id is set to -1 or some such for cpus that aren't present?

@hdeller

hdeller commented Aug 26, 2020 via email

@nhorman
Member

nhorman commented Aug 26, 2020

sure you can email me at nhorman@tuxdriver.com

@ppwaskie
Contributor

ppwaskie commented Aug 26, 2020 via email

@paride
Contributor Author

paride commented Aug 26, 2020

On the 4-core parisc system we have for core_id:

paride@phantom:/sys/devices/system/cpu$ ls -1
cpu0
cpu1
cpu2
cpu3
cpu4
cpu5
cpu6
cpu7
hotplug
isolated
kernel_max
offline
online
possible
present
smt
uevent

paride@phantom:/sys/devices/system/cpu$ grep . cpu*/topology/* | grep core_id
cpu0/topology/core_id:0
cpu1/topology/core_id:1
cpu2/topology/core_id:0
cpu3/topology/core_id:1

while on the single core one we have:

paride@panama:/sys/devices/system/cpu$ ls -1
cpu0
cpu1
cpu2
cpu3
cpu4
cpu5
cpu6
cpu7
hotplug
isolated
kernel_max
offline
online
possible
present
smt
uevent

paride@panama:/sys/devices/system/cpu$ grep . cpu*/topology/* | grep core_id
cpu0/topology/core_id:0

So a non-empty core_id does seem a good indicator of presence. And it looks like a nonempty core_id combined with a missing online attribute indicates an online cpu.

@hdeller

hdeller commented Aug 26, 2020 via email

@nhorman
Member

nhorman commented Aug 26, 2020

Except on my Fedora 32 x86_64 system, I don't have a state attribute for any of my cpus, so that seems less than reliable as well

@paride
Contributor Author

paride commented Aug 26, 2020

Just bringing in my patch above again... it's an indicator as well:

But core_id could be a lower-level thing, not depending on the hotplug config option?

@hdeller

hdeller commented Aug 27, 2020 via email

@nhorman
Member

nhorman commented Aug 27, 2020

I'd really like to avoid that if I could, just to keep the code simple if possible. In that vein, I just noticed something. On my x86_64 system, there is a sysfs file /sys/devices/system/cpu/online. It offers an inclusion list in the format N-M, indicating which processors are online. Can you check your parisc system to see if it exists there as well? If so, perhaps that is a canonical way to determine which cpus are online across arches and kernel configs.

@paride
Contributor Author

paride commented Aug 27, 2020

On my x86_64 system with 8 cores (threads):

$ cat /sys/devices/system/cpu/online
0-7

On a single-core parisc machine:

paride@panama:~$ cat /sys/devices/system/cpu/online
0

On a 4-core parisc machine:

paride@phantom:~$ cat /sys/devices/system/cpu/online
0-3

https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-devices-system-cpu describes /sys/devices/system/cpu/online as "online: cpus that are online and being scheduled". The format is documented here: https://www.kernel.org/doc/Documentation/admin-guide/cputopology.rst. It really seems the right thing to look at.
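That cpulist format (ranges like "0-7" and comma-separated entries like "0,8,16") is simple to parse; a sketch in C, illustrative only, using a 64-bit mask rather than a real kernel-sized cpumask:

```c
#include <stdint.h>
#include <stdio.h>

/* Parse a kernel "cpulist" string such as "0-3" or "0,8,16" into a
 * 64-bit mask. Illustration only: real cpumasks can be far wider, and
 * irqbalance's actual parser may differ. Bits >= 64 are ignored here. */
uint64_t parse_cpulist(const char *list)
{
    uint64_t mask = 0;
    const char *p = list;

    while (*p) {
        int lo, hi, n = 0;

        if (sscanf(p, "%d-%d%n", &lo, &hi, &n) == 2) {
            /* "lo-hi" range entry */
        } else if (sscanf(p, "%d%n", &lo, &n) == 1) {
            hi = lo;            /* single-cpu entry */
        } else {
            break;              /* malformed input: stop parsing */
        }
        for (int c = lo; c <= hi && c < 64; c++)
            mask |= 1ULL << c;
        p += n;
        if (*p == ',')
            p++;                /* continue with the next entry */
        else
            break;
    }
    return mask;
}
```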

@paride
Contributor Author

paride commented Aug 27, 2020

On a PPC64 machine with SMT disabled:

$ cat /sys/devices/system/cpu/online 
0,8,16,24,32,40,48,56,64,72,80,88,96,104,112,120,128,136,144,152

Tested also on arm64 and s390x, it's always consistent.

@nhorman
Member

nhorman commented Aug 27, 2020

Ok, that's excellent news. I think we have parsing code for that format of file as well, and it sounds like that specific online file is cross-arch and cross-config. I'll write up a patch today

nhorman pushed a commit that referenced this issue Aug 27, 2020
#159 recently brought to our attention that online cpu status isn't functional on all
arches.  Specifically on parisc, the availability of
/sys/devices/system/cpu/cpu<N>/online is in question.  The implication here is
that it's not feasible to accurately determine cpu count, and as a result
irqbalance doesn't work on that arch

Fix it by changing our online detection strategy.  The file
/sys/devices/system/cpu/online is a cpulist-format file that seems to be present
across all arches and configs.  As such, we can use this file to determine
online status per cpu reliably.

Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
@nhorman
Member

nhorman commented Aug 27, 2020

https://github.com/Irqbalance/irqbalance/tree/cpuonline

Everyone give that a shot, and let me know if it works for you. It seems to work well on my x86_64 system

@hdeller

hdeller commented Aug 28, 2020

This is what I get on the 4-way PARISC box.
Let me know if I should test differently.

root@phantom:/home/deller/git/irqbalance# ./irqbalance -f -d -o
This machine seems not NUMA capable.
Isolated CPUs: 00000000
Adaptive-ticks CPUs: 00000000
Banned CPUs: 00000000
Package 1: numa_node -1 cpu mask is 0000000c (load 0)
Cache domain 0: numa_node is -1 cpu mask is 00000008 (load 0)
CPU number 3 numa_node is -1 (load 0)
Cache domain 2: numa_node is -1 cpu mask is 00000004 (load 0)
CPU number 2 numa_node is -1 (load 0)
Package 0: numa_node -1 cpu mask is 00000003 (load 0)
Cache domain 1: numa_node is -1 cpu mask is 00000002 (load 0)
CPU number 1 numa_node is -1 (load 0)
Cache domain 3: numa_node is -1 cpu mask is 00000001 (load 0)
CPU number 0 numa_node is -1 (load 0)
Adding IRQ 72 to database
Adding IRQ 66 to database
Adding IRQ 69 to database
Adding IRQ 71 to database
Adding IRQ 67 to database
Adding IRQ 70 to database
Adding IRQ 68 to database
Adding IRQ 73 to database
Adding IRQ 64 to database
Adding IRQ 65 to database
Adding IRQ 74 to database
NUMA NODE NUMBER: -1
LOCAL CPU MASK: ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff

@nhorman
Member

nhorman commented Aug 28, 2020

can you attach the entire log? We want to see if we get 4 cpus that we can balance to

@nhorman
Member

nhorman commented Aug 28, 2020

Actually, scratch that: the Cache domain dump seems to have correctly shown 4 unique cpu masks, so I think this is working. I'll merge it shortly

@nhorman
Member

nhorman commented Aug 28, 2020

Fixed as per #163

@nhorman nhorman closed this as completed Aug 28, 2020
@paride
Contributor Author

paride commented Aug 28, 2020

A bit late as the PR landed already, but here is the output on a POWER9 machine with SMT disabled.

This is a bug I'm really glad I reported, thanks to both of you!

$ ./irqbalance -f -d -o
This machine seems not NUMA capable.
Irqbalance hasn't been executed under root privileges, thus it won't in fact balance interrupts.
Isolated CPUs: 00000000
Adaptive-ticks CPUs: 00000000
Banned CPUs: 00000000
Package 16:  numa_node -1 cpu mask is 00010101,01010000,00000000,00000000 (load 0)
        Cache domain 0:  numa_node is -1 cpu mask is 01000000,00000000,00000000  (load 0) 
                CPU number 88  numa_node is -1 (load 0)
        Cache domain 2:  numa_node is -1 cpu mask is 00000001,00000000,00000000,00000000  (load 0) 
                CPU number 96  numa_node is -1 (load 0)
        Cache domain 5:  numa_node is -1 cpu mask is 00000100,00000000,00000000,00000000  (load 0) 
                CPU number 104  numa_node is -1 (load 0)
        Cache domain 7:  numa_node is -1 cpu mask is 00010000,00000000,00000000,00000000  (load 0) 
                CPU number 112  numa_node is -1 (load 0)
        Cache domain 13:  numa_node is -1 cpu mask is 00010000,00000000,00000000  (load 0) 
                CPU number 80  numa_node is -1 (load 0)
Package 17:  numa_node -1 cpu mask is 01010101,01000000,00000000,00000000,00000000 (load 0)
        Cache domain 1:  numa_node is -1 cpu mask is 00010000,00000000,00000000,00000000,00000000  (load 0) 
                CPU number 144  numa_node is -1 (load 0)
        Cache domain 3:  numa_node is -1 cpu mask is 01000000,00000000,00000000,00000000,00000000  (load 0) 
                CPU number 152  numa_node is -1 (load 0)
        Cache domain 9:  numa_node is -1 cpu mask is 01000000,00000000,00000000,00000000  (load 0) 
                CPU number 120  numa_node is -1 (load 0)
        Cache domain 17:  numa_node is -1 cpu mask is 00000001,00000000,00000000,00000000,00000000  (load 0) 
                CPU number 128  numa_node is -1 (load 0)
        Cache domain 18:  numa_node is -1 cpu mask is 00000100,00000000,00000000,00000000,00000000  (load 0) 
                CPU number 136  numa_node is -1 (load 0)
Package 1:  numa_node -1 cpu mask is 00000101,01010100,00000000 (load 0)
        Cache domain 4:  numa_node is -1 cpu mask is 00010000,00000000  (load 0) 
                CPU number 48  numa_node is -1 (load 0)
        Cache domain 6:  numa_node is -1 cpu mask is 01000000,00000000  (load 0) 
                CPU number 56  numa_node is -1 (load 0)
        Cache domain 8:  numa_node is -1 cpu mask is 00000001,00000000,00000000  (load 0) 
                CPU number 64  numa_node is -1 (load 0)
        Cache domain 11:  numa_node is -1 cpu mask is 00000100,00000000,00000000  (load 0) 
                CPU number 72  numa_node is -1 (load 0)
        Cache domain 15:  numa_node is -1 cpu mask is 00000100,00000000  (load 0) 
                CPU number 40  numa_node is -1 (load 0)
Package 0:  numa_node -1 cpu mask is 00000001,01010101 (load 0)
        Cache domain 10:  numa_node is -1 cpu mask is 00010000  (load 0) 
                CPU number 16  numa_node is -1 (load 0)
        Cache domain 12:  numa_node is -1 cpu mask is 01000000  (load 0) 
                CPU number 24  numa_node is -1 (load 0)
        Cache domain 14:  numa_node is -1 cpu mask is 00000001,00000000  (load 0) 
                CPU number 32  numa_node is -1 (load 0)
        Cache domain 16:  numa_node is -1 cpu mask is 00000100  (load 0) 
                CPU number 8  numa_node is -1 (load 0)
        Cache domain 19:  numa_node is -1 cpu mask is 00000001  (load 0) 
                CPU number 0  numa_node is -1 (load 0)
Adding IRQ 477 to database
Adding IRQ 456 to database
Adding IRQ 454 to database
Adding IRQ 457 to database
Adding IRQ 455 to database
Adding IRQ 397 to database
Adding IRQ 478 to database
Adding IRQ 493 to database
Adding IRQ 388 to database
Adding IRQ 390 to database
Adding IRQ 389 to database
Adding IRQ 387 to database
Adding IRQ 391 to database
Adding IRQ 494 to database
Adding IRQ 438 to database
Adding IRQ 428 to database
Adding IRQ 418 to database
Adding IRQ 446 to database
Adding IRQ 408 to database
Adding IRQ 436 to database
Adding IRQ 426 to database
Adding IRQ 398 to database
Adding IRQ 416 to database
Adding IRQ 444 to database
Adding IRQ 406 to database
Adding IRQ 434 to database
Adding IRQ 424 to database
Adding IRQ 452 to database
Adding IRQ 414 to database
Adding IRQ 442 to database
Adding IRQ 404 to database
Adding IRQ 432 to database
Adding IRQ 422 to database
Adding IRQ 450 to database
Adding IRQ 412 to database
Adding IRQ 440 to database
Adding IRQ 402 to database
Adding IRQ 430 to database
Adding IRQ 420 to database
Adding IRQ 449 to database
Adding IRQ 410 to database
Adding IRQ 439 to database
Adding IRQ 400 to database
Adding IRQ 429 to database
Adding IRQ 419 to database
Adding IRQ 447 to database
Adding IRQ 409 to database
Adding IRQ 437 to database
Adding IRQ 427 to database
Adding IRQ 399 to database
Adding IRQ 417 to database
Adding IRQ 445 to database
Adding IRQ 407 to database
Adding IRQ 435 to database
Adding IRQ 425 to database
Adding IRQ 453 to database
Adding IRQ 415 to database
Adding IRQ 443 to database
Adding IRQ 405 to database
Adding IRQ 433 to database
Adding IRQ 423 to database
Adding IRQ 451 to database
Adding IRQ 413 to database
Adding IRQ 441 to database
Adding IRQ 403 to database
Adding IRQ 431 to database
Adding IRQ 421 to database
Adding IRQ 411 to database
Adding IRQ 401 to database
Adding IRQ 448 to database
Adding IRQ 502 to database
Adding IRQ 509 to database
Adding IRQ 499 to database
Adding IRQ 517 to database
Adding IRQ 507 to database
Adding IRQ 515 to database
Adding IRQ 505 to database
Adding IRQ 513 to database
Adding IRQ 485 to database
Adding IRQ 503 to database
Adding IRQ 501 to database
Adding IRQ 508 to database
Adding IRQ 516 to database
Adding IRQ 506 to database
Adding IRQ 514 to database
Adding IRQ 504 to database
Adding IRQ 396 to database
Adding IRQ 394 to database
Adding IRQ 392 to database
Adding IRQ 395 to database
Adding IRQ 393 to database
Adding IRQ 492 to database
Adding IRQ 490 to database
Adding IRQ 489 to database
Adding IRQ 497 to database
Adding IRQ 495 to database
Adding IRQ 491 to database
Adding IRQ 498 to database
Adding IRQ 496 to database
Adding IRQ 386 to database
Adding IRQ 384 to database
Adding IRQ 382 to database
Adding IRQ 385 to database
Adding IRQ 383 to database
Adding IRQ 16 to database
Adding IRQ 17 to database
Adding IRQ 18 to database
Adding IRQ 19 to database
Adding IRQ 20 to database
Adding IRQ 21 to database
Adding IRQ 22 to database
Adding IRQ 23 to database
Adding IRQ 24 to database
Adding IRQ 25 to database
Adding IRQ 26 to database
Adding IRQ 27 to database
Adding IRQ 28 to database
Adding IRQ 29 to database
Adding IRQ 30 to database
Adding IRQ 62 to database
Adding IRQ 63 to database
Adding IRQ 227 to database
Adding IRQ 228 to database
Adding IRQ 231 to database
Adding IRQ 232 to database
Adding IRQ 247 to database
Adding IRQ 248 to database
Adding IRQ 483 to database
Adding IRQ 484 to database
Adding IRQ 487 to database
Adding IRQ 488 to database
Adding IRQ 500 to database
Adding IRQ 510 to database
Adding IRQ 511 to database
Adding IRQ 512 to database
NUMA NODE NUMBER: -1
LOCAL CPU MASK: ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff,ffffffff

Daemon couldn't be bound to the file-based socket.



-----------------------------------------------------------------------------
Package 16:  numa_node -1 cpu mask is 00010101,01010000,00000000,00000000 (load 0)
        Cache domain 0:  numa_node is -1 cpu mask is 01000000,00000000,00000000  (load 0) 
                CPU number 88  numa_node is -1 (load 0)
                  Interrupt 397 node_num is -1 (ethernet/0:0) 
                  Interrupt 403 node_num is -1 (ethernet/0:0) 
                  Interrupt 419 node_num is -1 (ethernet/0:0) 
                  Interrupt 406 node_num is -1 (ethernet/0:0) 
                  Interrupt 508 node_num is -1 (storage/0:2) 
          Interrupt 494 node_num is -1 (legacy/0:0) 
        Cache domain 2:  numa_node is -1 cpu mask is 00000001,00000000,00000000,00000000  (load 0) 
                CPU number 96  numa_node is -1 (load 0)
                  Interrupt 456 node_num is -1 (ethernet/0:0) 
                  Interrupt 423 node_num is -1 (ethernet/0:0) 
                  Interrupt 410 node_num is -1 (ethernet/0:0) 
                  Interrupt 426 node_num is -1 (ethernet/0:0) 
                  Interrupt 513 node_num is -1 (storage/0:2) 
        Cache domain 5:  numa_node is -1 cpu mask is 00000100,00000000,00000000,00000000  (load 0) 
                CPU number 104  numa_node is -1 (load 0)
                  Interrupt 384 node_num is -1 (ethernet/0:0) 
                  Interrupt 415 node_num is -1 (ethernet/0:0) 
                  Interrupt 402 node_num is -1 (ethernet/0:0) 
                  Interrupt 418 node_num is -1 (ethernet/0:0) 
                  Interrupt 517 node_num is -1 (storage/0:3) 
        Cache domain 7:  numa_node is -1 cpu mask is 00010000,00000000,00000000,00000000  (load 0) 
                CPU number 112  numa_node is -1 (load 0)
                  Interrupt 392 node_num is -1 (ethernet/0:0) 
                  Interrupt 407 node_num is -1 (ethernet/0:0) 
                  Interrupt 422 node_num is -1 (ethernet/0:0) 
                  Interrupt 387 node_num is -1 (ethernet/0:0) 
          Interrupt 496 node_num is -1 (legacy/0:0) 
        Cache domain 13:  numa_node is -1 cpu mask is 00010000,00000000,00000000  (load 0) 
                CPU number 80  numa_node is -1 (load 0)
                  Interrupt 401 node_num is -1 (ethernet/0:0) 
                  Interrupt 427 node_num is -1 (ethernet/0:0) 
                  Interrupt 414 node_num is -1 (ethernet/0:0) 
                  Interrupt 504 node_num is -1 (storage/0:2) 
          Interrupt 497 node_num is -1 (legacy/0:0) 
  Interrupt 24 node_num is -1 (other/0:0) 
  Interrupt 20 node_num is -1 (other/0:0) 
  Interrupt 16 node_num is -1 (other/0:468) 
  Interrupt 500 node_num is -1 (other/0:0) 
  Interrupt 483 node_num is -1 (other/0:0) 
  Interrupt 231 node_num is -1 (other/0:0) 
  Interrupt 62 node_num is -1 (other/0:0) 
  Interrupt 27 node_num is -1 (other/0:0) 
Package 17:  numa_node -1 cpu mask is 01010101,01000000,00000000,00000000,00000000 (load 0)
        Cache domain 1:  numa_node is -1 cpu mask is 00010000,00000000,00000000,00000000,00000000  (load 0) 
                CPU number 144  numa_node is -1 (load 0)
                  Interrupt 455 node_num is -1 (ethernet/0:0) 
                  Interrupt 441 node_num is -1 (ethernet/0:0) 
                  Interrupt 429 node_num is -1 (ethernet/0:0) 
                  Interrupt 444 node_num is -1 (ethernet/0:0) 
                  Interrupt 501 node_num is -1 (storage/0:2) 
          Interrupt 493 node_num is -1 (legacy/0:0) 
        Cache domain 3:  numa_node is -1 cpu mask is 01000000,00000000,00000000,00000000,00000000  (load 0) 
                CPU number 152  numa_node is -1 (load 0)
                  Interrupt 383 node_num is -1 (ethernet/0:6) 
                  Interrupt 433 node_num is -1 (ethernet/0:0) 
                  Interrupt 449 node_num is -1 (ethernet/0:0) 
                  Interrupt 436 node_num is -1 (ethernet/0:0) 
                  Interrupt 505 node_num is -1 (storage/0:0) 
        Cache domain 9:  numa_node is -1 cpu mask is 01000000,00000000,00000000,00000000  (load 0) 
                CPU number 120  numa_node is -1 (load 0)
                  Interrupt 386 node_num is -1 (ethernet/0:0) 
                  Interrupt 453 node_num is -1 (ethernet/0:0) 
                  Interrupt 440 node_num is -1 (ethernet/0:0) 
                  Interrupt 428 node_num is -1 (ethernet/0:0) 
                  Interrupt 499 node_num is -1 (storage/0:2) 
        Cache domain 17:  numa_node is -1 cpu mask is 00000001,00000000,00000000,00000000,00000000  (load 0) 
                CPU number 128  numa_node is -1 (load 0)
                  Interrupt 394 node_num is -1 (ethernet/0:0) 
                  Interrupt 445 node_num is -1 (ethernet/0:0) 
                  Interrupt 432 node_num is -1 (ethernet/0:0) 
                  Interrupt 389 node_num is -1 (ethernet/0:0) 
          Interrupt 498 node_num is -1 (legacy/0:0) 
        Cache domain 18:  numa_node is -1 cpu mask is 00000100,00000000,00000000,00000000,00000000  (load 0) 
                CPU number 136  numa_node is -1 (load 0)
                  Interrupt 411 node_num is -1 (ethernet/0:0) 
                  Interrupt 437 node_num is -1 (ethernet/0:0) 
                  Interrupt 452 node_num is -1 (ethernet/0:0) 
                  Interrupt 514 node_num is -1 (storage/0:2) 
          Interrupt 489 node_num is -1 (legacy/0:0) 
  Interrupt 23 node_num is -1 (other/0:0) 
  Interrupt 19 node_num is -1 (other/0:0) 
  Interrupt 512 node_num is -1 (other/0:0) 
  Interrupt 488 node_num is -1 (other/0:0) 
  Interrupt 248 node_num is -1 (other/0:0) 
  Interrupt 228 node_num is -1 (other/0:0) 
  Interrupt 30 node_num is -1 (other/0:0) 
  Interrupt 26 node_num is -1 (other/0:0) 
Package 1:  numa_node -1 cpu mask is 00000101,01010100,00000000 (load 0)
        Cache domain 4:  numa_node is -1 cpu mask is 00010000,00000000  (load 0) 
                CPU number 48  numa_node is -1 (load 0)
                  Interrupt 457 node_num is -1 (ethernet/0:0) 
                  Interrupt 413 node_num is -1 (ethernet/0:0) 
                  Interrupt 400 node_num is -1 (ethernet/0:0) 
                  Interrupt 416 node_num is -1 (ethernet/0:0) 
                  Interrupt 503 node_num is -1 (storage/0:2) 
          Interrupt 478 node_num is -1 (legacy/0:0) 
        Cache domain 6:  numa_node is -1 cpu mask is 01000000,00000000  (load 0) 
                CPU number 56  numa_node is -1 (load 0)
                  Interrupt 385 node_num is -1 (ethernet/0:0) 
                  Interrupt 405 node_num is -1 (ethernet/0:0) 
                  Interrupt 420 node_num is -1 (ethernet/0:0) 
                  Interrupt 408 node_num is -1 (ethernet/0:0) 
                  Interrupt 515 node_num is -1 (storage/0:3) 
        Cache domain 8:  numa_node is -1 cpu mask is 00000001,00000000,00000000  (load 0) 
                CPU number 64  numa_node is -1 (load 0)
                  Interrupt 393 node_num is -1 (ethernet/0:0) 
                  Interrupt 425 node_num is -1 (ethernet/0:0) 
                  Interrupt 412 node_num is -1 (ethernet/0:0) 
                  Interrupt 438 node_num is -1 (ethernet/0:0) 
                  Interrupt 509 node_num is -1 (storage/0:2) 
        Cache domain 11:  numa_node is -1 cpu mask is 00000100,00000000,00000000  (load 0) 
                CPU number 72  numa_node is -1 (load 0)
                  Interrupt 396 node_num is -1 (ethernet/0:0) 
                  Interrupt 417 node_num is -1 (ethernet/0:0) 
                  Interrupt 404 node_num is -1 (ethernet/0:0) 
                  Interrupt 390 node_num is -1 (ethernet/0:0) 
          Interrupt 491 node_num is -1 (legacy/0:0) 
        Cache domain 15:  numa_node is -1 cpu mask is 00000100,00000000  (load 0) 
                CPU number 40  numa_node is -1 (load 0)
                  Interrupt 421 node_num is -1 (ethernet/0:0) 
                  Interrupt 409 node_num is -1 (ethernet/0:0) 
                  Interrupt 424 node_num is -1 (ethernet/0:0) 
                  Interrupt 506 node_num is -1 (storage/0:3) 
          Interrupt 490 node_num is -1 (legacy/0:0) 
  Interrupt 22 node_num is -1 (other/0:0) 
  Interrupt 18 node_num is -1 (other/0:0) 
  Interrupt 511 node_num is -1 (other/0:0) 
  Interrupt 487 node_num is -1 (other/0:0) 
  Interrupt 247 node_num is -1 (other/0:0) 
  Interrupt 227 node_num is -1 (other/0:0) 
  Interrupt 29 node_num is -1 (other/0:0) 
  Interrupt 25 node_num is -1 (other/0:0) 
Package 0:  numa_node -1 cpu mask is 00000001,01010101 (load 0)
        Cache domain 10:  numa_node is -1 cpu mask is 00010000  (load 0) 
                CPU number 16  numa_node is -1 (load 0)
                  Interrupt 454 node_num is -1 (ethernet/0:0) 
                  Interrupt 451 node_num is -1 (ethernet/0:0) 
                  Interrupt 439 node_num is -1 (ethernet/0:0) 
                  Interrupt 398 node_num is -1 (ethernet/0:164) 
                  Interrupt 485 node_num is -1 (storage/0:3) 
          Interrupt 477 node_num is -1 (legacy/0:0) 
        Cache domain 12:  numa_node is -1 cpu mask is 01000000  (load 0) 
                CPU number 24  numa_node is -1 (load 0)
                  Interrupt 382 node_num is -1 (ethernet/0:8) 
                  Interrupt 443 node_num is -1 (ethernet/0:0) 
                  Interrupt 430 node_num is -1 (ethernet/0:0) 
                  Interrupt 446 node_num is -1 (ethernet/0:0) 
                  Interrupt 507 node_num is -1 (storage/0:2) 
        Cache domain 14:  numa_node is -1 cpu mask is 00000001,00000000  (load 0) 
                CPU number 32  numa_node is -1 (load 0)
                  Interrupt 395 node_num is -1 (ethernet/0:0) 
                  Interrupt 435 node_num is -1 (ethernet/0:0) 
                  Interrupt 450 node_num is -1 (ethernet/0:0) 
                  Interrupt 391 node_num is -1 (ethernet/0:0) 
                  Interrupt 502 node_num is -1 (storage/0:2) 
        Cache domain 16:  numa_node is -1 cpu mask is 00000100  (load 0) 
                CPU number 8  numa_node is -1 (load 0)
                  Interrupt 448 node_num is -1 (ethernet/0:0) 
                  Interrupt 399 node_num is -1 (ethernet/0:0) 
                  Interrupt 442 node_num is -1 (ethernet/0:0) 
                  Interrupt 388 node_num is -1 (ethernet/0:0) 
          Interrupt 495 node_num is -1 (legacy/0:0) 
        Cache domain 19:  numa_node is -1 cpu mask is 00000001  (load 0) 
                CPU number 0  numa_node is -1 (load 0)
                  Interrupt 431 node_num is -1 (ethernet/0:0) 
                  Interrupt 447 node_num is -1 (ethernet/0:0) 
                  Interrupt 434 node_num is -1 (ethernet/0:0) 
                  Interrupt 516 node_num is -1 (storage/0:3) 
          Interrupt 492 node_num is -1 (legacy/0:0) 
  Interrupt 21 node_num is -1 (other/0:0) 
  Interrupt 17 node_num is -1 (other/0:1) 
  Interrupt 510 node_num is -1 (other/0:0) 
  Interrupt 484 node_num is -1 (other/0:0) 
  Interrupt 232 node_num is -1 (other/0:0) 
  Interrupt 63 node_num is -1 (other/0:0) 
  Interrupt 28 node_num is -1 (other/0:0) 

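For reference (my own sketch, not part of irqbalance), the `cpu mask` fields in the dump above are comma-separated groups of 32-bit hex words, most significant group first, with bit N meaning CPU N. A minimal decoder, assuming that layout, which reproduces the CPU numbers listed under each package:

```python
def cpumask_to_cpus(mask: str) -> list[int]:
    """Decode a kernel-style cpumask string like '00000001,01010101'
    into the list of set CPU numbers."""
    # Dropping the commas leaves one big hex number; the leftmost
    # group holds the highest-numbered CPUs.
    value = int(mask.replace(",", ""), 16)
    cpus = []
    bit = 0
    while value:
        if value & 1:
            cpus.append(bit)
        value >>= 1
        bit += 1
    return cpus

# Package 0's mask from the dump above:
print(cpumask_to_cpus("00000001,01010101"))  # -> [0, 8, 16, 24, 32]
```

Those are exactly the CPU numbers (0, 8, 16, 24, 32) shown under `Package 0` above, so the per-package masks in the dump are at least internally consistent.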
@nhorman (Member) commented Aug 28, 2020
Sorry, I should have waited, but the POWER run looks correct to me, as does my local run on x86_64.
