RFE: default VM topology to use cores instead of sockets #155
Comments
This is a no-brainer really. If we look at the direction physical hardware has gone, we see massive core counts per socket, while the number of sockets is generally very low. When multiple sockets are present, they are essentially always associated with distinct NUMA nodes to partition memory. In other words, modern OSes expect to see high core counts and low socket counts. Our current default of massive socket counts and 1 core is a poor match for this, and adding in licensing restrictions makes it worse.

If we consider that VMs by default are permitted to float freely across the host CPUs, then by implication the guest RAM region(s) can be allocated arbitrarily across the host NUMA nodes (if any) and there is no guaranteed affinity with vCPUs. As such, with floating VMs, there's no benefit to trying to create virtual NUMA nodes, and thus also no benefit to using multiple sockets.

Considering threads on the other hand: if we report threads > 1, this triggers special logic in OS schedulers with regard to placing tasks on thread siblings. That logic only has a positive effect if the guest thread siblings are pinned to host thread siblings; if there's no pinning, then reporting threads > 1 is likely actively harmful to guest performance.

So I'd say virt-manager should switch to cores == vCPUs right now. There's possibly also an argument for making QEMU switch to cores == vCPUs by default with new machine types, but those kinds of discussions in QEMU can be a can of worms. So I wouldn't wait for QEMU to change; just set it in virt-manager, which will benefit everyone immediately.
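For illustration only, here is a minimal Python sketch (not virt-manager's actual code; the function names are made up) contrasting the historical default with the proposed one for a given vCPU count, rendered as libvirt-style `<topology>` elements:

```python
# Illustrative sketch only -- not virt-manager's implementation.
# Contrasts the historical default (one core per socket) with the
# proposed default (one socket, all vCPUs as cores, no SMT reported).

def topology_xml(sockets: int, cores: int, threads: int) -> str:
    return f"<topology sockets='{sockets}' cores='{cores}' threads='{threads}'/>"

def old_default(vcpus: int) -> str:
    # Historical behaviour: sockets == vCPUs.
    return topology_xml(sockets=vcpus, cores=1, threads=1)

def proposed_default(vcpus: int) -> str:
    # Proposed behaviour: cores == vCPUs.
    return topology_xml(sockets=1, cores=vcpus, threads=1)

for vcpus in (4, 16):
    print("old:", old_default(vcpus))
    print("new:", proposed_default(vcpus))
```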
Surprisingly this had an easy pass, and so QEMU 6.2 will end up defaulting to cores == vCPUs for new machine types. virt-manager should still set the same thing explicitly.
In real world silicon though it is rare to have high socket/die counts, but common to have huge core counts. Some OS will even refuse to use sockets over a certain count. Thus we prefer to expose cores to the guest rather than sockets as the default for missing fields.

This matches a recent change made in QEMU for new machine types:

  commit 4a0af2930a4e4f64ce551152fdb4b9e7be106408
  Author: Yanan Wang <wangyanan55@huawei.com>
  Date:   Wed Sep 29 10:58:09 2021 +0800

      machine: Prefer cores over sockets in smp parsing since 6.2

Closes: virt-manager#155
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

Similarly to our default when creating VMs, if changing vCPU counts on an existing VM we want to reflect this as cores in the topology UI.

Closes: virt-manager#155
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
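As a hedged illustration of the "prefer cores for missing fields" rule described in the commit message above (mirroring the spirit of the QEMU 6.2 smp-parsing change, not its actual code), a small Python sketch:

```python
# Illustrative analogue of "prefer cores over sockets for missing fields".
# This mirrors the idea behind the QEMU 6.2 smp-parsing change, not its code.

def fill_topology(vcpus, sockets=None, cores=None, threads=None):
    threads = threads or 1
    if sockets is None and cores is None:
        # Nothing specified: put everything into cores on a single socket.
        sockets, cores = 1, max(vcpus // threads, 1)
    elif cores is None:
        cores = max(vcpus // (sockets * threads), 1)
    elif sockets is None:
        sockets = max(vcpus // (cores * threads), 1)
    if sockets * cores * threads != vcpus:
        raise ValueError("topology does not match vCPU count")
    return {"sockets": sockets, "cores": cores, "threads": threads}

print(fill_topology(8))             # {'sockets': 1, 'cores': 8, 'threads': 1}
print(fill_topology(8, sockets=2))  # {'sockets': 2, 'cores': 4, 'threads': 1}
```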
I just had to re-set up my Windows VM and ran into this issue again. I'm glad to see that QEMU made this change, and yes, I agree that virt-manager should default to cores instead of sockets for vCPUs as well.
Re-opened because not all commits in #321 are merged. While the code prefers cores when a topology is present, it doesn't actually create a topology by default, so it has minimal effect.
I laid out my thoughts in the PR a while back: #321 (comment). With qemu 6.2+ machine types we get this for free: cores are used over sockets. I think that's enough here.
Originally filed here: https://bugzilla.redhat.com/show_bug.cgi?id=1095323
virt-manager/virt-install translates a request for e.g. 4 vCPUs into a topology of 4 sockets * 1 core * 1 thread. However, certain guest OSes, such as some Windows versions, have limits on the number of usable CPU sockets. We need to handle this in a more intelligent way.
There's a decent amount of discussion in the original bug. The latest suggestion is to just use a topology of 1 socket * X cores * 1 thread, but I will start a wider virt discussion before we move on this.
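One possible reading of "handle this in a more intelligent way" is sketched below; the per-OS socket caps are assumptions for illustration (not data virt-manager carries), and the thread above ultimately settled on the simpler 1 socket * X cores default instead:

```python
# Hedged sketch: cap the socket count at what the guest OS is assumed to
# support and fold the remaining vCPUs into cores. The limit values are
# illustrative assumptions, not values taken from virt-manager.

ASSUMED_SOCKET_LIMITS = {
    "windows-client": 2,  # assumption: client Windows editions cap usable sockets
    "generic": None,      # no known limit
}

def capped_topology(vcpus: int, os_id: str = "generic") -> dict:
    limit = ASSUMED_SOCKET_LIMITS.get(os_id)
    sockets = vcpus  # start from the naive historical default: one core per socket
    if limit is not None and sockets > limit:
        # Largest socket count <= limit that divides vcpus evenly;
        # the remaining factor becomes cores.
        sockets = next(s for s in range(limit, 0, -1) if vcpus % s == 0)
    return {"sockets": sockets, "cores": vcpus // sockets, "threads": 1}

print(capped_topology(8, "windows-client"))  # {'sockets': 2, 'cores': 4, 'threads': 1}
print(capped_topology(8))                    # {'sockets': 8, 'cores': 1, 'threads': 1}
```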