
RFE: default VM topology to use cores instead of sockets #155

Closed
crobinso opened this issue Sep 16, 2020 · 5 comments
Labels
blocked Needs work elsewhere before we can proceed (libvirt, qemu, design, discussion, etc)

Comments

@crobinso
Member

Originally filed here: https://bugzilla.redhat.com/show_bug.cgi?id=1095323

virt-manager/virt-install translates a request for e.g. 4 vCPUs into a topology of 4 sockets * 1 core * 1 thread. However, certain guest OSes, such as some Windows versions, limit the number of usable CPU sockets. We need to handle this more intelligently.

There's a decent amount of discussion in the original bug. Latest suggestion is just to use a topology of 1 socket * X cores * 1 thread. But I will start a wider virt discussion before we move on this.
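As a hypothetical sketch (illustrative only, not virt-manager's real code), the two defaulting strategies being compared look like this for a request of N vCPUs:

```python
# Illustrative helpers contrasting the current and suggested defaults;
# the function names are made up for this example.

def topology_sockets_preferred(vcpus):
    # Current default: one single-core socket per vCPU.
    return {"sockets": vcpus, "cores": 1, "threads": 1}

def topology_cores_preferred(vcpus):
    # Suggested default: one socket carrying all vCPUs as cores.
    return {"sockets": 1, "cores": vcpus, "threads": 1}

print(topology_sockets_preferred(4))  # {'sockets': 4, 'cores': 1, 'threads': 1}
print(topology_cores_preferred(4))    # {'sockets': 1, 'cores': 4, 'threads': 1}
```

The product sockets * cores * threads equals the requested vCPU count in both cases; only how it is factored changes.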

@crobinso crobinso added the blocked Needs work elsewhere before we can proceed (libvirt, qemu, design, discussion, etc) label Sep 16, 2020
@berrange
Contributor

berrange commented Apr 9, 2021

Latest suggestion is just to use a topology of 1 socket * X cores * 1 thread.

This is a no-brainer really.

If we look at the direction physical hardware has gone, we see massive core counts per socket, while the number of sockets is generally very low. When multiple sockets are present, they are essentially always associated with distinct NUMA nodes to partition memory. IOW, modern OS expect to see massive core counts and low socket counts. Our current default of massive socket counts and 1 core is a poor match for this. Add in licensing restrictions and it gets worse.

If we consider that VMs by default are permitted to float freely across the host CPUs, then by implication the guest RAM region(s) can be allocated arbitrarily across the host NUMA nodes (if any) and there is no guaranteed affinity with vCPUs. As such, with floating VMs, there's no benefit to trying to create virtual NUMA nodes and thus also no benefit to using multiple sockets.

Considering threads on the other hand, if we report threads > 1, this triggers special logic in OS schedulers wrt placing tasks on thread siblings. This logic is only going to have a positive effect if the guest thread siblings are pinned to host thread siblings. If there's no pinning, then reporting threads > 1 is likely actively harmful to guest performance.

IOW:

  • With floating VM vCPUs, we want sockets == 1 and threads == 1. This means that we should always use cores == vCPUs.
  • With pinned VM vCPUs, we want the reported topology to match the topology of the host pCPUs we've pinned to.
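The two rules above could be sketched as a single defaulting function (hypothetical code; the `pinned_host_topology` parameter is an assumption made up for this example):

```python
def default_topology(vcpus, pinned_host_topology=None):
    """Pick a guest CPU topology (illustrative sketch only).

    pinned_host_topology, if given, is a hypothetical description of
    the host pCPU topology the vCPUs are pinned to, e.g.
    {"sockets": 2, "cores": 4, "threads": 2}.
    """
    if pinned_host_topology is None:
        # Floating vCPUs: flat layout, all vCPUs exposed as cores.
        return {"sockets": 1, "cores": vcpus, "threads": 1}
    # Pinned vCPUs: mirror the host topology we're pinned to.
    return dict(pinned_host_topology)

print(default_topology(8))  # {'sockets': 1, 'cores': 8, 'threads': 1}
```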

So I'd say virt-manager should switch to cores == vCPUs right now.

There's possibly also an argument for making QEMU switch to use cores == vCPUs by default with new machine types, but these kinds of discussions in QEMU can be a can of worms. So I wouldn't wait for QEMU to change; just set it in virt-manager, which will benefit everyone immediately.

@berrange
Contributor

There's possibly also an argument for making QEMU switch to use cores == vCPUs by default with new machine types, but these kinds of discussions in QEMU can be a can of worms. So I wouldn't wait for QEMU to change; just set it in virt-manager, which will benefit everyone immediately.

Surprisingly this had an easy pass, and so QEMU 6.2 will default to cores == vCPUs for new machine types.

commit 4a0af2930a4e4f64ce551152fdb4b9e7be106408
Author: Yanan Wang <wangyanan55@huawei.com>
Date:   Wed Sep 29 10:58:09 2021 +0800

    machine: Prefer cores over sockets in smp parsing since 6.2
    
    In the real SMP hardware topology world, it's much more likely that
    we have high cores-per-socket counts and few sockets totally. While
    the current preference of sockets over cores in smp parsing results
    in a virtual cpu topology with low cores-per-sockets counts and a
    large number of sockets, which is just contrary to the real world.
    
    Given that it is better to make the virtual cpu topology be more
    reflective of the real world and also for the sake of compatibility,
    we start to prefer cores over sockets over threads in smp parsing
    since machine type 6.2 for different arches.
    
    In this patch, a boolean "smp_prefer_sockets" is added, and we only
    enable the old preference on older machines and enable the new one
    since type 6.2 for all arches by using the machine compat mechanism.
    
    Suggested-by: Daniel P. Berrange <berrange@redhat.com>
    Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
    Acked-by: David Gibson <david@gibson.dropbear.id.au>
    Acked-by: Cornelia Huck <cohuck@redhat.com>
    Reviewed-by: Andrew Jones <drjones@redhat.com>
    Reviewed-by: Pankaj Gupta <pankaj.gupta@ionos.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
    Message-Id: <20210929025816.21076-10-wangyanan55@huawei.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

virt-manager should still do the same as this explicitly.
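For reference, the fill order the new QEMU default implies can be modeled roughly like this (a Python sketch of the preference only, assuming a fully divisible count; the real logic is QEMU's C -smp parsing):

```python
def fill_smp(cpus, sockets=None, cores=None, threads=None):
    # Rough model of the machine-type >= 6.2 preference: omitted
    # fields are derived preferring cores over sockets over threads.
    threads = threads if threads is not None else 1
    if cores is None and sockets is None:
        sockets = 1                         # cores absorb the count
        cores = cpus // (sockets * threads)
    elif cores is None:
        cores = cpus // (sockets * threads)
    elif sockets is None:
        sockets = cpus // (cores * threads)
    # The topology must exactly cover the requested vCPUs.
    assert sockets * cores * threads == cpus
    return {"sockets": sockets, "cores": cores, "threads": threads}

print(fill_smp(8))             # {'sockets': 1, 'cores': 8, 'threads': 1}
print(fill_smp(8, sockets=2))  # {'sockets': 2, 'cores': 4, 'threads': 1}
```

Under the pre-6.2 preference, `fill_smp(8)` would instead have yielded 8 sockets * 1 core * 1 thread.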

berrange added a commit to berrange/virt-manager that referenced this issue Oct 29, 2021
In real world silicon though it is rare to have high socket/die counts,
but common to have huge core counts.

Some OS will even refuse to use sockets over a certain count.

Thus we prefer to expose cores to the guest rather than sockets as the
default for missing fields.

This matches a recent change made in QEMU for new machine types

  commit 4a0af2930a4e4f64ce551152fdb4b9e7be106408
  Author: Yanan Wang <wangyanan55@huawei.com>
  Date:   Wed Sep 29 10:58:09 2021 +0800

    machine: Prefer cores over sockets in smp parsing since 6.2

Closes: virt-manager#155
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
berrange added a commit to berrange/virt-manager that referenced this issue Oct 29, 2021
Similarly to our default when creating VMs, if changing vCPU counts on
an existing VM we want to reflect this as cores in the topology UI.

Closes: virt-manager#155
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
@Be-ing

Be-ing commented Nov 1, 2021

I just had to re-setup my Windows VM and ran into this issue again. I am glad to see that QEMU made this change, and yes I agree that virt-manager should use cores instead of sockets by default for vCPUs as well.

github-actions bot pushed a commit that referenced this issue Jan 21, 2022
@berrange berrange reopened this Jan 21, 2022
@berrange
Contributor

Re-opened because not all commits in #321 are merged. While the code prefers cores when a topology is present, it doesn't actually create a topology by default, so it has minimal effect.

berrange added a commit to berrange/virt-manager that referenced this issue Jan 21, 2022
@crobinso
Member Author

I laid out my thoughts in the PR a while back: #321 (comment)

With qemu 6.2+ machine types we get this for free: cores are used over sockets. I think that's enough here.
