
kvm: Add support for cgroupv2 #8252

Merged: 3 commits merged into apache:4.18 on Dec 13, 2023

Conversation

@BryanMLima (Contributor) commented Nov 20, 2023

Description

1. Problem description

In Apache CloudStack (ACS), when a VM is deployed on a host with the KVM hypervisor, an XML file is created on the assigned host, which has a property shares that defines the weight of the VM for accessing the host CPU. The value of this property has no unit; it is a relative measure used to calculate how much CPU a given VM will get on the host. However, this value has a limit, which depends on the cgroup version used by the host's kernel. The problem lies in the valid range of shares, which differs between the two versions: [2, 262144] for cgroups version 1 and [1, 10000] for cgroups version 2. Currently, ACS calculates the value of shares using Equation 1, presented below, where CPU is the number of cores and speed is the CPU frequency, both specified in the VM's compute offering. Therefore, if a compute offering has, for example, 6 cores at 2 GHz, the shares value will be 12000 and an exception will be thrown by libvirt if the host uses cgroup v2. The second version is becoming the default in current Linux distributions; thus, it is necessary to address this limitation.

  • Equation 1
    shares = CPU * speed

Fixes: #6744

2. Proposed changes

To address the problem described, we propose to apply a scale conversion that considers the max shares of the host. Using the same formula currently utilized by ACS, it is possible to calculate the maximum shares a VM could have on a given host; in other words, the number of cores and the nominal speed of the host's CPU define the upper limit of shares allowed to a VM. This value is then scaled to the allowed interval of [1, 10000] of cgroup v2 using a linear scale conversion.

The VM shares would be calculated as Equation 2, presented below, where VM requested shares is the shares value calculated using Equation 1, cgroup upper limit is fixed at 10000 (the cgroup v2 upper limit), and host max shares is the maximum shares value of the host, also calculated using Equation 1. Using Equation 2, the only case where a VM surpasses the cgroup v2 limit is when the user requests more resources than the host has, which is not possible with the current implementation of ACS. A short sketch of this conversion follows the equation below.

  • Equation 2
    shares = (VM requested shares * cgroup upper limit)/host max shares
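For illustration, a minimal sketch of this conversion (hypothetical helper names, not the actual ACS code) is shown below, assuming the requested and host maximum shares are both computed with Equation 1:

```java
public class CgroupV2ShareScaler {
    // cgroup v2 upper limit for the CPU weight, as described above
    private static final int CGROUP_V2_UPPER_LIMIT = 10000;

    // Equation 2: scale the shares requested via Equation 1 (cores * speed in MHz)
    // down to the [1, 10000] interval accepted by cgroup v2.
    // Integer division truncates here; the exact rounding used by ACS may differ.
    public static int scaleShares(int requestedShares, int hostMaxShares) {
        return (int) ((long) requestedShares * CGROUP_V2_UPPER_LIMIT / hostMaxShares);
    }

    public static void main(String[] args) {
        // Example from the description: 8 cores * 2000 MHz = 16000 requested shares,
        // host max shares 32 cores * 2000 MHz = 64000 -> scaled value 2500
        System.out.println(scaleShares(8 * 2000, 32 * 2000));
    }
}
```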

To implement the proposal, the following APIs will be updated: deployVirtualMachine, migrateVirtualMachine and scaleVirtualMachine. When a VM is being deployed, a new verification will be added while finding a suitable host: the max shares of each host will be calculated, and the VM's calculated shares will be checked to ensure they do not surpass the host's value. Likewise, VM migration will have a similar new verification. Lastly, VM scaling will have the same verification against the VM's host.

To determine the max shares of a given host, we will use the same equation currently used in ACS for calculating the shares of VMs, presented in Section 1. When Equation 1 is used to determine the maximum shares of a host, CPU is the number of cores of the host, and speed is the nominal CPU speed, i.e., considering the CPU's base frequency.
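As a rough sketch of that suitability check (hypothetical names, not the actual deployment-planner code), a host is considered able to take the VM, with respect to CPU shares, when the VM's requested shares do not exceed the host's maximum shares:

```java
public class HostShareCheck {
    // Equation 1: shares = number of cores * speed in MHz
    static int shares(int cores, int speedMhz) {
        return cores * speedMhz;
    }

    // Hypothetical check applied during deployment, migration, and scaling:
    // the VM's requested shares must not surpass the host's maximum shares
    // (host cores * nominal CPU speed).
    static boolean vmFitsHost(int vmCores, int vmSpeedMhz, int hostCores, int hostNominalSpeedMhz) {
        return shares(vmCores, vmSpeedMhz) <= shares(hostCores, hostNominalSpeedMhz);
    }
}
```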

It is important to note that these changes are only for hosts with the KVM hypervisor using cgroup v2 for now.

Example

To exemplify the proposed changes, consider a host with the following specification: 32 CPU cores with a nominal speed of 2 GHz; and a VM with a compute offering of 8 CPU cores at 2 GHz. With the current ACS implementation, the shares of the VM would be calculated as in Equation 1; thus, the VM shares would be 16000, over the cgroup v2 limit of 10000.

With the proposed changes, the VM shares would be calculated as in Equation 2. In this example, VM requested shares is 16000, cgroup upper limit is fixed at 10000, and host max shares is 64000. Therefore, the VM shares result in 2500, well below the cgroup v2 limit.

Real case scenarios

To demonstrate real case scenarios, consider the following hosts:

  • Host A
    • # of Cores: 32
    • CPU nominal frequency: 2 GHz
    • Max Shares: 64000
  • Host B
    • # of Cores: 16
    • CPU nominal frequency: 2 GHz
    • Max Shares: 32000

Table 1 below presents a set of VMs with their requested resources, alongside the shares values considering the current implementation, and the new shares value, for each host, considering the proposed change using Equation 2.

  • Table 1

| VM | CPU cores | CPU frequency (GHz) | Current shares | New shares (Host A) | New shares (Host B) |
| --- | --- | --- | --- | --- | --- |
| VM 1 | 2 | 2 | 4000 | 625 | 1250 |
| VM 2 | 4 | 2 | 8000 | 1250 | 2500 |
| VM 3 | 6 | 2 | 12000 | 1875 | 3750 |
| VM 4 | 8 | 2 | 16000 | 2500 | 5000 |
| VM 5 | 16 | 2 | 32000 | 5000 | 10000 |
| VM 6 | 32 | 2 | 64000 | 10000 | 20000 |

Table 2 below presents whether the same VMs from Table 1 would be allowed to be allocated to a given host, or whether an exception would be thrown, considering the current and proposed implementations. As we can see, with the current ACS implementation, VMs 3 through 6 would throw an exception when deployed on host A, even though the host has enough resources. VM 6 throws an exception when deployed on host B in both implementations, as the host does not have enough resources to allocate it.

  • Table 2

| VM | Host A (Current) | Host A (Proposed) | Host B (Current) | Host B (Proposed) |
| --- | --- | --- | --- | --- |
| VM 1 | Allowed | Allowed | Allowed | Allowed |
| VM 2 | Allowed | Allowed | Allowed | Allowed |
| VM 3 | Exception | Allowed | Exception | Allowed |
| VM 4 | Exception | Allowed | Exception | Allowed |
| VM 5 | Exception | Allowed | Exception | Allowed |
| VM 6 | Exception | Allowed | Exception | Exception |

It is important to note that Equation 2 rounds the shares value to an integer; thus, there is some precision loss in the conversion. Nevertheless, this loss should not be noticeable to the end user, as shares values would need to be very close to each other for it to matter; e.g., shares values of 3997, 3998 and 3999 would all be converted to 1249 on host B with the new implementation. This precision loss is a small drawback in exchange for enabling cgroup v2 support in ACS.

3. Future works

With the current proposal, only cgroups version 2 is addressed, as it has impactful limitations. Thus, as future work, cgroups version 1 will also be addressed using the same strategy of linear scale conversion.

Types of changes

  • Breaking change (fix or feature that would cause existing functionality to change)
  • New feature (non-breaking change which adds functionality)
  • Bug fix (non-breaking change which fixes an issue)
  • Enhancement (improves an existing feature and functionality)
  • Cleanup (Code refactoring and cleanup, that may add test cases)
  • build/CI

Feature/Enhancement Scale or Bug Severity

Feature/Enhancement Scale

  • Major
  • Minor

Bug Severity

  • BLOCKER
  • Critical
  • Major
  • Minor
  • Trivial

How Has This Been Tested?

Consider the host host-1 with cgroup v2, the host host-2 with cgroup v1, and the following custom constrained compute offering:

  • Frequency: 2 GHz
  • Cores: 2 - 6
  • RAM: 1 GB

Deployment of VMs

I created the VM vm-tmpfs with 5 cores and allocated it to host host-2.

ubuntu@host-2:~$ virsh dumpxml --domain i-2-13-VM | grep shares
    <shares>10000</shares>

I created the VM vm-cgroupv2 with 5 cores and allocated it to host host-1.

ubuntu@host-1:~$ virsh dumpxml --domain i-2-15-VM | grep shares
    <shares>6667</shares>

As expected, ACS considered the host resources when setting the shares value. When the host utilizes cgroup v1, the default behavior is not changed.

VM live scaling

I live scaled the VM vm-tmpfs, changing its number of cores from 5 to 6. The shares value before and after the operation:

ubuntu@host-2:~$ virsh dumpxml --domain i-2-13-VM | grep shares
    <shares>10000</shares>
ubuntu@host-2:~$ virsh dumpxml --domain i-2-13-VM | grep shares
    <shares>12000</shares>

I live scaled the VM vm-cgroupv2, changing its number of cores from 5 to 6. The shares value before and after the operation:

ubuntu@host-1:~$ virsh dumpxml --domain i-2-15-VM | grep shares
    <shares>6667</shares>
ubuntu@host-1:~$ virsh dumpxml --domain i-2-15-VM | grep shares
    <shares>8000</shares>

As expected, ACS considered the host resources when setting the shares value. When the host utilizes cgroup v1, the default behavior is not changed.

VM migration

I migrated the VM vm-tmpfs from host host-2 to host host-1 (from cgroup v1 to cgroup v2). After the migration, the shares value was changed to 8000, as expected.

ubuntu@host-1:~$ virsh dumpxml --domain i-2-13-VM | grep shares
    <shares>8000</shares>

I migrated the VM vm-cgroupv2 from host host-1 to host host-2 (from cgroup v2 to cgroup v1). After the migration, the shares value was changed to 12000, as expected.

ubuntu@host-2:~$ virsh dumpxml --domain i-2-15-VM | grep shares
    <shares>12000</shares>

How did you try to break this feature and the system with this change?

I migrated VMs between hosts with different cgroup versions; the VM migration section above describes this.


codecov bot commented Nov 20, 2023

Codecov Report

Attention: 936 lines in your changes are missing coverage. Please review.

Comparison is base (29c7b31) 13.02% compared to head (96e74fd) 13.13%.
Report is 122 commits behind head on 4.18.

❗ Current head 96e74fd differs from pull request most recent head 9af213d. Consider uploading reports for the commit 9af213d to get more accurate results

Files Patch % Lines
...n/java/com/cloud/network/IpAddressManagerImpl.java 11.01% 105 Missing ⚠️
...ava/com/cloud/upgrade/dao/Upgrade41800to41810.java 1.05% 94 Missing ⚠️
...java/com/cloud/agent/manager/AgentManagerImpl.java 0.00% 66 Missing ⚠️
...src/main/java/com/cloud/upgrade/GuestOsMapper.java 20.73% 65 Missing ⚠️
...ud/hypervisor/kvm/storage/KVMStorageProcessor.java 0.00% 47 Missing and 1 partial ⚠️
...ain/java/com/cloud/api/query/QueryManagerImpl.java 0.00% 35 Missing ⚠️
...n/java/com/cloud/vm/VirtualMachineManagerImpl.java 9.09% 29 Missing and 1 partial ⚠️
...ain/java/com/cloud/storage/dao/GuestOSDaoImpl.java 0.00% 28 Missing ⚠️
...in/java/com/cloud/server/ManagementServerImpl.java 0.00% 24 Missing ⚠️
.../apache/cloudstack/vm/UnmanagedVMsManagerImpl.java 48.93% 19 Missing and 5 partials ⚠️
... and 59 more
Additional details and impacted files
@@             Coverage Diff              @@
##               4.18    #8252      +/-   ##
============================================
+ Coverage     13.02%   13.13%   +0.10%     
- Complexity     9032     9141     +109     
============================================
  Files          2720     2720              
  Lines        257080   257710     +630     
  Branches      40088    40172      +84     
============================================
+ Hits          33476    33838     +362     
- Misses       219400   219582     +182     
- Partials       4204     4290      +86     

☔ View full report in Codecov by Sentry.

@DaanHoogland
Contributor

Thanks @BryanMLima , for picking this up and for the extensive explanation. I have two questions:

  • where you say 6 cores at 2 GHz, the shares value will be 12000 I would say huh as it would either be 12 or 12.000.000.000. What is the factorial used there? (I probably just need a link to a definition)
  • I see no considerations on live systems, i.e. upgrades. Please, expand on that. Will it have any consequence, or will it be seamless?

regards,

@DaanHoogland
Contributor

Oh, and number 3

  • How will this work in mixed systems, with old hosts using cgroups v1 and newer hosts using cgroups v2

@BryanMLima
Contributor Author

> Thanks @BryanMLima , for picking this up and for the extensive explanation. I have two questions:
>
> * where you say `6 cores at 2 GHz, the shares value will be 12000` I would say `huh` as it would either be 12 or 12.000.000.000. What is the factorial used there? (I probably just need a link to a definition)
>
> * I see no considerations on live systems, i.e. upgrades. Please, expand on that. Will it have any consequence, or will it be seamless?
>
> regards,

@DaanHoogland, regarding the first question, ACS calculates the shares by multiplying the frequency by the number of cores, both specified in the compute offering; this is done in the method LibvirtComputingResource#createCpuTuneDef. Therefore, 6 cores * 2000 MHz (2 GHz) results in 12,000 shares. This is the current behaviour of ACS, and the PR does not change it for cgroups v1; this PR only changes the way ACS calculates the shares for hosts that use cgroups v2. OpenStack [1] solved this same problem by not setting the shares value of VMs at all, allowing advanced users to set it deliberately. TBH, I agree with this approach; setting the shares value like ACS does is misleading, as a more experienced user may question how ACS limits the CPU frequency of VMs. AFAIK, no hypervisor does this: the VM will display the host's CPU frequency, and hypervisors will only limit the CPU access time (and burst limits) of a VM to “simulate” the specified frequency.

About the second question, I think I did not understand it fully; could you add more details? By live systems, do you mean hosts or VMs?
I am assuming you mean upgrading a host from cgroup v1 to cgroup v2 (or downgrading from cgroup v2 to cgroup v1). The core of this strategy happens in LibvirtComputingResource#initialize(), which is called when the cloudstack-agent service is (re)started. Changing the cgroup version requires rebooting the system; thus, when the cloudstack-agent service starts, ACS will check the cgroup version (stat -fc %T /sys/fs/cgroup/) and set its maximum CPU shares capacity accordingly. With this, ACS will always know the current cgroup version utilized by the host.
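For context, a minimal sketch of such a check (assumed names, not the actual agent code) is shown below; on cgroup v2 hosts the filesystem type reported for /sys/fs/cgroup is cgroup2fs, while cgroup v1 hosts typically report tmpfs:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class CgroupVersionProbe {
    // Runs "stat -fc %T /sys/fs/cgroup/" and returns true when the host mounts
    // cgroup v2 ("cgroup2fs"); cgroup v1 hosts typically report "tmpfs".
    public static boolean isCgroupV2() throws Exception {
        Process p = new ProcessBuilder("stat", "-fc", "%T", "/sys/fs/cgroup/").start();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String fsType = reader.readLine();
            p.waitFor();
            return "cgroup2fs".equals(fsType);
        }
    }
}
```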

> Oh, and number 3
>
> * How will this work in mixed systems, with old hosts using cgroups v1 and newer hosts using cgroups v2

This PR already addresses the migration of VMs between hosts with different versions, as the shares value is recalculated in this process considering the VM's destination host. Thus, two VMs with the exact same compute offering will have different shares values on cgroups v1 and cgroups v2 hosts. The shares value is only a proportional weight; as long as all VMs on the same host are on the same scale, the CPU time will be distributed accordingly. If the shares value is not set in the domain XML for libvirt (this never happens in ACS, it is always set), it will use the OS default value, which, for cgroup v2, is 100 [2]; thus, the default behaviour for processes in the same cgroup is to have proportional CPU access time.

This PR, however, does not address updating the shares of VMs that are already running on cgroup v2 hosts; restarting, migrating or scaling the VM is required for the new value to be applied.

Footnotes

  1. https://review.opendev.org/c/openstack/nova/+/824048?tab=comments

  2. https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html#weights

@DaanHoogland
Contributor

thanks @BryanMLima good work.

@weizhouapache (Member) left a comment

code lgtm

not tested yet

@weizhouapache changed the title from "Add support for cgroupv2" to "kvm: Add support for cgroupv2" on Nov 27, 2023
@shwstppr
Contributor

shwstppr commented Dec 1, 2023

@blueorangutan package

@blueorangutan

@shwstppr a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el7 ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 7890

@DaanHoogland
Contributor

@blueorangutan test

@blueorangutan

@DaanHoogland a [SL] Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests

@rohityadavcloud
Member

@blueorangutan package

@blueorangutan

@rohityadavcloud a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@mlsorensen
Contributor

This makes sense from an implementation perspective. Have not dug into the code in great detail.

@mlsorensen (Contributor) left a comment

The shares are all relative, so as long as we calculate some proportional scale that is compatible with the KVM host, it should work.

@DaanHoogland removed their request for review on December 6, 2023 08:36
@shwstppr (Contributor) left a comment

Code LGTM, will need testing.

@BryanMLima @DaanHoogland @rohityadavcloud @weizhouapache should this be considered for 4.19? I think we should

@DaanHoogland
Contributor

> Code LGTM, will need testing.
>
> @BryanMLima @DaanHoogland @rohityadavcloud @weizhouapache should this be considered for 4.19? I think we should

agreed, or even for the 4.18 branch

@weizhouapache added this to the 4.19.0.0 milestone on Dec 6, 2023
@weizhouapache
Member

> > Code LGTM, will need testing.
> > @BryanMLima @DaanHoogland @rohityadavcloud @weizhouapache should this be considered for 4.19? I think we should
>
> agreed, or even for the 4.18 branch

yes, @DaanHoogland @shwstppr
4.19 + optionally 4.18

@BryanMLima changed the base branch from main to 4.18 on December 6, 2023 20:28
@weizhouapache
Member

@blueorangutan package

@blueorangutan

@weizhouapache a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✖️ el7 ✖️ el8 ✖️ el9 ✖️ debian ✖️ suse15. SL-JID 7956

@shwstppr closed this on Dec 7, 2023
@shwstppr reopened this on Dec 7, 2023
@vladimirpetrov
Contributor

@blueorangutan package

@blueorangutan

@vladimirpetrov a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✖️ el7 ✖️ el8 ✖️ el9 ✖️ debian ✖️ suse15. SL-JID 7964

@DaanHoogland
Contributor

@BryanMLima can you check https://github.com/apache/cloudstack/actions/runs/7124755734/job/19399504808?pr=8252#step:7:15647 ?

@vladimirpetrov
Contributor

@blueorangutan package

@blueorangutan

@vladimirpetrov a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el7 ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 7973

@DaanHoogland
Contributor

@blueorangutan test alma9 kvm-alma9

@blueorangutan

@DaanHoogland a [SL] Trillian-Jenkins test job (alma9 mgmt + kvm-alma9) has been kicked to run smoke tests

@blueorangutan

[SF] Trillian test result (tid-8523)
Environment: kvm-alma9 (x2), Advanced Networking with Mgmt server a9
Total time taken: 52185 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr8252-t8523-kvm-alma9.zip
Smoke tests completed. 107 look OK, 2 have errors, 0 did not run
Only failed and skipped tests results shown below:

Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_02_upgrade_kubernetes_cluster | `Failure` | 482.08 | test_kubernetes_clusters.py
test_01_migrate_VM_and_root_volume | `Error` | 94.72 | test_vm_life_cycle.py
test_02_migrate_VM_with_two_data_disks | `Error` | 57.01 | test_vm_life_cycle.py

@BryanMLima
Contributor Author

@blueorangutan package

@blueorangutan

@BryanMLima a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el7 ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 8016

@shwstppr
Contributor

@blueorangutan test

@blueorangutan

@shwstppr a [SL] Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests

@vladimirpetrov (Contributor) left a comment

LGTM based on manual testing against 4.19 and 4.18.

Testing in an Ubuntu 22 environment, with a host with 6 CPUs x 2000 MHz, the calculated shares value is 9524 (overprovisioning factor 1). Without the fix it is 12000 and the deployment fails.

Just one clarification - the actual formula for calculating shares is

shares = NUMBER_CPUS * MIN_SPEED_OR_SPEED (SPEED is a legacy parameter for compatibility with ACS 4.0/4.1)

where MIN_SPEED is calculated like this:

int minspeed = (int)(offering.getSpeed() / (divideCpuByOverprovisioning ? vmProfile.getCpuOvercommitRatio() : 1));

So the overprovisioning factor is also used in the equation.
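As a small illustrative sketch (hypothetical names, not the exact ACS code path), the offering speed is divided by the CPU overprovisioning factor before Equation 1 is applied:

```java
public class OverprovisionedShares {
    // Equation 1 with the overprovisioning factor applied to the offering speed,
    // mirroring the minspeed calculation quoted above.
    static int requestedShares(int cores, int offeringSpeedMhz, double cpuOvercommitRatio) {
        int minSpeed = (int) (offeringSpeedMhz / cpuOvercommitRatio);
        return cores * minSpeed;
    }

    public static void main(String[] args) {
        // Reviewer's environment: 6 CPUs x 2000 MHz, overprovisioning factor 1
        System.out.println(requestedShares(6, 2000, 1.0)); // prints 12000
    }
}
```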

@shwstppr
Contributor

Let's merge this once integration test results are in

@shwstppr
Contributor

Test results from the backend:

[SF] Trillian test result (tid-8550)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 44744 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr8252-t8550-kvm-centos7.zip
Smoke tests completed. 108 look OK, 1 have errors, 0 did not run
Only failed and skipped tests results shown below:


Test | Result | Time (s) | Test File
--- | --- | --- | ---
test_07_deploy_kubernetes_ha_cluster | `Failure` | 3618.04 | test_kubernetes_clusters.py
test_08_upgrade_kubernetes_ha_cluster | `Failure` | 0.04 | test_kubernetes_clusters.py
test_09_delete_kubernetes_ha_cluster | `Failure` | 0.04 | test_kubernetes_clusters.py
test_10_vpc_tier_kubernetes_cluster | `Failure` | 50.89 | test_kubernetes_clusters.py
ContextSuite context=TestKubernetesCluster>:teardown | `Error` | 117.53 | test_kubernetes_clusters.py

@shwstppr merged commit 3bb318b into apache:4.18 on Dec 13, 2023
23 of 25 checks passed
dhslove pushed a commit to ablecloud-team/ablestack-cloud that referenced this pull request Dec 15, 2023