
Add numa to dedicated #189

Merged: 2 commits into oVirt:master from add-numa-to-dedicated on Mar 29, 2022
Conversation

@ljelinkova (Collaborator) commented Mar 25, 2022

When using the dedicated CPU policy together with NUMA, the NUMA pinning should be created when the VM starts and cleared when the VM shuts down.
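A minimal sketch of that intended flow (all names here are hypothetical, not the engine's actual command classes):

```java
import java.util.Map;
import java.util.Set;

// Sketch only: illustrates the pin-on-start / clear-on-shutdown flow
// for the dedicated CPU policy. All names are hypothetical.
public class DedicatedNumaPinningFlow {

    /** Hypothetical persistence facade for the generated pinning. */
    interface NumaPinningStore {
        void savePinning(String vmId, Map<Integer, Set<Integer>> vNodeToPNodes);
        void clearPinning(String vmId);
    }

    private final NumaPinningStore store;

    public DedicatedNumaPinningFlow(NumaPinningStore store) {
        this.store = store;
    }

    /** On VM start: persist the pinning generated for the dedicated policy. */
    public void onVmStart(String vmId, Map<Integer, Set<Integer>> vNodeToPNodes) {
        store.savePinning(vmId, vNodeToPNodes);
    }

    /** On VM shutdown: clear the generated pinning, restoring the initial state. */
    public void onVmShutdown(String vmId) {
        store.clearPinning(vmId);
    }
}
```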

@ljelinkova (Collaborator, Author) commented:

I implemented the pinning so that it is stored in the database, but I wonder if it would not be better to create a "runtime" pinning, similar to the "runtime" CPU pinning mentioned in #180.

There are two reasons:

  • the current implementation allows one virtual node to be pinned to multiple physical nodes; however, the UI cannot display that
  • with the addition of the validation that NUMA nodes cannot be manually pinned together with the dedicated policy, it will be impossible to edit a running VM, because the running VM's nodes will be pinned when using the dedicated policy

@liranr23 (Member) commented:

> I implemented the pinning so that it is stored in the database, but I wonder if it would not be better to create a "runtime" pinning, similar to the "runtime" CPU pinning mentioned in #180.
>
> There are two reasons:
>
> • the current implementation allows one virtual node to be pinned to multiple physical nodes; however, the UI cannot display that

I have never done that, and I am unsure whether we want that at all; I am unaware of the pros and cons of this. Out of curiosity, does it pass NumaValidator::checkVmNumaNodesIntegrity()?

> • with the addition of the validation that NUMA nodes cannot be manually pinned together with the dedicated policy, it will be impossible to edit a running VM, because the running VM's nodes will be pinned when using the dedicated policy

Here we probably need to loosen the validations to allow the change somehow (based on VM status?).

A general answer about making the NUMA configuration dynamic:
We already thought about this when implementing resize-and-pin. It is really costly, as the NUMA configuration is built upon multiple DB tables and many fields; it is not a single-field addition to a dynamic table.
However, it is probably possible to do and then serialize/deserialize it.
Since we know the vNUMA nodes exist, we don't add or remove any, and we know they are not pinned anywhere (we do the pinning and unpinning ourselves), so I don't really see the point of not using the static data.
But since we know the initial state and leave it back in the same state, I think it's OK to use it as is.

@ahadas (Member) commented Mar 27, 2022

> I implemented the pinning so that it is stored in the database, but I wonder if it would not be better to create a "runtime" pinning, similar to the "runtime" CPU pinning mentioned in #180.
>
> There are two reasons:
>
> • the current implementation allows one virtual node to be pinned to multiple physical nodes; however, the UI cannot display that

I think that's not a justified reason to change it, as libvirt seems to accept such a configuration.
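For context, a one-virtual-to-many-physical mapping is expressible; a minimal sketch of such a mapping, with the roughly corresponding libvirt numatune XML in a comment (illustrative only):

```java
import java.util.Map;
import java.util.Set;

// Illustrative only: one vNUMA cell pinned to two host NUMA nodes.
public class OneToManyNumaPinning {
    public static void main(String[] args) {
        // vNode 0 -> pNodes {0, 1}. In libvirt domain XML this would
        // roughly correspond to:
        //   <numatune>
        //     <memnode cellid="0" mode="strict" nodeset="0-1"/>
        //   </numatune>
        Map<Integer, Set<Integer>> pinning = Map.of(0, Set.of(0, 1));
        System.out.println("vNode 0 -> pNodes " + pinning.get(0));
    }
}
```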

> I have never done that, and I am unsure whether we want that at all; I am unaware of the pros and cons of this. Out of curiosity, does it pass NumaValidator::checkVmNumaNodesIntegrity()?

> • with the addition of the validation that NUMA nodes cannot be manually pinned together with the dedicated policy, it will be impossible to edit a running VM, because the running VM's nodes will be pinned when using the dedicated policy

> Here we probably need to loosen the validations to allow the change somehow (based on VM status?).

That's one option: to say that when a VM is not down and has the dedicated policy, the NUMA pinning represents generated rules; otherwise, those rules, if they exist, were specified manually.
But I like the concept of separating out the runtime NUMA pinning, because then we won't find ourselves operating on the vnodes table in all those places related to the VM lifecycle.
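A minimal sketch of that status-based option (the enums are simplified stand-ins for the engine's own types, not the actual validation code):

```java
// Sketch only: one way to relax the "no manual NUMA pinning with the
// dedicated policy" validation based on VM status, as suggested above.
public class NumaPinningValidationSketch {

    enum CpuPinningPolicy { NONE, MANUAL, RESIZE_AND_PIN_NUMA, DEDICATED }
    enum VmStatus { DOWN, UP, POWERING_UP, POWERING_DOWN }

    /**
     * Returns true when an existing NUMA pinning should be treated as
     * engine-generated (and therefore not rejected as manual pinning).
     */
    static boolean pinningIsGenerated(CpuPinningPolicy policy, VmStatus status) {
        return policy == CpuPinningPolicy.DEDICATED && status != VmStatus.DOWN;
    }

    public static void main(String[] args) {
        // Running VM with dedicated policy: pinning is generated, don't reject.
        System.out.println(pinningIsGenerated(CpuPinningPolicy.DEDICATED, VmStatus.UP));   // true
        // Down VM with dedicated policy: any pinning must have been manual.
        System.out.println(pinningIsGenerated(CpuPinningPolicy.DEDICATED, VmStatus.DOWN)); // false
    }
}
```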

> A general answer about making the NUMA configuration dynamic: We already thought about this when implementing resize-and-pin. It is really costly, as the NUMA configuration is built upon multiple DB tables and many fields; it is not a single-field addition to a dynamic table. However, it is probably possible to do and then serialize/deserialize it. Since we know the vNUMA nodes exist, we don't add or remove any, and we know they are not pinned anywhere (we do the pinning and unpinning ourselves), so I don't really see the point of not using the static data. But since we know the initial state and leave it back in the same state, I think it's OK to use it as is.

Back then we thought of serializing the entire structure we have in the database, which was a more complicated scenario than this. IIUC, what Lucia has in mind is representing just the relation between the virtual and physical resources that already exist in the database (and, let's say, assuming that the tune is STRICT in that case). I guess it could have the same syntax as the CPU pinning field, no?
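If it did reuse that grammar (an assumption, not the engine's actual format), the runtime NUMA pinning could be serialized into a single string field such as 0#0-1_1#2, meaning vNode 0 is pinned to pNodes 0-1 and vNode 1 to pNode 2. A minimal parsing sketch under that assumption:

```java
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Sketch only: NUMA pinning serialized with the same grammar as the CPU
// pinning field ("vNode#pNodeSet" pairs joined by '_', where the set may
// be a single node, a comma list, or a range). Assumed format, not the
// engine's actual one.
public final class NumaPinningField {

    static Map<Integer, Set<Integer>> parse(String field) {
        Map<Integer, Set<Integer>> pinning = new LinkedHashMap<>();
        for (String pair : field.split("_")) {
            String[] parts = pair.split("#");
            int vNode = Integer.parseInt(parts[0]);
            Set<Integer> pNodes = new LinkedHashSet<>();
            for (String token : parts[1].split(",")) {
                String[] range = token.split("-");
                int lo = Integer.parseInt(range[0]);
                int hi = range.length > 1 ? Integer.parseInt(range[1]) : lo;
                for (int p = lo; p <= hi; p++) {
                    pNodes.add(p);
                }
            }
            pinning.put(vNode, pNodes);
        }
        return pinning;
    }

    public static void main(String[] args) {
        // vNode 0 -> pNodes {0, 1}; vNode 1 -> pNode {2}
        System.out.println(parse("0#0-1_1#2"));
    }
}
```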

I'm just afraid we're doing this a bit too late; we also need to change the pinning logic of the dedicated CPUs with the vNUMA topology, right?

@ljelinkova (Collaborator, Author) commented:

/ost

Commits:

  • The numa_cell_id is reported by the VDS as part of cpuTopology; this patch also adds it to VdsCpuUnit.
  • When using the dedicated CPU policy together with NUMA, the NUMA pinning should be created when the VM starts and cleared when the VM shuts down.
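A simplified sketch of what carrying the NUMA cell id on a CPU unit could look like (illustrative only; the real VdsCpuUnit has more fields and behavior):

```java
// Simplified sketch of a CPU unit that also carries its NUMA cell id,
// mirroring the idea of the first commit; not the actual VdsCpuUnit source.
public class VdsCpuUnitSketch {
    private final int socket;
    private final int core;
    private final int cpu;
    private final int numaCellId; // reported by the host as part of cpuTopology

    public VdsCpuUnitSketch(int socket, int core, int cpu, int numaCellId) {
        this.socket = socket;
        this.core = core;
        this.cpu = cpu;
        this.numaCellId = numaCellId;
    }

    public int getNumaCellId() {
        return numaCellId;
    }
}
```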
@ljelinkova (Collaborator, Author) commented:

/ost

@ahadas merged commit ff4fd22 into oVirt:master on Mar 29, 2022.
@ljelinkova deleted the add-numa-to-dedicated branch on April 1, 2022.