
Custom concurrency when migrating multiple VMs via XO #6065

Closed
NielsH opened this issue Dec 16, 2021 · 7 comments

NielsH commented Dec 16, 2021

We have a pool with local storage, and when we perform updates we want to perform live migrations of all VMs on a particular host to a different host.

In XOA, we can display all VMs from a host, select them, and migrate them to a different host (within the same pool).
The issue here is that when we migrate multiple VMs at the same time, we notice longer downtime in the last stage of the migration (I think it's the final pause before the VM moves over from the old host to the new one).
It goes a lot better when we migrate only 1 VM at a time.

For this reason, we'd like to set a custom concurrency when migrating multiple VMs.
I.e. in the dialog we get when we select Migrate:
(screenshot of the VM migration dialog)

An option to set concurrency to 1 would be very much appreciated so we can just select everything and the system will do its thing by itself one VM at a time.
Then a few hours later we can check and reboot the hypervisor when it is empty 🙂

Would something like that be possible?

FWIW I searched for the issue on GitHub, but the closest I could find was https://github.com/vatesfr/xen-orchestra/pull/4743/files, which was related to backup concurrency, not live migrations.

@olivierlambert (Member)

Hi @NielsH

I completely get it for massive grouped VM migrations. However, you said that you were using this to "empty" a host. Why not use host evacuate? It does exactly that (one by one). I suppose it's because you don't have a shared SR?


NielsH commented Dec 17, 2021

Hi @olivierlambert

Yes, the reason is indeed because we don't have a shared SR. We use local storage, and the host evacuate does not work for this.

@olivierlambert (Member)

Well, in that case, your requirement might be more for an "intelligent" host evacuate (i.e. done at the XO level if the basic evacuate fails). Anyway, we'll add this to the backlog :) Ping @marcungeschikts


NielsH commented Dec 17, 2021

Yes, that's true. The underlying use case is Rolling Pool Updates: we'd love to use them, but I understood earlier that they are too complex to implement with local storage (at least for the time being). So whenever there are updates, we manually migrate VMs to an empty host (we always keep one empty as a spare) and then reboot the hypervisor. Repeat for every host and the updates are done :)


julien-f commented Jan 6, 2022

Hi,

We've added a configurable limit on the number of concurrent VM migrations. Unfortunately this will not fix your use case, because the Rolling Pool Updates feature is based on host.evacuate, which is handled by the XCP-ng/XenServer host itself and not by Xen Orchestra.
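For reference, limiting parallel migrations like this boils down to a bounded async worker pool. Here's a minimal sketch in JavaScript with hypothetical names (`runWithConcurrency`, `migrateVm` are illustrations, not the actual xen-orchestra implementation):

```javascript
// Hypothetical sketch of a bounded worker pool, NOT the actual
// xen-orchestra code. `items` are processed by `task` with at most
// `limit` tasks in flight at any moment.
async function runWithConcurrency(items, limit, task) {
  const results = new Array(items.length);
  let next = 0; // shared cursor: each worker claims the next index

  async function worker() {
    while (next < items.length) {
      const i = next++;
      results[i] = await task(items[i]);
    }
  }

  // Spawn at most `limit` workers; each drains the queue sequentially.
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    () => worker()
  );
  await Promise.all(workers);
  return results;
}

// With limit = 1, VMs would migrate strictly one at a time, e.g.:
// await runWithConcurrency(vms, 1, vm => migrateVm(vm, targetHost))
```

With `limit = 1` this degenerates to a simple sequential loop, which matches the "one VM at a time" request above.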


NielsH commented Jan 6, 2022

Awesome, thank you!
No worries, I realise the rolling pool upgrades are more complex to change in this manner.

However, this will already make things a lot more manageable for us 👍


julien-f commented Sep 9, 2022

Closing, feel free to comment/reopen if necessary.

@julien-f julien-f closed this as completed Sep 9, 2022