
feat: support vm live migration #1001

Merged: 1 commit, Sep 8, 2021
Conversation

fanriming
Member

@fanriming fanriming commented Sep 1, 2021

What type of PR is this?

Examples of user facing changes:

  • API changes

How does Kube-OVN support KubeVirt VM live migration

  1. The default NIC is used only for live migration; the VM NIC is attached through Multus-CNI.
  2. If you need to assign a fixed IP address to a VM, use the `<attach>.<ns>.ovn.kubernetes.io/allow_live_migration: 'true'` annotation to avoid IP conflict errors. You should also set the subnet parameter `disableGatewayCheck: true`.
  3. If you use KubeVirt's DHCP configuration for the VM network, you need to solve the problem of the default route: use the `<attach>.<ns>.ovn.kubernetes.io/default_route: 'true'` annotation to select the NIC that carries the default route.
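The steps above can be sketched as a manifest fragment. This is a minimal sketch, not taken from the PR: the subnet name `vm-subnet`, the namespace `default`, and the attachment name `vmnet` are hypothetical placeholders for the `<attach>` and `<ns>` fields, and the CIDR and fixed IP are made up for illustration.

```yaml
# Hypothetical Subnet backing the attached VM network.
# disableGatewayCheck is the subnet parameter mentioned in step 2.
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: vm-subnet            # placeholder name
spec:
  cidrBlock: 10.17.0.0/16    # example CIDR
  disableGatewayCheck: true
---
# Annotations on the VM pod template, with <attach> = "vmnet" and
# <ns> = "default" (both hypothetical). The pod's default NIC is left
# for live migration; the VM NIC comes from the Multus attachment.
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: default/vmnet
    vmnet.default.ovn.kubernetes.io/allow_live_migration: "true"
    vmnet.default.ovn.kubernetes.io/default_route: "true"
```

If a fixed address is needed, Kube-OVN's attachment-scoped `ip_address` annotation (e.g. `vmnet.default.ovn.kubernetes.io/ip_address`) can pin it on the same pod template.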

@SkalaNetworks
Contributor

SkalaNetworks commented Oct 3, 2024

I'm copying my message from Slack here so that everybody can see it, because it's probably a question other people have thought about:

I'm trying to get more information on how live migration is implemented in Kube-OVN for KubeVirt VMs.
@Mengxin Liu
I believe you implemented most of the features related to it? My question is about the need for multiple interfaces on the virt-handler pods: one for migrations with the default VPC/Subnet, and another with our Multus network (which can be one connected to a NAT gateway, for example). This is related to PR #1001.
It says:
How does kube-ovn support Kubevirt vm live migration

The default NIC is only used for live migration, and the VM NIC is attached through Multus-cni.
If you need to assign fixed IP addresses to VM, use `<attach>.<ns>.ovn.kubernetes.io/allow_live_migration: 'true'` annotation to avoid IP conflict errors. You should also set the subnet parameter `disableGatewayCheck: true`.
If you use kubevirt DHCP configuration vm network, need to solve the problem of the default route, we can use `<attach>.<ns>.ovn.kubernetes.io/default_route: 'true'` annotation to select the default routing NIC.

I don't get all of that. I did this configuration and it does work. But I also created a Pod with only one Multus interface managed by Kube-OVN (which is now my default NIC), and it works just as well and is less complicated. So what's the point of the first network? How is it "used for live migrations"?

Concerning the second point, I get that allow_live_migration ensures the IP stays static during migrations. I don't get why disableGatewayCheck needs to be set to true; it works very well with it on false.

I'm asking this question because I want to use Cilium CNI chaining to handle network policies on my Multus interface (which is connected to a NAT gateway), but Cilium currently doesn't handle that very well if it's not the primary interface (basically, it will start monitoring eth0 anyway). I can get that to work if there's only one interface on my VM, but not if there are two, because eth0 is bound to ovn-cluster and not to my Multus interface. Am I missing something? Why is my setup working? Is it less reliable because it's missing that interface?

@fanriming, if you get to see this: I would like to understand the tight integration between Kube-OVN and KubeVirt so I can document it better. There's currently nearly no documentation about it, and setting up live migration is a must.
