Add "activateInterface" option to bridge plugin CNI #951

Open
AlonaKaplan opened this issue Oct 2, 2023 · 3 comments

Comments

@AlonaKaplan
AlonaKaplan commented Oct 2, 2023

Add "activateInterface" option to bridge plugin CNI.
The option will control whether the defined interface will be brought up by the plugin. The default will be true.
This is required in case some other CNI in the plugin chain wants to modify the interface before it is brought up and made visible to the network.

For example, suppose I want to prevent my interface from sending any IPv6 traffic. I can use another CNI in the chain to achieve this by setting net.ipv6.conf.all.disable_ipv6 to 1.
But since the bridge CNI will first activate the interface, it may send IPv6 data (NDP) on activation, before the next CNI disables it.
Having activateInterface=false will solve this issue: the bridge CNI will create and configure the interface, and the next CNI in the chain will apply the sysctl and activate the interface.
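
For illustration, a chained CNI configuration using the proposed option could look roughly like the sketch below. Note this is only a sketch of the intended usage: the activateInterface field is the knob proposed in this issue and does not exist in the bridge plugin today, the bridge name is a placeholder, and (per the scenario above) the chained plugin would also be responsible for activating the interface after applying the sysctl. The tuning plugin and its sysctl map are existing functionality.

{
  "cniVersion": "0.4.0",
  "name": "bridge-deferred-up",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "activateInterface": false
    },
    {
      "type": "tuning",
      "sysctl": {
        "net.ipv6.conf.all.disable_ipv6": "1"
      }
    }
  ]
}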

@maiqueb
Contributor

maiqueb commented Oct 9, 2023

Without going into whether the ask makes sense or not, I'd say the flag should be the opposite - i.e. by default have the interface enabled, thus keeping the existing behavior as "the" default.

@AlonaKaplan
Author

> Without going into whether the ask makes sense or not, I'd say the flag should be the opposite - i.e. by default have the interface enabled, thus keeping the existing behavior as "the" default.

The ticket says the default will be true.

@maiqueb
Contributor

maiqueb commented Oct 11, 2023

> Without going into whether the ask makes sense or not, I'd say the flag should be the opposite - i.e. by default have the interface enabled, thus keeping the existing behavior as "the" default.

> The ticket says the default will be true.

Oh right!

FWIW, you don't say what the name of the attribute is, and thus "The option will control whether the defined interface will be brought up by the plugin. The default will be true." can pretty much mean anything - i.e. a disableInterface knob would also control whether the defined interface is brought up by the plugin, and if it defaulted to true, you'd get a disabled interface.

I'm happy we agree the default should be the "current behavior", but I still think the issue could be a bit more explicit; near the end of the description you do propose a name for the knob; could you maybe introduce it at the beginning? IMO that's more explicit / less confusing.

ormergi added a commit to ormergi/kubevirt that referenced this issue Jan 23, 2024
In a scenario where a VM with an interface using bridge binding and an explicit MAC
address (e.g. set by a human or by KubeMacPool) on a cluster with IPv6 enabled (dual stack
or IPv6 single stack) is migrated, we observe packet drops in the inbound traffic
to the VM immediately after the migration target pod starts.
These packets get routed to the destination node before the migration
completes.

The root cause is that when the migration target pod is created, an IPv6 "Neighbor
Solicitation" and "Neighbor Advertisement" are sent automatically by the kernel.
The tables of the switches at the endpoints (e.g. the migration destination node)
get updated, and eventually the traffic is routed to the migration destination before
the migration is completed [1].

Following the bridge CNI RFE to disable the container interface so that it does not
send IPv6 NS/NA [2], explicitly set the container interface state to UP for bridge
binding interfaces.

Fixes: https://issues.redhat.com/browse/CNV-28040

[1] https://bugzilla.redhat.com/show_bug.cgi?id=2186372#c6
[2] containernetworking/plugins#951

Signed-off-by: Or Mergi <ormergi@redhat.com>
ormergi added a commit to ormergi/kubevirt that referenced this issue Jan 29, 2024
Migrating a VM with secondary interfaces that use bridge binding may cause long
periods of traffic disruption. This occurs when the interface is defined with an
explicit MAC address (manually or automatically through KubeMacPool) on nodes
that have IPv6 enabled.

During the migration, frames may be forwarded to the destination node while the
domain is active on the source and still not running at the destination.

When the migration destination pod is created, an IPv6 NS (Neighbor Solicitation)
and NA (Neighbor Advertisement) are sent automatically by the kernel.
The tables of the switches at the endpoints (e.g. the migration destination node)
get updated, and the traffic is forwarded to the migration destination before
the migration is completed [1].

Assuming the bridge CNI used to connect the pod to the node can create the pod
interface in a "link-down" state [2], the IPv6 NS/NA packets are avoided.
However, Kubevirt then needs to explicitly set the interfaces to "link-up" when
it later processes them.

As part of the pod network configuration calculation, Kubevirt now explicitly
asks to set the relevant interfaces for the bridge binding as "UP".

Fixes: https://issues.redhat.com/browse/CNV-28040

[1] https://bugzilla.redhat.com/show_bug.cgi?id=2186372#c6
[2] containernetworking/plugins#951

Signed-off-by: Or Mergi <ormergi@redhat.com>
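
As a rough illustration of the "set the interface UP" step described in this commit message, the minimal sketch below brings a container interface up from within the pod's network namespace using the netlink library. This is not Kubevirt's actual implementation, and the namespace path and interface name are placeholder values; only the ns and netlink library calls are existing API.

package main

import (
	"fmt"

	"github.com/containernetworking/plugins/pkg/ns"
	"github.com/vishvananda/netlink"
)

// setLinkUp enters the given network namespace and sets the named
// interface's administrative state to UP. Sketch only; the real logic
// lives in Kubevirt's pod network configuration code.
func setLinkUp(netnsPath, ifName string) error {
	return ns.WithNetNSPath(netnsPath, func(_ ns.NetNS) error {
		link, err := netlink.LinkByName(ifName)
		if err != nil {
			return fmt.Errorf("failed to look up %q: %w", ifName, err)
		}
		return netlink.LinkSetUp(link)
	})
}

func main() {
	// Placeholder values; in a real pod these come from the CNI runtime.
	if err := setLinkUp("/var/run/netns/example", "eth0"); err != nil {
		fmt.Println(err)
	}
}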