
MTU in underlay pods should match the physical network's MTU #2837

Closed
patriziobassi opened this issue May 19, 2023 · 4 comments

Comments

@patriziobassi

Expected Behavior

When using a provider network backed by a physical interface with a certain MTU, the same MTU (or the same minus some headers, e.g. 100 bytes) should be injected into the pods by the kubelet.

Actual Behavior

Pods have an eth0 interface with the MTU set to 1500.

Steps to Reproduce the Problem

  1. Create an underlay subnet backed by the ens8 interface; in this case the MTU is 9000:

       ip l show dev ens8 | grep mtu
       3: ens8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq master ovs-system state UP mode DEFAULT group default qlen 1000

  2. Create a pod on that subnet (see the sketch below) and check its MTU with ip l inside the pod.

It will return 1500. This may affect performance and reliability of transports (big frames may be dropped on the pod side when they shouldn't be).
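
For reference, a minimal sketch of such a pod; the subnet name subnet-underlay and the pod name mtu-test are hypothetical, and ovn.kubernetes.io/logical_switch is the kube-ovn annotation for pinning a pod to a specific subnet:

    # Hypothetical names: subnet "subnet-underlay", pod "mtu-test".
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: mtu-test
      annotations:
        ovn.kubernetes.io/logical_switch: subnet-underlay
    spec:
      containers:
      - name: test
        image: busybox
        command: ["sleep", "infinity"]
    EOF

    # Inspect the pod's interface MTU
    kubectl exec mtu-test -- ip link show eth0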

Additional Info

  • Kubernetes version: 1.27

  • kube-ovn version: 1.11.5

  • Operating system/kernel version: Ubuntu 22.04

@zhangzujian
Member

Did you change ens8's MTU after kube-ovn-cni started up?

@patriziobassi
Author

patriziobassi commented May 19, 2023

@zhangzujian No, I didn't.

Actually, the behaviour seems to be the opposite: on an overlay network, when deploying a pod I get

42: eth0@if43: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8900 qdisc noqueue state UP mode DEFAULT group default
    link/ether 00:00:00:e5:2e:1d brd ff:ff:ff:ff:ff:ff link-netnsid 0
    alias 73806bd11d52_c

So in overlay mode it seems the pod gets the correct MTU (where does it come from? I guess it's inherited from the ovn0 interface).

ip l | grep mtu
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
3: ens8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq master ovs-system state UP mode DEFAULT group default qlen 1000
4: ens9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default
6: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
7: mirror0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8900 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
8: ovn0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8900 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
9: genev_sys_6081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
10: br-int: <BROADCAST,MULTICAST> mtu 8900 qdisc noop state DOWN mode DEFAULT group default qlen 1000
11: br-provnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
35: 708d6656d939_h@if34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8900 qdisc noqueue master ovs-system state UP mode DEFAULT group default qlen 1000
37: cd0d1b1be8a7_h@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8900 qdisc noqueue master ovs-system state UP mode DEFAULT group default qlen 1000
39: a429ae9fb36b_h@if38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8900 qdisc noqueue master ovs-system state UP mode DEFAULT group default qlen 1000
41: ef8639a94caf_h@if40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8900 qdisc noqueue master ovs-system state UP mode DEFAULT group default qlen 1000
43: 73806bd11d52_h@if42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8900 qdisc noqueue master ovs-system state UP mode DEFAULT group default qlen 1000
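
As an aside, the 8900 values above are consistent with kube-ovn deriving the overlay MTU from the 9000-byte NIC MTU minus room for tunnel encapsulation; 100 bytes appears to be the default allowance for the Geneve headers. A rough sanity check (variable names are illustrative):

    NIC_MTU=9000
    ENCAP_OVERHEAD=100                 # default reserve for Geneve encapsulation
    echo $((NIC_MTU - ENCAP_OVERHEAD)) # -> 8900, matching ovn0/br-int above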

I did not apply the patch from #2834

@zhangzujian
Member

In my environment, it works as expected. The node where the pod runs has a label <PROVIDER_NETWORK>.provider-network.kubernetes.io/mtu=<MTU>; what is its value in your environment?
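
One way to inspect that label (the node name is a placeholder; provnet1 matches the provider network shown below):

    kubectl get node <node-name> --show-labels | tr ',' '\n' | grep provider-network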

@patriziobassi
Author

Hi,

I applied your patch and reinstalled from scratch; now it shows:

provnet1.provider-network.kubernetes.io/interface=ens8,provnet1.provider-network.kubernetes.io/mtu=9000,provnet1.provider-network.kubernetes.io/ready=true

and the container has an MTU of 8900, which looks good!
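
A quick way to confirm from inside a pod (assuming a pod named mtu-test, as in the earlier sketch):

    kubectl exec mtu-test -- ip link show eth0 | grep mtu   # expect mtu 8900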
