TUN
Type: Virtual network interface (Layer 3)
Carries: IP packets
Purpose: Routing packets between user space and kernel space.
VPNs (like OpenVPN or WireGuard) use TUN interfaces to route IP traffic from user applications through encrypted tunnels.
A TUN device appears to the operating system as a regular network interface.
When an app writes to the TUN device, the data goes into the kernel as if it came from a real network card.
When the kernel writes to TUN, the data goes back to the user-space app.
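In user space, a TUN device is just a file descriptor obtained from /dev/net/tun. A minimal sketch in Python (constants taken from <linux/if_tun.h>; actually attaching the interface requires root on Linux):

```python
import fcntl
import struct

# Constants from <linux/if_tun.h>.
TUNSETIFF = 0x400454CA
IFF_TUN = 0x0001    # layer-3 device: reads/writes raw IP packets
IFF_TAP = 0x0002    # layer-2 device: reads/writes Ethernet frames
IFF_NO_PI = 0x1000  # omit the extra packet-information header

def open_tun(name: str = "tun0", flags: int = IFF_TUN | IFF_NO_PI):
    """Open /dev/net/tun and attach it to an interface (needs root)."""
    tun = open("/dev/net/tun", "r+b", buffering=0)
    # struct ifreq: 16-byte interface name followed by the flags field.
    ifr = struct.pack("16sH", name.encode(), flags)
    fcntl.ioctl(tun, TUNSETIFF, ifr)
    return tun

# Usage (as root): tun = open_tun(); packet = tun.read(2048)
# Each read() returns one IP packet the kernel routed to tun0;
# each write() injects a packet as if it arrived from the network.
# Passing IFF_TAP instead of IFF_TUN yields a TAP device (Ethernet frames).
```

The interface name `tun0` is illustrative; a VPN daemon like OpenVPN does essentially this, then encrypts what it reads and forwards it over its tunnel.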
TAP
Type: Virtual network interface (Layer 2)
Carries: Ethernet frames
Purpose: Virtual Ethernet bridging.
Virtual machines or containers that need full Ethernet access use TAP devices.
Feature   TUN              TAP
Layer     3 (IP)           2 (Ethernet)
Data      IP packets       Ethernet frames
Use       Routing (VPNs)   Bridging (VMs, containers)
Open vSwitch (OVS)
Virtual switch
Connects multiple virtual interfaces (like TAPs, containers, or VMs) together, allowing advanced network control (bridging, VLANs, tunneling).
Data centers, Kubernetes (as part of OVN, SDN setups), cloud environments.
Supports VXLAN, GRE, Geneve tunnels.
Has flow rules for routing/filtering traffic (like a programmable switch).
Can operate in kernel or user space for performance.
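A typical OVS setup can be sketched with a few `ovs-vsctl`/`ovs-ofctl` commands (requires OVS installed; the bridge, port names, and peer IP are illustrative):

```shell
# Create a virtual switch and plug a VM's TAP device into it.
ovs-vsctl add-br br0
ovs-vsctl add-port br0 tap0

# Add a VXLAN tunnel port toward a peer hypervisor.
ovs-vsctl add-port br0 vx0 -- set interface vx0 type=vxlan \
    options:remote_ip=192.0.2.10

# Programmable flow rules: switch IPv4 traffic normally, drop the rest.
ovs-ofctl add-flow br0 "priority=10,in_port=1,ip,actions=normal"
ovs-ofctl add-flow br0 "priority=1,in_port=1,actions=drop"
```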
CNI (Container Network Interface): a standard interface for container networking plugins, used by Kubernetes.
Defines how containers should get IPs, connect to other containers, etc.
Flannel: one of the CNI plugins, originally developed by CoreOS.
It provides an overlay network for Kubernetes pods.
It can use VXLAN or other backends (like host-gw, UDP).
Each Kubernetes node gets a subnet (like 10.244.1.0/24).
Flannel uses VXLAN tunnels to connect these subnets across nodes.
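Flannel's backend is chosen in its net-conf.json (typically shipped in the kube-flannel ConfigMap); a minimal sketch, with the pod network matching the example above:

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
```

Replacing `"vxlan"` with `"host-gw"` or `"udp"` selects the other backends mentioned above.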
So pod traffic can flow between nodes over an overlay network.

6. Zero Copy
Meaning: Avoiding unnecessary copying of data between user space and kernel space.
Goal: Improve performance by reducing CPU and memory usage.
Used in: High-performance networking, file transfers, databases.
Example: Normally, when an app sends data, it’s copied:
user buffer → kernel buffer → network card
With zero-copy techniques (e.g., sendfile()), data moves directly from disk to NIC:
disk → network card
No copying to user-space → faster and more efficient.
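The file → socket path above can be demonstrated from Python, where `socket.sendfile()` wraps the `sendfile(2)` syscall on Linux (a connected socket pair stands in for a real network connection):

```python
import os
import socket
import tempfile

# Create a file to serve.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello, zero copy" * 1000)  # 16,000 bytes
    path = f.name

# A connected socket pair stands in for a real network connection.
left, right = socket.socketpair()

with open(path, "rb") as src:
    # sendfile() asks the kernel to move bytes file -> socket directly,
    # without staging them in a user-space buffer.
    sent = left.sendfile(src)
left.shutdown(socket.SHUT_WR)

received = b""
while chunk := right.recv(65536):
    received += chunk

assert sent == len(received) == 16000
os.unlink(path)
```

With a plain `read()`/`send()` loop, every byte would cross into a user-space buffer and back; here the copy loop happens entirely inside the kernel.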
User space: where normal apps run (browsers, the Docker daemon, etc.).
Kernel space: the core of the OS (drivers, scheduler, networking stack).
Every time data moves from user → kernel or back, it takes time (context switch + copy).
Networking systems (like DPDK, eBPF, or zero-copy sockets) try to minimize these transitions to speed up communication.
Overlay network: a virtual network built on top of another physical network.
Purpose: to connect distributed systems (like containers or VMs) seamlessly across hosts.
Examples: VXLAN, GRE, Geneve.
Pods on Node A and Node B have private IPs, but they communicate using VXLAN (an overlay) over the physical network. To them, it feels like they’re on the same LAN.
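Such an overlay can be built by hand with iproute2; a sketch for one side (the VNI, device names, and addresses are illustrative):

```shell
# On Node A (physical IP 192.0.2.1): VXLAN interface over the physical NIC.
ip link add vxlan0 type vxlan id 42 dev eth0 remote 192.0.2.2 dstport 4789
ip addr add 10.244.1.1/16 dev vxlan0
ip link set vxlan0 up

# Node B mirrors this with remote 192.0.2.1 and its own 10.244.2.x address.
# Traffic between the 10.244.x.x endpoints is then encapsulated in UDP/4789
# and carried over the physical network.
```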
Concept   Role in System
TUN/TAP   Virtual interfaces that move traffic between user and kernel space.
OVS       Virtual switch connecting VMs or containers and handling advanced network control (bridging, VLANs, tunneling).