optimize docs due to frequently asked question. (#1393)
oilbeater committed Mar 23, 2022
1 parent 7bd25c6 commit a33d519
Showing 9 changed files with 44 additions and 25 deletions.
16 changes: 5 additions & 11 deletions docs/cluster-interconnection.md
@@ -4,6 +4,8 @@ From v1.4.0, two or more Kubernetes clusters can be connected with each other. P
communicate directly using Pod IP. Kube-OVN uses tunnels to encapsulate traffic between cluster gateways,
only L3 connectivity for gateway nodes is required.

> The multi-cluster networking only supports the overlay type network; multi-cluster networking for the vlan type network should be implemented by the underlay infrastructure.

## Prerequisite
* To use route auto advertise, subnet CIDRs in different clusters *MUST NOT* overlap with each other, including the ovn-default and join subnet CIDRs. Otherwise, you should disable the auto route and add routes manually.
* The Interconnection Controller *SHOULD* be deployed in a region that every cluster can access by IP.
@@ -15,7 +17,7 @@ only L3 connectivity for gateway nodes is required.
```bash
docker run --name=ovn-ic-db -d --network=host -v /etc/ovn/:/etc/ovn -v /var/run/ovn:/var/run/ovn -v /var/log/ovn:/var/log/ovn kubeovn/kube-ovn:v1.9.0 bash start-ic-db.sh
```
If `containerd` replaces `docker` then the command is as follows:

```shell
ctr -n k8s.io run -d --net-host --mount="type=bind,src=/etc/ovn/,dst=/etc/ovn,options=rbind:rw" --mount="type=bind,src=/var/run/ovn,dst=/var/run/ovn,options=rbind:rw" --mount="type=bind,src=/var/log/ovn,dst=/var/log/ovn,options=rbind:rw" docker.io/kubeovn/kube-ovn:v1.9.0 ovn-ic-db bash start-ic-db.sh
@@ -163,7 +165,7 @@ In az2
docker run --name=ovn-ic-db -d --network=host -v /etc/ovn/:/etc/ovn -v /var/run/ovn:/var/run/ovn -v /var/log/ovn:/var/log/ovn -e LOCAL_IP="LEADERIP" -e NODE_IPS="IP1,IP2,IP3" kubeovn/kube-ovn:v1.9.0 bash start-ic-db.sh
```

If `containerd` replaces `docker` then the command is as follows:

```shell
ctr -n k8s.io run -d --net-host --mount="type=bind,src=/etc/ovn/,dst=/etc/ovn,options=rbind:rw" --mount="type=bind,src=/var/run/ovn,dst=/var/run/ovn,options=rbind:rw" --mount="type=bind,src=/var/log/ovn,dst=/var/log/ovn,options=rbind:rw" --env="NODE_IPS=IP1,IP2,IP3" --env="LOCAL_IP=LEADERIP" docker.io/kubeovn/kube-ovn:v1.9.0 ovn-ic-db bash start-ic-db.sh
@@ -175,7 +177,7 @@ ctr -n k8s.io run -d --net-host --mount="type=bind,src=/etc/ovn/,dst=/etc/ovn,op
docker run --name=ovn-ic-db -d --network=host -v /etc/ovn/:/etc/ovn -v /var/run/ovn:/var/run/ovn -v /var/log/ovn:/var/log/ovn -e LOCAL_IP="LOCALIP" -e NODE_IPS="IP1,IP2,IP3" -e LEADER_IP="LEADERIP" kubeovn/kube-ovn:v1.9.0 bash start-ic-db.sh
```

If `containerd` replaces `docker` then the command is as follows:

```shell
ctr -n k8s.io run -d --net-host --mount="type=bind,src=/etc/ovn/,dst=/etc/ovn,options=rbind:rw" --mount="type=bind,src=/var/run/ovn,dst=/var/run/ovn,options=rbind:rw" --mount="type=bind,src=/var/log/ovn,dst=/var/log/ovn,options=rbind:rw" --env="NODE_IPS=IP1,IP2,IP3" --env="LOCAL_IP=LOCALIP" --env="LEADER_IP=LEADERIP" docker.io/kubeovn/kube-ovn:v1.9.0 ovn-ic-db bash start-ic-db.sh
@@ -198,11 +200,3 @@ data:
gw-nodes: "az1-gw" # The node name which acts as the interconnection gateway
auto-route: "false" # Auto announce route to all clusters. If set false, you can select announced routes later manually
```
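
For reference, a hedged sketch of what a complete interconnection ConfigMap might look like. Only `gw-nodes` and `auto-route` appear in the hunk above; the ConfigMap name, namespace, and the remaining keys and their values are assumptions for illustration, not taken from this page:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ovn-ic-config        # assumed name
  namespace: kube-system     # assumed namespace
data:
  enable-ic: "true"          # assumed key: turn the interconnection on
  az-name: "az1"             # assumed key: availability-zone name of this cluster
  ic-db-host: "192.168.65.3" # assumed key: address of the interconnection controller
  ic-nb-port: "6645"         # assumed northbound DB port
  ic-sb-port: "6646"         # assumed southbound DB port
  gw-nodes: "az1-gw"         # The node name which acts as the interconnection gateway
  auto-route: "false"        # Auto announce route to all clusters
```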



## Gateway High Available

Kube-OVN now supports Active-Backup mode gateway HA. You can add more nodes name in the configmap separated by commas.

Active-Active mode gateway HA is under development.
2 changes: 2 additions & 0 deletions docs/dpdk-hybrid.md
@@ -2,6 +2,8 @@

This document describes how to run Kube-OVN on nodes that run ovs-dpdk or ovs-kernel.

> Upstream KubeVirt does not officially support OVS-DPDK yet; to use this function, try the [downstream patch with OVS-DPDK support](https://github.com/kubevirt/kubevirt/pull/3208) or the [KVM device plugin](https://github.com/kubevirt/kubernetes-device-plugins/blob/master/docs/README.kvm.md).

## Prerequisite
* Nodes that run ovs-dpdk must have a NIC bound to the DPDK driver (see the sketch after this list).
* Hugepages on the host.
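
A minimal sketch of preparing a node for the two items above — the hugepage count, the PCI address, and the choice of `vfio-pci` are all assumptions, not values from this document:

```bash
# Example only: reserve 2MB hugepages and bind a NIC (placeholder PCI address) to a DPDK driver
echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
modprobe vfio-pci
dpdk-devbind.py --status                      # list NICs and their current drivers
dpdk-devbind.py --bind=vfio-pci 0000:03:00.0  # bind the chosen NIC to vfio-pci
```
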
22 changes: 18 additions & 4 deletions docs/install.md
@@ -1,7 +1,7 @@
# Installation

Kube-OVN includes two parts:
- Native OVS and OVN components
- OVS and OVN components
- Controller and CNI plugins that integrate OVN with Kubernetes

## Prerequisite
@@ -10,13 +10,23 @@ Kube-OVN includes two parts:
- OS: CentOS 7/8, Ubuntu 16.04/18.04
- Other Linux distributions with geneve, openvswitch and ip_tables module installed. You can use commands `modinfo geneve`, `modinfo openvswitch` and `modinfo ip_tables` to verify
- Kernel boot with `ipv6.disable=0`
- Kube-proxy *MUST* be ready so that Kube-OVN can connect to apiserver
- Kube-proxy *MUST* be ready so that Kube-OVN can connect to apiserver by service address
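
A quick way to check the module and kernel prerequisites above on a node (a sketch; adapt to your distribution):

```bash
# Verify the required kernel modules are available
modinfo geneve openvswitch ip_tables
# Verify the kernel was not booted with ipv6.disable=1
grep -o 'ipv6.disable=[0-9]' /proc/cmdline || echo "ipv6.disable not set on the kernel command line"
```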

*NOTE*
1. Users using Ubuntu 16.04 should build the OVS kernel module and replace the built-in one to avoid kernel NAT issues.
2. CentOS users should make sure kernel version is greater than 3.10.0-898 to avoid a kernel conntrack bug, see [here](https://bugs.launchpad.net/neutron/+bug/1776778).
3. Kernel must boot with IPv6 enabled, otherwise geneve tunnel will not be established due to a kernel bug, see [here](https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1794232).

*Ports that Kube-OVN uses:*

| Component | Port | Usage |
|---------------------|-----------------------------------------------|------------------------|
| ovn-central | 6641/tcp, 6642/tcp, 6643/tcp, 6644/tcp | ovn-db and raft server |
| ovs-ovn | Geneve 6081/udp, STT 7471/tcp, Vxlan 4789/udp | Tunnel port |
| kube-ovn-controller | 10660/tcp | Metrics |
| kube-ovn-daemon | 10665/tcp | Metrics |
| kube-ovn-monitor | 10661/tcp | Metrics |
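
If nodes sit behind a host firewall, the ports in the table need to be reachable between nodes. A hedged example with firewalld — assuming firewalld is the tool in use; adjust for iptables or cloud security groups:

```bash
firewall-cmd --permanent --add-port=6641-6644/tcp   # ovn-central db and raft
firewall-cmd --permanent --add-port=6081/udp        # Geneve tunnel
firewall-cmd --permanent --add-port=7471/tcp        # STT tunnel
firewall-cmd --permanent --add-port=4789/udp        # Vxlan tunnel
firewall-cmd --permanent --add-port=10660/tcp --add-port=10661/tcp --add-port=10665/tcp  # metrics
firewall-cmd --reload
```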

## To Install

### One Script Installer
@@ -34,14 +44,18 @@ If you want to try the latest developing Kube-OVN, try the script below:
2. Use vim to edit the script variables to meet your requirement:
```bash
REGISTRY="kubeovn"
POD_CIDR="10.16.0.0/16" # Default subnet CIDR, Do NOT overlap with NODE/SVC/JOIN CIDR
POD_CIDR="10.16.0.0/16" # Pod default subnet CIDR, Do NOT overlap with NODE/SVC/JOIN CIDR
SVC_CIDR="10.96.0.0/12" # Should be equal with service-cluster-ip-range CIDR range which is configured for the API server
JOIN_CIDR="100.64.0.0/16" # Subnet CIDR used for connectivity between nodes and Pods, Do NOT overlap with NODE/POD/SVC CIDR
LABEL="node-role.kubernetes.io/master" # The node label to deploy OVN DB
IFACE="" # The nic to support container network can be a nic name or a group of regex separated by comma e.g. `IFACE=enp6s0f0,eth.*`, if empty will use the nic that the default route use
VERSION="v1.9.0"
VERSION="v1.9.1"
```

> Note:
> 1. `SVC_CIDR` here is just to tell Kube-OVN the Service CIDR in this cluster to configure related rules, Kube-OVN will *NOT* set the cluster Service CIDR
> 2. If the desired nic names differ across nodes and can not easily be expressed by regex, you can add the node annotation `ovn.kubernetes.io/tunnel_interface=xxx` to exactly match the interface name, as shown below
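
For example, a sketch of note 2 — the node name `node1` and NIC name `eth1` are placeholders:

```bash
kubectl annotate node node1 ovn.kubernetes.io/tunnel_interface=eth1
```
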
This basic setup works for the default overlay network. If you are using the default underlay/vlan network, please refer to [Vlan/Underlay Support](vlan.md).

3. Init kubeadm and Run the script
10 changes: 6 additions & 4 deletions docs/qos.md
@@ -4,15 +4,17 @@ Kube-OVN supports dynamic configuration of Ingress and Egress traffic rate

Before v1.9.0, Kube-OVN can use the annotations `ovn.kubernetes.io/ingress_rate` and `ovn.kubernetes.io/egress_rate` to specify the bandwidth of a pod. The unit is 'Mbit/s'. We can set QoS when creating a pod or dynamically set QoS by changing the annotations of a pod.

Since v1.9.0, Kube-OVN starts to support linux-htb and linux-netem QoS settings. The detailed description for the Qos can be found at [Qos](https://man7.org/linux/man-pages/man5/ovs-vswitchd.conf.db.5.html#QoS_TABLE)
Since v1.9.0, Kube-OVN starts to support linux-htb and linux-netem QoS settings. The detailed description for the QoS can be found at [QoS](https://man7.org/linux/man-pages/man5/ovs-vswitchd.conf.db.5.html#QoS_TABLE)

> The QoS function supports both overlay and vlan mode networks.

## Previous Pod QoS Setting
Use the following annotations to specify QoS:
- `ovn.kubernetes.io/ingress_rate`: Rate limit for Ingress traffic, unit: Mbit/s
- `ovn.kubernetes.io/egress_rate`: Rate limit for Egress traffic, unit: Mbit/s
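
For instance, a hedged sketch of applying these to a running pod — the pod name and rates are placeholders; the same keys can also be set in the Pod template at creation time:

```bash
kubectl annotate --overwrite pod perf-pod \
  ovn.kubernetes.io/ingress_rate=3 \
  ovn.kubernetes.io/egress_rate=1
```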

## linux-htb Qos
A CRD resource is added to set QoS priority for linux-htb qos.
## linux-htb QoS
A CRD resource is added to set QoS priority for linux-htb QoS.
CRD is defined as follows:

```
@@ -116,7 +118,7 @@ You can also use this annotation to control the traffic from each node to external
through these annotations.

# Test
## Qos Priority Case
## QoS Priority Case
When the parameter `subnet.Spec.HtbQos` is specified for subnet, such as `htbqos: htbqos-high`, and the annotation `ovn.kubernetes.io/priority` is specified for pod, such as `ovn.kubernetes.io/priority: "50"`, the actual priority settings are as follows

```
2 changes: 1 addition & 1 deletion docs/snat-and-eip.md
@@ -6,7 +6,7 @@ By using eip, external services can visit a pod with a stable ip and pod will vi

## Prerequisite
* To make use of the OVN L3 Gateway, a dedicated nic *MUST* be bridged into ovs to act as the gateway between overlay and underlay; ops should use other nics to manage the host server.
* As the nic will emit packets with nat ip directly into underlay network, administrators *MUST* make sure that theses packets will not be denied by security rules.
* As the nic will emit packets with nat ip directly into underlay network, administrators *MUST* make sure that these packets will not be denied by security rules.
* SNAT and EIP functions *CANNOT* work together with Cluster interconnection network

## Steps
3 changes: 3 additions & 0 deletions docs/static-ip.md
@@ -2,6 +2,9 @@

Kube-OVN supports allocating a static IP address for a single Pod, or a static IP pool for a Workload with multiple Pods (Deployment/DaemonSet/StatefulSet). To enable this feature, add the following annotations to the Pod spec template.

> This doc mainly focuses on Kube-OVN as a standalone CNI plugin to configure static IP.
> If you work with multus-cni to configure static IPs, please refer to [IPAM for Multi Network Interface](./multi-nic.md)

## For a single Pod

Use the following annotations to specify the address
2 changes: 2 additions & 0 deletions docs/subnet.md
@@ -74,6 +74,8 @@ Since kube-ovn v1.8.0, kube-ovn supports using a designated egress ip on the node, the

## DHCP Options

> This function mainly targets KubeVirt SR-IOV or OVS-DPDK type networks, where the DHCP embedded in KubeVirt can not work.

OVN implements native DHCPv4 and DHCPv6 support which provides stateless replies to DHCPv4 and DHCPv6 requests.

Now kube-ovn supports the [DHCP feature](https://github.com/kubeovn/kube-ovn/pull/1320) too; you can enable it in the spec of the subnet. It will create DHCPv4 or DHCPv6 options and patch the UUIDs into the status of the subnet.
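
A hedged sketch of what enabling DHCP on a subnet might look like — the field names (`enableDHCP`) and values are assumptions based on the general shape of the feature, not taken from this page; check the linked PR for the authoritative spec:

```yaml
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: dhcp-demo              # placeholder name
spec:
  cidrBlock: 10.17.0.0/16      # placeholder CIDR
  enableDHCP: true             # assumed field: ask Kube-OVN to create OVN DHCP options
```
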
2 changes: 1 addition & 1 deletion docs/vlan.md
@@ -28,7 +28,7 @@ From v1.7.1 on, Kube-OVN supports dynamic underlay/VLAN networking management.
In the Vlan/Underlay mode, OVS sends original Pod packets directly to the physical network and uses the physical switch/router to transmit the traffic, so it relies on the capabilities of the network infrastructure.

1. For K8s running on VMs provided by OpenStack, `PortSecurity` of the network ports MUST be `disabled`;
2. For K8s running on VMs provided by VMware, the switch security options `MAC Address Changes`, `Forged Transmits` and `Promiscuous Mode Operation` MUST be `allowed`;
2. For K8s running on VMs provided by VMware, the switch security options `MAC Address Changes` and `Forged Transmits` MUST be `allowed`;
3. The Vlan/Underlay mode can not run on public IaaS providers like AWS/GCE/Alibaba Cloud as their networks can not provide the capability to transmit this type of packet;
4. In versions prior to v1.9.0, Kube-OVN checks the connectivity to the subnet gateway through ICMP, so the gateway MUST respond to the ICMP messages if you are using those versions, or you can turn off the check by setting `disableGatewayCheck` to `true`, which was introduced in v1.8.0 (see the sketch after this list);
5. For in-cluster service traffic, Pods set the dst mac to gateway mac and then Kube-OVN applies DNAT to transfer the dst ip, the packets will first be sent to the gateway, so the gateway MUST be capable of transmitting the packets back to the subnet.
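
For item 4, a minimal sketch of turning the gateway check off on a subnet — only `disableGatewayCheck` comes from the text above; the resource name and CIDR are placeholders:

```yaml
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: vlan-subnet             # placeholder name
spec:
  cidrBlock: 172.17.0.0/16      # placeholder CIDR
  disableGatewayCheck: true     # skip the ICMP gateway connectivity check (v1.8.0+)
```
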
10 changes: 6 additions & 4 deletions docs/vpc.md
@@ -145,7 +145,7 @@ metadata:
spec:
vpc: test-vpc-1 # Specifies which VPC the gateway belongs to
subnet: sn # Subnet in VPC
lanIp: 10.0.1.254 # Internal IP for nat gateway pod, IP should be within the range of the subnet
lanIp: 10.0.1.254 # IP should be within the range of the subnet sn, this is the internal IP for nat gateway pod
eips: # Underlay IPs assigned to the gateway
- eipCIDR: 192.168.0.111/24
gateway: 192.168.0.254
@@ -246,7 +246,9 @@ ip route add <SVC_IP> via <VPC_LB_IP>

Replace `<VPC_LB_IP>` with the VPC LB Pod's IP address in subnet `ovn-vpc-lb`.
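
As a concrete, hedged example — if the Service IP is `10.96.0.10` and the VPC LB Pod got `172.31.0.2`, both placeholders, the route from the step above becomes:

```bash
ip route add 10.96.0.10 via 172.31.0.2
```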

## Custom VPC limitation

## Custom VPC limitation and FAQ
- Custom VPC can not access host network
- Not support DNS/Service/Loadbalancer
- TCP/HTTP probes cannot work, as the host can not access Pods in custom VPCs
- DNS is not supported
- The vpc-nat-gateway uses macvlan to implement the external network; due to macvlan limitations, the host can not access the vpc-nat-gateway address and can not use tcpdump to capture its traffic
- The routes in the vpc-nat-gateway can not be shown by `ip route`; use `ip rule` and `ip route show table 100` to see the detailed routes, as in the sketch below
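
For the last point, a hedged sketch of inspecting the gateway from inside its pod — the pod name and namespace are assumptions; replace them with the actual vpc-nat-gateway pod in your cluster:

```bash
# Example only: pod name and namespace are placeholders
kubectl -n kube-system exec vpc-nat-gw-test-vpc-1-0 -- ip rule
kubectl -n kube-system exec vpc-nat-gw-test-vpc-1-0 -- ip route show table 100
```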
