docs: optimize description
oilbeater committed Aug 17, 2021
1 parent 8a40528 commit f03d435
Showing 14 changed files with 232 additions and 252 deletions.
23 changes: 11 additions & 12 deletions README.md
@@ -13,23 +13,26 @@
Kube-OVN, a [CNCF Sandbox Level Project](https://www.cncf.io/sandbox-projects/), integrates OVN-based network virtualization with Kubernetes. It offers an advanced container network fabric for enterprises with a rich feature set and easy operation.

## Community
The Kube-OVN community is waiting for you participation!
The Kube-OVN community is waiting for your participation!
- Follow us on [Twitter](https://twitter.com/KubeOvn)
- Chat with us on [Slack](https://kube-ovn-slackin.herokuapp.com/)
- For other issues, please send email to [mengxin@alauda.io](mailto:mengxin@alauda.io)
- WeChat users: scan the QR code below to add the community assistant and join the discussion group; please mention "Kube-OVN" in your request


![Image of wechat](./docs/wechat.png)

## Features
- **Namespaced Subnets**: Each Namespace can have a unique Subnet (backed by a Logical Switch). Pods within the Namespace will have IP addresses allocated from the Subnet. It's also possible for multiple Namespaces to share a Subnet.
- **Vlan/Underlay Support**: In addition to the overlay network, Kube-OVN also supports underlay and VLAN mode networking for better performance and direct connectivity with the physical network.
- **VPC Support**: Multi-tenant network with independent address spaces, where each tenant has its own network infrastructure such as EIPs, NAT gateways, security groups and load balancers.
- **Static IP Addresses for Workloads**: Allocate random or static IP addresses to workloads.
- **Multi-Cluster Network**: Connect different Kubernetes/OpenStack clusters into one L3 network.
- **TroubleShooting Tools**: Handy tools to diagnose, trace, monitor and dump container network traffic to help troubleshoot complicated network issues.
- **Prometheus & Grafana Integration**: Exposing network quality metrics like pod/node/service/dns connectivity/latency in Prometheus format.
- **ARM Support**: Kube-OVN can run on x86_64 and arm64 platforms.
- **Subnet Isolation**: Can configure a Subnet to deny any traffic from source IP addresses not within the same Subnet. Can whitelist specific IP addresses and IP ranges.
- **Network Policy**: Implements the networking.k8s.io/NetworkPolicy API with high-performance OVN ACLs.
- **Static IP Addresses for Workloads**: Allocate random or static IP addresses to workloads.
- **DualStack IP Support**: Pods can run in IPv4-only, IPv6-only or dual-stack mode.
- **Pod NAT and EIP**: Manage pod external traffic and external IPs like a traditional VM.
- **Multi-Cluster Network**: Connect different clusters into one L3 network.
- **IPAM for Multi NIC**: A cluster-wide IPAM for CNI plugins other than Kube-OVN, such as macvlan/vlan/host-device, so they can take advantage of the subnet and static IP allocation functions in Kube-OVN.
- **Dynamic QoS**: Configure Pod/Gateway Ingress/Egress traffic rate limits on the fly.
- **Embedded Load Balancers**: Replace kube-proxy with the OVN embedded high performance distributed L2 Load Balancer.
@@ -39,17 +42,13 @@ The Kube-OVN community is waiting for you participation!
- **BGP Support**: Pod/Subnet IPs can be exposed externally via the BGP routing protocol.
- **Traffic Mirror**: Duplicate container network traffic for monitoring, diagnosing and replay.
- **Hardware Offload**: Boost network performance and save CPU resources by offloading the OVS flow table to hardware.
- **Vlan/Underlay Support**: Kube-OVN also support underlay and Vlan mode network for better performance and direct connectivity with physic network.
- **DPDK Support**: DPDK applications can now run in Pods with OVS-DPDK.
- **ARM Support**: Kube-OVN can run on x86_64 and arm64 platforms.
- **VPC Support**: Multi-tenant network with overlapped address spaces.
- **TroubleShooting Tools**: Handy tools to diagnose, trace, monitor and dump container network traffic to help troubleshooting complicate network issues.
- **Prometheus & Grafana Integration**: Exposing network quality metrics like pod/node/service/dns connectivity/latency in Prometheus format.

## Planned Future Work
- Policy-based QoS
- More Metrics and Traffic Graph
- More Diagnosis and Tracing Tools
- High performance kernel datapath
- Namespaced VPC
- Kubevirt/Kata optimization

## Network Topology

8 changes: 4 additions & 4 deletions dist/images/install-pre-1.16.sh
@@ -2267,10 +2267,10 @@ showHelp(){
echo " [nb|sb] [status|kick|backup] ovn-db operations show cluster status, kick stale server or backup database"
echo " nbctl [ovn-nbctl options ...] invoke ovn-nbctl"
echo " sbctl [ovn-sbctl options ...] invoke ovn-sbctl"
echo " vsctl {nodeName} [ovs-vsctl options ...] invoke ovs-vsctl on selected node"
echo " ofctl {nodeName} [ovs-ofctl options ...] invoke ovs-ofctl on selected node"
echo " dpctl {nodeName} [ovs-dpctl options ...] invoke ovs-dpctl on selected node"
echo " appctl {nodeName} [ovs-appctl options ...] invoke ovs-appctl on selected node"
echo " vsctl {nodeName} [ovs-vsctl options ...] invoke ovs-vsctl on the specified node"
echo " ofctl {nodeName} [ovs-ofctl options ...] invoke ovs-ofctl on the specified node"
echo " dpctl {nodeName} [ovs-dpctl options ...] invoke ovs-dpctl on the specified node"
echo " appctl {nodeName} [ovs-appctl options ...] invoke ovs-appctl on the specified node"
echo " tcpdump {namespace/podname} [tcpdump options ...] capture pod traffic"
echo " trace {namespace/podname} {target ip address} {icmp|tcp|udp} [target tcp or udp port] trace ovn microflow of specific packet"
echo " diagnose {all|node} [nodename] diagnose connectivity of all nodes or a specific node"
8 changes: 4 additions & 4 deletions dist/images/install.sh
@@ -2311,10 +2311,10 @@ showHelp(){
echo " [nb|sb] [status|kick|backup] ovn-db operations show cluster status, kick stale server or backup database"
echo " nbctl [ovn-nbctl options ...] invoke ovn-nbctl"
echo " sbctl [ovn-sbctl options ...] invoke ovn-sbctl"
echo " vsctl {nodeName} [ovs-vsctl options ...] invoke ovs-vsctl on selected node"
echo " ofctl {nodeName} [ovs-ofctl options ...] invoke ovs-ofctl on selected node"
echo " dpctl {nodeName} [ovs-dpctl options ...] invoke ovs-dpctl on selected node"
echo " appctl {nodeName} [ovs-appctl options ...] invoke ovs-appctl on selected node"
echo " vsctl {nodeName} [ovs-vsctl options ...] invoke ovs-vsctl on the specified node"
echo " ofctl {nodeName} [ovs-ofctl options ...] invoke ovs-ofctl on the specified node"
echo " dpctl {nodeName} [ovs-dpctl options ...] invoke ovs-dpctl on the specified node"
echo " appctl {nodeName} [ovs-appctl options ...] invoke ovs-appctl on the specified node"
echo " tcpdump {namespace/podname} [tcpdump options ...] capture pod traffic"
echo " trace {namespace/podname} {target ip address} {icmp|tcp|udp} [target tcp or udp port] trace ovn microflow of specific packet"
echo " diagnose {all|node} [nodename] diagnose connectivity of all nodes or a specific node"
8 changes: 4 additions & 4 deletions dist/images/kubectl-ko
@@ -11,10 +11,10 @@ showHelp(){
echo " [nb|sb] [status|kick|backup] ovn-db operations show cluster status, kick stale server or backup database"
echo " nbctl [ovn-nbctl options ...] invoke ovn-nbctl"
echo " sbctl [ovn-sbctl options ...] invoke ovn-sbctl"
echo " vsctl {nodeName} [ovs-vsctl options ...] invoke ovs-vsctl on selected node"
echo " ofctl {nodeName} [ovs-ofctl options ...] invoke ovs-ofctl on selected node"
echo " dpctl {nodeName} [ovs-dpctl options ...] invoke ovs-dpctl on selected node"
echo " appctl {nodeName} [ovs-appctl options ...] invoke ovs-appctl on selected node"
echo " vsctl {nodeName} [ovs-vsctl options ...] invoke ovs-vsctl on the specified node"
echo " ofctl {nodeName} [ovs-ofctl options ...] invoke ovs-ofctl on the specified node"
echo " dpctl {nodeName} [ovs-dpctl options ...] invoke ovs-dpctl on the specified node"
echo " appctl {nodeName} [ovs-appctl options ...] invoke ovs-appctl on the specified node"
echo " tcpdump {namespace/podname} [tcpdump options ...] capture pod traffic"
echo " trace {namespace/podname} {target ip address} {icmp|tcp|udp} [target tcp or udp port] trace ovn microflow of specific packet"
echo " diagnose {all|node} [nodename] diagnose connectivity of all nodes or a specific node"
2 changes: 1 addition & 1 deletion docs/development.md
@@ -4,7 +4,7 @@

##### Prerequisites:

1. Kube-OVN is developed by [Go](https://golang.org/) 1.15 and uses [Go Modules](https://github.com/golang/go/wiki/Modules) to manage dependency. Make sure `GO111MODULE="on"`.
1. Kube-OVN is developed with [Go](https://golang.org/) 1.16 and uses [Go Modules](https://github.com/golang/go/wiki/Modules) to manage dependencies. Make sure `GO111MODULE="on"`.

2. We also use [gosec](https://github.com/securego/gosec) to inspect source code for security problems.
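   If gosec is not already installed, it can typically be fetched with the Go toolchain (a sketch; the module path follows the upstream gosec README):

   ```bash
   # install the gosec binary into $GOBIN / $GOPATH/bin
   go install github.com/securego/gosec/v2/cmd/gosec@latest
   # run it against the whole repository
   gosec ./...
   ```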

29 changes: 15 additions & 14 deletions docs/install.md
@@ -5,14 +5,15 @@ Kube-OVN includes two parts:
- Controller and CNI plugins that integrate OVN with Kubernetes

## Prerequisites
- Kubernetes >= 1.11
- Kubernetes >= 1.11; version 1.16 or later is recommended
- Docker >= 1.12.6
- OS: CentOS 7.5/7.6/7.7, Ubuntu 16.04/18.04
- OS: CentOS 7/8, Ubuntu 16.04/18.04
- Other Linux distributions with the geneve and openvswitch kernel modules installed. You can use `modinfo geneve` and `modinfo openvswitch` to verify
- Kernel booted with `ipv6.disable=0`
- Kube-proxy to provide service discovery for kube-ovn to connect to apiserver
- Kube-proxy *MUST* be ready so that Kube-OVN can connect to the apiserver

*NOTE*
1. Ubuntu 16.04 users should build the related ovs-2.11.1 kernel module to replace the kernel built-in module
1. Users using Ubuntu 16.04 should build the OVS kernel module and replace the built-in one to avoid kernel NAT issues.
2. CentOS users should make sure the kernel version is greater than 3.10.0-898 to avoid a kernel conntrack bug, see [here](https://bugs.launchpad.net/neutron/+bug/1776778)
3. The kernel must boot with ipv6 enabled, otherwise the geneve tunnel will not be established due to a kernel bug, see [here](https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1794232)
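A quick way to check these prerequisites on a node (a sketch; adjust for your distribution):

```bash
# verify the required kernel modules are available
modinfo geneve
modinfo openvswitch
# check the kernel version (CentOS: should be newer than 3.10.0-898)
uname -r
# verify IPv6 was not disabled on the kernel command line
grep -o 'ipv6.disable=[01]' /proc/cmdline || echo "ipv6.disable not set, IPv6 enabled"
```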

@@ -36,23 +37,25 @@ If you want to try the latest developing Kube-OVN, try the script below
2. Use vim to edit the script variables to meet your requirements
```bash
REGISTRY="kubeovn"
POD_CIDR="10.16.0.0/16" # Do NOT overlap with NODE/SVC/JOIN CIDR
SVC_CIDR="10.96.0.0/12" # Do NOT overlap with NODE/POD/JOIN CIDR
JOIN_CIDR="100.64.0.0/16" # Do NOT overlap with NODE/POD/SVC CIDR
POD_CIDR="10.16.0.0/16" # Default subnet CIDR, Do NOT overlap with NODE/SVC/JOIN CIDR
SVC_CIDR="10.96.0.0/12" # Should match the service-cluster-ip-range CIDR configured for the API server
JOIN_CIDR="100.64.0.0/16" # Subnet CIDR used for connectivity between nodes and Pods, Do NOT overlap with NODE/POD/SVC CIDR
LABEL="node-role.kubernetes.io/master" # The node label to deploy OVN DB
IFACE="" # The nic to support container network can be a nic name or a group of regex separated by comma, if empty will use the nic that the default route use
VERSION="v1.7.0"
IFACE="" # The NIC for the container network; can be a NIC name or a comma-separated list of regexes, e.g. `IFACE=enp6s0f0,eth.*`; if empty, the NIC used by the default route is chosen
VERSION="v1.7.1"
```

After v1.6.0 `IFACE` support regex, e.g. `IFACE=enp6s0f0,eth.*`
This basic setup works for the default overlay network. If you want to use an underlay/vlan network as the default network, please refer to [Vlan/Underlay Support](vlan.md)

3. Execute the script
3. Run the script

`bash install.sh`

That's all! You can now create some pods and test connectivity.
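As a quick sanity check (a sketch; component and namespace names assume the default installation into `kube-system`):

```bash
# all Kube-OVN components should reach the Running state
kubectl -n kube-system get pods -o wide | grep -E 'kube-ovn|ovn-central|ovs-ovn'
# create a test workload and confirm it gets an IP from the default subnet
kubectl create deployment test-nginx --image=nginx:alpine
kubectl get pods -l app=test-nginx -o wide
```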

### Step by Step Install

If you want to know the detail steps to install Kube-OVN, please follow the steps.
The one-script installer is recommended. If you want to change the default options, follow the steps below.

For Kubernetes versions before 1.17, please use the following command to add the node label

@@ -76,8 +79,6 @@

`kubectl apply -f https://raw.githubusercontent.com/alauda/kube-ovn/release-1.7/yamls/kube-ovn.yaml`

That's all! You can now create some pods and test connectivity.

For a highly available OVN DB, see [high available](high-available.md)

If you want to enable IPv6 on the default subnet and node subnet, please apply https://raw.githubusercontent.com/alauda/kube-ovn/release-1.7/yamls/kube-ovn-ipv6.yaml in Step 3.
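That is, in Step 3 apply the IPv6 manifest instead of the default `kube-ovn.yaml`:

```bash
kubectl apply -f https://raw.githubusercontent.com/alauda/kube-ovn/release-1.7/yamls/kube-ovn-ipv6.yaml
```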
5 changes: 2 additions & 3 deletions docs/internal-port.md
@@ -35,6 +35,5 @@ metadata:
```

## Some limitations
The internal port name should be unique on a host and kubelet always check the `eth0` interface in the Pod.

To bypass this issue, Kube-OVN creates a dummy type device in Pod netns with the same ip address of internal port and set the eth0 down. It works well for most scenarios, however if applications rely on network interface name, it will bring confusions.
- The internal port name must be unique on a host and kubelet always checks the `eth0` interface in the Pod. To bypass this issue, Kube-OVN creates a dummy interface named eth0 in the Pod's netns, assigns the same IP address(es), and sets it down. It works well for most scenarios; however, if applications rely on the network interface name, this can cause confusion.
- After OVS restarts, internal ports will be detached from the pod. Pods on the same node with internal-port interfaces should be recreated manually.
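To find candidates for recreation after an OVS restart, you can list the Pods scheduled on the affected node (a sketch; `node1` is a placeholder node name, and you still need to filter for Pods that use internal-port interfaces):

```bash
# list all pods running on the affected node
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=node1
```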
51 changes: 47 additions & 4 deletions docs/kubectl-plugin.md
@@ -38,11 +38,15 @@ The following compatible plugins are available:
```text
kubectl ko {subcommand} [option...]
Available Subcommands:
[nb|sb] [status|kick|backup] ovn-db operations show cluster status, kick stale server or backup database
nbctl [ovn-nbctl options ...] invoke ovn-nbctl
sbctl [ovn-sbctl options ...] invoke ovn-sbctl
vsctl {nodeName} [ovs-vsctl options ...] invoke ovs-vsctl on selected node
tcpdump {namespace/podname} [tcpdump options ...] capture pod traffic
trace {namespace/podname} {target ip address} {icmp|tcp|udp} [target tcp or udp port]
vsctl {nodeName} [ovs-vsctl options ...] invoke ovs-vsctl on the specified node
ofctl {nodeName} [ovs-ofctl options ...] invoke ovs-ofctl on the specified node
dpctl {nodeName} [ovs-dpctl options ...] invoke ovs-dpctl on the specified node
appctl {nodeName} [ovs-appctl options ...] invoke ovs-appctl on the specified node
tcpdump {namespace/podname} [tcpdump options ...] capture pod traffic
trace {namespace/podname} {target ip address} {icmp|tcp|udp} [target tcp or udp port] trace ovn microflow of specific packet
diagnose {all|node} [nodename] diagnose connectivity of all nodes or a specific node
```
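For example, to get a quick view of the OVN logical topology and the northbound DB cluster state (assuming the plugin script is installed in your `PATH` as `kubectl-ko`):

```bash
# list logical switches, routers and ports known to the OVN northbound DB
kubectl ko nbctl show
# show the OVN northbound cluster status
kubectl ko nb status
```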

@@ -96,7 +100,7 @@ listening on d7176fe7b4e0_h, link-type EN10MB (Ethernet), capture size 262144 by
06:52:38.619973 IP 10.16.0.4 > 100.64.0.3: ICMP echo reply, id 2, seq 3, length 64
```

3. Show ovn flow from a pod to a destination
3. Show ovn logical flow from a pod to a destination

```shell
[root@node2 ~]# kubectl ko trace default/ds1-l6n7p 8.8.8.8 icmp
@@ -179,3 +183,42 @@ I1008 07:05:06.415354 21692 ping.go:119] start to check dns connectivity
I1008 07:05:06.420595 21692 ping.go:129] resolve dns kubernetes.default.svc.cluster.local to [10.96.0.1] in 5.21ms
### finish diagnose node node3
```

5. Show OVN NB/SB cluster status
```shell
[root@node2 ~]# kubectl ko nb status
b9be
Name: OVN_Northbound
Cluster ID: 033e (033e333f-5031-465f-93af-7e1b2e3a82a0)
Server ID: b9be (b9be2f8e-3e4e-4374-93b7-297baf5724e7)
Address: tcp:[192.168.16.44]:6643
Status: cluster member
Role: leader
Term: 51
Leader: self
Vote: self

Last Election started 16222094 ms ago, reason: timeout
Last Election won: 16222094 ms ago
Election timer: 5000
Log: [50539, 50562]
Entries not yet committed: 0
Entries not yet applied: 0
Connections:
Disconnections: 0
Servers:
b9be (b9be at tcp:[192.168.16.44]:6643) (self) next_index=50539 match_index=50561
```

6. Back up the OVN NB/SB database
```shell
[root@node2 ~]# kubectl ko nb backup
backup ovn-nb db to /root/ovnnb_db.081616201629102000.backup
[root@node2 ~]# ls -l | grep ovnnb
-rw-r--r-- 1 root root 31875 Aug 16 16:20 ovnnb_db.081616201629102000.backup
```

7. Remove stale NB/SB cluster member
```shell
[root@node2 ~]# kubectl ko nb kick aedds
```
23 changes: 21 additions & 2 deletions docs/mirror.md
@@ -4,7 +4,7 @@ Kube-OVN support traffic mirroring that duplicates pod nic send/receive network

![alt text](mirror.png "kube-ovn mirror")

## Enable Traffic Mirror
## Enable Global Level Traffic Mirror

Traffic mirroring is disabled by default; you should add the following arg to cni-server when installing Kube-OVN to enable it.
- `--enable-mirror=true`: enable traffic mirror
@@ -14,4 +14,23 @@ Then you can use tcpdump or other tools to diagnose traffic from interface mirro

```bash
tcpdump -ni mirror0
```
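On an existing cluster, one way to turn this on is to add the flag to the CNI server's container args (a sketch; the `kube-ovn-cni` DaemonSet name and `kube-system` namespace are assumptions based on the default install):

```bash
# edit the CNI server DaemonSet and add the mirror flag to its args, e.g.:
#   args:
#     - --enable-mirror=true
kubectl -n kube-system edit ds kube-ovn-cni
```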

## Enable Pod Level Traffic Mirror
*Supported from v1.8.0*

In Global Mirror mode, all Pod traffic will be mirrored. If you only want to mirror traffic for specific Pods, you should disable the global mirror option and use the Pod-level annotation below to mirror specific traffic.

```yaml
apiVersion: v1
kind: Pod
metadata:
name: mirror-pod
namespace: ls1
annotations:
ovn.kubernetes.io/mirror: "true"
spec:
containers:
- name: mirror-pod
image: nginx:alpine
```
12 changes: 6 additions & 6 deletions docs/multi-nic.md
@@ -1,12 +1,12 @@
# IPAM for Multi Network Interface

From version 1.1, the IPAM part of Kube-OVN can provides subnet and static ip allocation functions to other CNI plugins, such as macvlan/vlan/host-device.
From v1.1, the IPAM part of Kube-OVN can provide subnet and static ip allocation functions to other CNI plugins, such as macvlan/vlan/host-device.

## How it works

By using [Intel Multus CNI](https://github.com/intel/multus-cni), we can attach multiple network interfaces to a Kubernetes Pod.
However, we still need a cluster-wide IPAM utility to manage IP addresses across multiple networks and to better manage other CNI plugins.
In Kube-OVN we already has CRDs like Subnet and IP and functions for advanced IPAM like ip reservation, random allocation, static allocation and so on.
In Kube-OVN, we already have CRDs like Subnet and IP and functions for advanced IPAM like ip reservation, random allocation, static allocation and so on.
We extend the Subnet CRD to network providers other than ovn, so other CNI plugins can make use of all the IPAM functions that already exist in Kube-OVN.

### Work Flow
Expand Down Expand Up @@ -57,11 +57,11 @@ spec:

`server_socket`: the socket file that the Kube-OVN plugin communicates through. The default location is `/run/openvswitch/kube-ovn-daemon.sock`

`provider`: The `<name>.<namespace>` of this NetworkAttachmentDefinition, Kube-OVN plugin will later use it to find the related subnet.
`provider`: The `<name>.<namespace>` of this NetworkAttachmentDefinition, Kube-OVN will later use it to find the related subnet.

### Create a Kube-OVN subnet

Create a Kube-OVN Subnet, set the desired cidr, exclude ips and the `provider` should be the related NetworkAttachmentDefinition.
Create a Kube-OVN Subnet with the desired cidr and exclude ips; the `provider` should be the related NetworkAttachmentDefinition `<name>.<namespace>`.

```yaml
apiVersion: kubeovn.io/v1
@@ -151,9 +151,9 @@ spec:
image: nginx:alpine
```

# Multi kube-ovn network Interface
# Multi kube-ovn Network Interfaces

Full support for multi kube-ovn networks is more than just IPAM.
Full support for multiple kube-ovn networks is more than just IPAM; now the attachment network can also be provided by OVN.

## How to use it

