clean up all white noise
Signed-off-by: Xiang Dai <long0dai@foxmail.com>
daixiang0 committed Jun 12, 2020
1 parent faa8d82 commit a1f53e6
Showing 12 changed files with 38 additions and 38 deletions.
16 changes: 8 additions & 8 deletions CHANGELOG.md
@@ -4,7 +4,7 @@
In this version, Kube-OVN supports vlan and dpdk type network interfaces for higher performance requirements.
Thanks to the Intel and Ruijie Networks folks who contributed these features.

Previously, to expose Pod IPs to external networks, admins had to manually add static routes.
Now admins can try the new BGP features to dynamically announce routes to the external network.

From this version, subnet CIDR can be changed after creation, and routes will be changed if gateway type is modified.
@@ -16,11 +16,11 @@ From this version, subnet CIDR can be changed after creation, and routes will be
* Subnet validator will check if subnet CIDR conflicts with svc or node CIDR
* Subnet CIDR can be changed after creation
* When the subnet gateway is changed, routes will be automatically updated


### Monitoring
* Check if dns and kubernetes svc exist
* Make grafana dashboard more sensitive to changes

### Misc
* Patch upstream ovn to reduce lflow count
@@ -55,8 +55,8 @@ This release fixes bugs found in v1.1.0.

## v1.1.0 -- 2020/04/07

In this version, we refactor IPAM to separate the IP allocation logic from OVN.
On top of that, we provide a general cluster-wide IPAM utility for other CNI plugins.
Now other CNI plugins like macvlan/host-device/vlan can take advantage of the subnet and static ip allocation functions in Kube-OVN.
Please check [this document](docs/multi-nic.md) to see how we combine Kube-OVN and Multus-CNI to provide multi-nic container network.

@@ -106,7 +106,7 @@ This release fixes bugs found in v1.0.0

## v1.0.0 -- 2020/02/27

Kube-OVN has evolved for a year since the first release, and the core function set is stable with lots of tests and community feedback.
It's time to run Kube-OVN in production!

### Performance
@@ -231,7 +231,7 @@ This release is mainly about controller performance, stability and bugfixes
### Misc
* Support hostport
* Update OVN/OVS to 2.11.3
* Update Go to 1.13

## v0.7.0 -- 2019/08/21

@@ -340,7 +340,7 @@ This is a bugfix version
* IP/Mac static allocation
* Namespace bind subnet
* Namespaces share subnet
* Connectivity between node and pod
### Issues
* Pod can not access external network
* No HA for control plane
2 changes: 1 addition & 1 deletion Makefile
@@ -44,7 +44,7 @@ push-release: release
docker push ${REGISTRY}/kube-ovn:${RELEASE_TAG}

lint:
@gofmt -d ${GOFILES_NOVENDOR}
@gofmt -l ${GOFILES_NOVENDOR} | read && echo "Code differs from gofmt's style" 1>&2 && exit 1 || true
@GOOS=linux go vet ./...
@GOOS=linux gosec -exclude=G204 ./...
6 changes: 3 additions & 3 deletions README.md
@@ -96,15 +96,15 @@ to some other options to give users a better understanding to assess which netwo

[ovn-kubernetes](https://github.com/ovn-org/ovn-kubernetes) is developed by the ovn community to integrate ovn with Kubernetes. As both projects use OVN/OVS as the data plane, they share some function sets and architecture. The main differences come from the network topology and gateway implementation.

ovn-kubernetes implements a subnet-per-node network topology.
That means each node has a fixed cidr range, and ip allocation is fulfilled by each node when the pod has been invoked by kubelet.

Kube-OVN implements a subnet-per-namespace network topology.
That means a cidr can spread across the entire cluster's nodes, and ip allocation is fulfilled by kube-ovn-controller in one central place. Kube-OVN can then apply lots of network configurations at the subnet level, like cidr, gw, exclude_ips, nat and so on. This topology also gives Kube-OVN more control over how ips are allocated; on top of it, Kube-OVN can allocate static ips for workloads.

We believe the subnet-per-namespace topology will give more flexibility to evolve the network.

On the gateway side, ovn-kubernetes uses the native ovn gateway concept to control the traffic. The native ovn gateway relies on a dedicated nic, or needs to transfer the nic ip to another device to bind the nic to the ovs bridge. This implementation can reach better performance; however, not all environments meet the network requirements, especially in the cloud.

Kube-OVN uses policy-route, ipset and iptables to implement the gateway functions entirely in software, which fits more infrastructures and leaves more room for new functions.

2 changes: 1 addition & 1 deletion dist/images/Dockerfile.dpdk1911
@@ -74,4 +74,4 @@ COPY start-ovs-dpdk.sh ovs-dpdk-healthcheck.sh uninstall.sh /kube-ovn/

RUN rpm -ivh --nodeps /rpms/*.rpm && \
rm -rf ${DPDK_DIR} /rpms && \
unset DPDK_DIR OVS_DIR OVN_DIR
2 changes: 1 addition & 1 deletion dist/images/install.sh
@@ -51,7 +51,7 @@ then
exit 1
fi
;;
-?*)
echo "Unknown argument $1"
exit 1
;;
4 changes: 2 additions & 2 deletions docs/bgp.md
@@ -1,6 +1,6 @@
# BGP support

Kube-OVN supports broadcasting pod ips to external networks via the BGP protocol.
To enable the BGP announce function, you need to install kube-ovn-speaker and annotate the pods that need to be exposed.

## Install kube-ovn-speaker
@@ -15,7 +15,7 @@ wget https://github.com/alauda/kube-ovn/blob/master/yamls/speaker.yaml

```bash
--neighbor-address=10.32.32.1 # The router address that needs to establish bgp peers
--neighbor-as=65030 # The AS of the router
--cluster-as=65000 # The AS of container network
```
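
To illustrate the annotation step mentioned above, here is a minimal sketch; the exact annotation key is defined by kube-ovn-speaker, and `ovn.kubernetes.io/bgp` is an assumption to verify against the version you run:

```bash
# Sketch: ask kube-ovn-speaker to announce this pod's ip over BGP.
# Assumption: the speaker watches the ovn.kubernetes.io/bgp annotation.
kubectl annotate pod my-workload ovn.kubernetes.io/bgp=true
```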

2 changes: 1 addition & 1 deletion docs/development.md
@@ -16,7 +16,7 @@ make release

## How to run e2e tests

Kube-OVN uses [KIND](https://kind.sigs.k8s.io/) to set up a local Kubernetes cluster
and [Ginkgo](https://onsi.github.io/ginkgo/) as the test framework to run the e2e tests.

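
The command block is collapsed in this view; as a hypothetical invocation, assuming the repository's Makefile exposes an `e2e` target:

```bash
# Hypothetical: run the Ginkgo e2e suite against a local KIND cluster.
# The real target name lives in the collapsed block of docs/development.md.
make e2e
```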
18 changes: 9 additions & 9 deletions docs/install.md
@@ -10,9 +10,9 @@ Kube-OVN includes two parts:
- Docker >= 1.12.6
- OS: CentOS 7.5/7.6/7.7, Ubuntu 16.04/18.04

*NOTE*
1. Ubuntu 16.04 users should build the related ovs-2.11.1 kernel module to replace the kernel built-in module
2. CentOS users should make sure the kernel version is greater than 3.10.0-898 to avoid a kernel conntrack bug, see [here](https://bugs.launchpad.net/neutron/+bug/1776778)

## To Install

@@ -44,7 +44,7 @@ Kube-OVN provides a one script install to easily install a high-available, produ

If you want to know the detailed steps to install Kube-OVN, please follow the steps below.

For Kubernetes versions before 1.17, please use the following command to add the node label (a hedged sketch appears after the steps below).

1. Add the following label to the Node which will host the OVN DB and the OVN Control Plane:

@@ -58,7 +58,7 @@
4. Install the Kube-OVN Controller and CNI plugins:

`kubectl apply -f https://raw.githubusercontent.com/alauda/kube-ovn/release/1.1/yamls/kube-ovn.yaml`

That's all! You can now create some pods and test connectivity.
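
A hedged sketch of the labeling step referenced above; the exact label key is in the collapsed step, and `kube-ovn/role=master` is an assumption taken from the upstream install doc:

```bash
# Assumption: the node hosting the OVN DB is selected via the
# kube-ovn/role=master label; verify the key against the install doc.
kubectl label node <node-name> kube-ovn/role=master
```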

For high-available ovn db, see [high available](high-available.md)
@@ -78,19 +78,19 @@ You can use `--default-cidr` flags below to config default Pod CIDR or create a
--default-cidr: Default CIDR for Namespaces with no logical switch annotation, default: 10.16.0.0/16
--default-gateway: Default gateway for default-cidr, default the first ip in default-cidr
--default-exclude-ips: Exclude ips in default switch, default equals to gateway address

# Node Switch
--node-switch: The name of node gateway switch which help node to access pod network, default: join
--node-switch-cidr: The cidr for node switch, default: 100.64.0.0/16
--node-switch-gateway: The gateway for node switch, default the first ip in node-switch-cidr

# LoadBalancer
--cluster-tcp-loadbalancer: The name for cluster tcp loadbalancer, default cluster-tcp-loadbalancer
--cluster-udp-loadbalancer: The name for cluster udp loadbalancer, default cluster-udp-loadbalancer

# Router
--cluster-router: The router name for cluster router, default: ovn-cluster

# Misc
--worker-num: The parallelism of each worker, default: 3
--kubeconfig: Path to kubeconfig file with authorization and master location information. If not set use the inCluster token
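
As a sketch of how these flags are applied (the deployment name and namespace are assumptions to verify against your install yaml):

```bash
# Sketch: override controller defaults by editing its container args.
# Assumptions: the deployment is named kube-ovn-controller and lives in
# the kube-ovn namespace (some versions use kube-system).
kubectl -n kube-ovn edit deployment kube-ovn-controller
# then add flags under spec.template.spec.containers[0].args, e.g.:
#   --default-cidr=10.16.0.0/16
#   --node-switch-cidr=100.64.0.0/16
```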
2 changes: 1 addition & 1 deletion docs/kubectl-plugin.md
@@ -89,7 +89,7 @@ listening on d7176fe7b4e0_h, link-type EN10MB (Ethernet), capture size 262144 by
06:52:37.619630 IP 10.16.0.4 > 100.64.0.3: ICMP echo reply, id 2, seq 2, length 64
06:52:38.619933 IP 100.64.0.3 > 10.16.0.4: ICMP echo request, id 2, seq 3, length 64
06:52:38.619973 IP 10.16.0.4 > 100.64.0.3: ICMP echo reply, id 2, seq 3, length 64
```

3. Show ovn flow from a pod to a destination
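
The command itself is collapsed in this view; a hypothetical invocation, assuming the plugin's `trace` subcommand takes a namespaced pod, a destination ip and a protocol:

```bash
# Hypothetical: trace the ovn flows an icmp packet would take from a
# pod to a destination; the exact syntax is in the collapsed section.
kubectl ko trace default/my-pod 10.16.0.10 icmp
```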

14 changes: 7 additions & 7 deletions docs/multi-nic.md
@@ -1,24 +1,24 @@
# IPAM for Multi Network Interface

From version 1.1, the IPAM part of Kube-OVN can provide subnet and static ip allocation functions to other CNI plugins, such as macvlan/vlan/host-device.

## How it works

By using [Intel Multus CNI](https://github.com/intel/multus-cni), we can attach multiple network interfaces to a Kubernetes Pod.
However, we still need cluster-wide IPAM utilities to manage IP addresses across the multiple networks and to better manage other CNI plugins.
In Kube-OVN we already have CRDs like Subnet and IP, plus functions for advanced IPAM like ip reservation, random allocation, static allocation and so on.
We extend the Subnet to network providers other than ovn, so other CNI plugins can make use of all the IPAM functions that already exist in Kube-OVN.

### Work Flow

The diagram below shows how Kube-OVN allocates addresses for other CNI plugins. The default ovn eth0 network still goes the same way as before.
The net1 network comes from the NetworkAttachmentDefinition defined by multus-cni.
When a new pod appears, kube-ovn-controller will read the pod annotations, find an available address, and write it back to the pod annotations.
Then on the CNI side, the attached CNI plugins can chain kube-ovn-ipam as the ipam plugin; it will read the pod annotations above and return the allocated address to the attached CNI plugins.
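
A minimal sketch of such a NetworkAttachmentDefinition with macvlan as the attached plugin; the ipam `type` of `kube-ovn`, the `server_socket` path and the `provider` format are assumptions to verify against the version you run:

```bash
# Sketch: chain kube-ovn-ipam under macvlan via Multus.
# Assumptions: the ipam plugin type is "kube-ovn" and it reaches the
# node daemon over /run/openvswitch/kube-ovn-daemon.sock.
cat <<'EOF' | kubectl apply -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan
  namespace: kube-system
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "macvlan",
    "master": "eth0",
    "mode": "bridge",
    "ipam": {
      "type": "kube-ovn",
      "server_socket": "/run/openvswitch/kube-ovn-daemon.sock",
      "provider": "macvlan.kube-system"
    }
  }'
EOF
```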

### Limitation
Kube-OVN currently uses the ovn network as the pod default network; other networks can only act as network attachments.
We will fully separate the IPAM functions to provide a more general IPAM later.

![topology](multi-nic.png "kube-ovn network topology")

6 changes: 3 additions & 3 deletions docs/subnet.md
@@ -29,7 +29,7 @@ spec:
```
## Basic Configuration

- `protocol`: The ip protocol, can be IPv4 or IPv6. *Note*: Though kube-ovn supports subnets of both protocols coexisting in a cluster, the kubernetes control plane currently supports only one protocol. So you will lose some abilities like probes and service discovery if you use a protocol other than the kubernetes control plane's.
- `default`: If set to true, all namespaces that are not bound to any subnet will use this subnet to allocate pod ips and share other network configurations. Note: Kube-OVN will create a default subnet and set this field to true. There can only be one default subnet in a cluster.
- `namespaces`: List of namespaces that bind to this subnet. If you want to bind a namespace to this subnet, edit and add the namespace name to this field.
- `cidrBlock`: The cidr of this subnet.
@@ -38,7 +38,7 @@

## Isolation

Besides standard NetworkPolicy, Kube-OVN also supports network isolation and access control at the Subnet level to simplify the use of access control.

*Note*: NetworkPolicy takes higher priority than subnet isolation rules.
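
A minimal sketch of an isolated subnet; the `private` and `allowSubnets` field names are assumptions taken from the upstream Subnet CRD, to verify against your version:

```bash
# Sketch: a private subnet that only accepts traffic from one extra cidr.
cat <<'EOF' | kubectl apply -f -
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: private-demo
spec:
  cidrBlock: 10.18.0.0/16
  private: true
  allowSubnets:
    - 10.16.0.0/16
EOF
```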

@@ -47,7 +47,7 @@ Besides standard NetworkPolicy, Kube-OVN also supports network isolation and ac

## Gateway

Gateway is used to enable external network connectivity for Pods within the OVN Virtual Network.

Kube-OVN supports two kinds of Gateways: the distributed Gateway and the centralized Gateway. Also, users can expose pod ips directly to the external network.
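
A minimal sketch of selecting a gateway on a Subnet; the `gatewayType`, `gatewayNode` and `natOutgoing` field names are assumptions taken from the upstream Subnet CRD, to verify against your version:

```bash
# Sketch: a subnet whose egress traffic leaves through one centralized
# gateway node and is NATed to that node's address.
cat <<'EOF' | kubectl apply -f -
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: gw-demo
spec:
  cidrBlock: 10.17.0.0/16
  gatewayType: centralized
  gatewayNode: node1
  natOutgoing: true
EOF
```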

2 changes: 1 addition & 1 deletion docs/vlan.md
@@ -3,7 +3,7 @@
**The Vlan support is still at an early stage; the usage might change later**

By default, Kube-OVN uses Geneve to encapsulate packets between hosts, which builds an overlay network on top of your infrastructure.
Kube-OVN also supports an underlay Vlan mode network for better performance and throughput.
In Vlan mode, packets will be sent directly to physical switches with vlan tags.

To enable Vlan mode, a dedicated network interface is required by the container network.
